Z. Wahrscheinlichkeitstheorie verw. Gebiete 55, 313-330 (1981)

Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete

© Springer-Verlag 1981

Markov Properties of a Markov Process

R.K. Getoor* and M.J. Sharpe*

Mathematics Department, University of California, San Diego, La Jolla, California 92093, USA

1. Introduction

Let $X = (\Omega, \mathcal{F}, \mathcal{F}_t, X_t, \theta_t, P^x)$ be a right (Markov) process with state space $E$. After replacing the original topology on $E$ with the associated Ray topology, this amounts to saying that $X$ is right continuous and strong Markov with semigroup $(P_t)$ mapping Borel functions to Borel functions. The reader more interested in specific processes may find it helpful to think always of $X$ as a Hunt process [4]. The problem addressed here is the characterization of those random times $R$ at which $X$ has some kind of Markov property of conditional independence of past and future, given the present. Because $X$ is assumed strong Markov, every stopping time $R$ has a Markov property with the $\sigma$-algebra $\mathcal{F}_{[R]}$ of the present being defined to be that generated by $X_R$. Other examples of random times having a similar Markov property include coterminal times (Pittenger and Shih [16]) and reconstructable co-optional times [11].

In this paper we examine a number of variants of the method used in [11], all of which may be thought of as giving Markov properties relative to $\sigma$-fields of the present which may be different from $\sigma(X_R)$. Among these will be the left germ field $\mathcal{F}_{[R-]}$ at $R$, which may be defined, up to null sets, as that generated by the random variables $f(X)_{R-}$ with $f$ $\alpha$-excessive, and the intermediate germ field $\mathcal{F}_{[R-,R]} = \mathcal{F}_{[R-]} \vee \mathcal{F}_{[R]}$. In a number of important special cases, $\mathcal{F}_{[R-]}$ reduces to just $\sigma(X_{R-})$, and in this case $\mathcal{F}_{[R-,R]} = \sigma(X_{R-}, X_R)$.

Section 2 contains the definitions and a discussion of the fundamental $\sigma$-algebras. To each of the $\sigma$-algebras of the present there corresponds a Markov property which, in the respective cases $\mathcal{F}_{[R]}$, $\mathcal{F}_{[R-]}$, $\mathcal{F}_{[R-,R]}$, is called the Markov property (MP), the left MP, and the intermediate MP. These properties and the relations between them occupy Sect. 3. Examples described there show that neither the MP nor the left MP implies the other, but the MP implies the intermediate MP and, with a mild additional hypothesis, the left MP implies the intermediate MP.

* Research supported in part by NSF Grant MCS79-23922.

0044-3719/81/0055/0313/$03.60


The principal technique used to obtain Markov properties is an absolute continuity theorem for additive functionals given in Sect. 4. The main results come in Sect. 5. Starting with the rather simple observation that every co-optional time has the left MP, we give a condition that a random time obtained by mixing optional and co-optional times have the intermediate MP. Part of the motivation here comes from the work of Jacobsen and Pitman [13] (see also Jacobsen [12]), where it was shown that in the setting of Markov chains with countable state spaces, a random time $R$ (integer valued, of course) has the MP if and only if there exist sets $F_n \in \mathcal{F}_n$ and $G \in \mathcal{F}$ such that $\{R=n\} = F_n \cap \theta_n^{-1}(G)$. Though we are not able to give such a specific characterization, our result (5.7) gives one reasonable continuous parameter analogue of times of the above form. The connection is discussed more fully in (5.13). One reason for discussing times which are co-optional up to a terminal time comes from the continuous parameter analogue [17] of another result of [13] characterizing times with the left MP which are killing times for a Markov process.
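The discrete-time form in the Jacobsen-Pitman result can be illustrated concretely. The sketch below (in Python; the toy chain, the set $A$, and all function names are illustrative assumptions, not from the paper) takes $R$ to be the last exit time from a set $A$ before absorption and checks pathwise that $\{R=n\} = F_n \cap \theta_n^{-1}(G)$ with $F_n = \{X_n \in A\}$ and $G = \{X_k \notin A \text{ for all } k \ge 1\}$:

```python
import random

def last_exit_time(path, A):
    """R = last n with path[n] in A; None if the path never visits A."""
    hits = [n for n, x in enumerate(path) if x in A]
    return hits[-1] if hits else None

def jp_form(path, n, A):
    """The event {R = n} written as F_n ∩ θ_n^{-1}(G):
    F_n = {X_n ∈ A} looks only at time n, while
    G = {X_k ∉ A for all k ≥ 1} is evaluated on the shifted path θ_n(path)."""
    shifted = path[n:]                      # θ_n applied to the path
    return shifted[0] in A and all(x not in A for x in shifted[1:])

# A toy absorbing chain on states {0, 1, 2} (2 absorbing), started at 0.
random.seed(0)
for _ in range(200):
    path, x = [0], 0
    while x != 2:
        x = random.choice([0, 1, 2])
        path.append(x)
    R = last_exit_time(path, A={0})
    # Pathwise identity: {R = n} = F_n ∩ θ_n^{-1}(G) for every n.
    assert all((R == n) == jp_form(path, n, {0}) for n in range(len(path)))
```

The point is that $F_n$ depends only on the path up to time $n$, while $G$ is applied to the shifted path; this is exactly the structure that (5.7) seeks to mimic in continuous time.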

Section 6 contains a discussion of Markov properties in which information generated by $R$ itself is adjoined to the $\sigma$-field of the present. The results here can be read off from those of Sect. 5 by applying the latter to an appropriate space-time process. By this artifice, results of Kallenberg [14] can be reduced to results on Markov properties of co-optional times.

We refer the reader to [8] for the basic properties of right processes; a much more extensive treatment is contained in the forthcoming book [18]. We shall give detailed references to this book, which contains all the results on additive functionals, homogeneous processes and their projections and dual projections which provide the essential techniques of this paper.

2. Past, Future, and Present

We fix once and for all a right process $X$ as described in the introduction. One may suppose that for each $\omega$, $t \mapsto X_t(\omega)$ has left limits on $]0, \infty[$ existing in the Ray-Knight compactification of $E$. As usual $X_{t-}$ denotes this left limit but, in general, when $X_{t-} \ne X_t$, $X_{t-}$ may not be in $E$. Following [18], let $\mathfrak{I}$ denote the ideal of (real valued) processes $Z = (Z_t(\omega))$ which are evanescent for each $P^\mu$. A process is measurable if it is measurable with respect to the $\sigma$-algebra on $\mathbb{R}^+ \times \Omega$ generated by $\mathfrak{I}$ and $\mathcal{B}^+ \times \mathcal{F}$. Here $\mathbb{R}^+ = [0, \infty[$ and $\mathcal{B}^+$ is the $\sigma$-algebra of Borel subsets of $\mathbb{R}^+$. The optional $\sigma$-algebra $\mathcal{O}$ on $\mathbb{R}^+ \times \Omega$ is defined to be the $\sigma$-algebra generated by $\mathfrak{I}$ and the class of processes adapted to $(\mathcal{F}_t)$ having paths which are right continuous with left limits. The predictable $\sigma$-algebra $\mathcal{P}$ on $\mathbb{R}^+ \times \Omega$ is generated by $\mathfrak{I}$ and the left continuous processes adapted to $(\mathcal{F}_t)$. See [18].

A random time $R$ is an $\mathcal{F}$ measurable random variable with values in $[0, \infty]$. It is known (see (3.10) of [18]) that there exists $R^* : \Omega \to [0, \infty]$ which is $\mathcal{F}^*$ measurable such that $R = R^*$ almost surely (that is, almost surely $P^\mu$ for each $\mu$). Here $\mathcal{F}^*$ is the $\sigma$-algebra of universally measurable sets over $(\Omega, \mathcal{F}^0)$. Thus in discussing Markov properties there is no loss of generality in assuming that $R \in \mathcal{F}^*$, and we make this assumption in the remainder of this paper.


In defining a $\sigma$-algebra $\mathcal{G}$ it is often convenient to describe the class, denoted by $\mathcal{G}$ again, of functions which are $\mathcal{G}$ measurable. Bearing this in mind, we define the $\sigma$-algebras $\mathcal{F}_{\le R}$ on $\{R<\infty\}$ and $\mathcal{F}_{<R}$ on $\{0<R<\infty\}$ associated with a random time $R$ as follows:

(2.1) $\mathcal{F}_{\le R} = \{Z_R 1_{\{R<\infty\}} : Z \in \mathcal{O}\}$,

(2.2) $\mathcal{F}_{<R} = \{Z_R 1_{\{0<R<\infty\}} : Z \in \mathcal{P}\}$.

Since $\mathcal{P} \subset \mathcal{O}$, $\mathcal{F}_{<R}$ is contained in the trace of $\mathcal{F}_{\le R}$ on $\{0<R<\infty\}$. Evidently the restriction of $R$ to $\{0<R<\infty\}$ is $\mathcal{F}_{<R}$ measurable and the restriction of $R$ to $\{R<\infty\}$ is $\mathcal{F}_{\le R}$ measurable. In the future we shall just write $\mathcal{F}_{<R} \subset \mathcal{F}_{\le R}$ or $R \in \mathcal{F}_{<R}$ and leave it to the reader to supply the necessary but obvious qualifying phrases. For example, $\mathcal{F}_{<R} \subset \mathcal{F}_{\le R} \subset \mathcal{F}$. Moreover, $X_R \in \mathcal{F}_{\le R}$ and $X_{R-} \in \mathcal{F}_{<R}$ are meaningful and true statements. Of course, $X_R$ is defined only on $\{R<\infty\}$ and $X_{R-}$ only on $\{0<R<\infty\}$. One should think of $\mathcal{F}_{\le R}$ (resp. $\mathcal{F}_{<R}$) as containing the information in the process $X$ on the interval $[0, R]$ (resp. $]0, R[$). If $T$ is a stopping time, then $\mathcal{F}_{\le T}$ is the trace of $\mathcal{F}_T$ on $\{T<\infty\}$ and $\mathcal{F}_{<T}$ the trace of $\mathcal{F}_{T-}$ on $\{0<T<\infty\}$, where $\mathcal{F}_T$ and $\mathcal{F}_{T-}$ are the familiar $\sigma$-algebras associated with a stopping time.

It should be observed that the definition above of $\mathcal{F}_{\le R}$, which is just a reformulation [11] of that of Pittenger and Shih [16], differs from another widely used $\sigma$-algebra for the past up to $R$, $\mathcal{F}_{\le R+}$, due to Chung and Doob [5]. $\mathcal{F}_{\le R+}$ is defined to be the $\sigma$-algebra of sets $\Lambda \in \mathcal{F}$ such that for every $t > 0$ there exists $\Lambda_t \in \mathcal{F}_t$ with $\Lambda \cap \{R < t\} = \Lambda_t \cap \{R < t\}$. It is shown in [15] that if $R$ is honest (i.e., $R$ is $\mathcal{F}_{\le R+}$ measurable), then

$\mathcal{F}_{\le R+} = \{Y_R 1_{\{R<\infty\}} : Y \text{ progressive over } (\mathcal{F}_t)\}$.

It is easy to see ([16], for example) that if $R$ is honest then $\mathcal{F}_{\le R+} = \bigcap_{t>0} \mathcal{F}_{\le R+t}$, whence the notation.

In order to define the future fields associated with a random time, we first introduce two additional $\sigma$-algebras on $\mathbb{R}^+ \times \Omega$. A process $Z$ defined on $\mathbb{R}^+ \times \Omega$ is called perfectly homogeneous on $\mathbb{R}^+$ (resp. on $\mathbb{R}^{++} = ]0, \infty[$) if there exists a set $\Omega_0 \in \mathcal{F}$ with $P^\mu(\Omega_0) = 0$ for all $\mu$ such that for each $\omega \in \Omega - \Omega_0$, $Z_t(\theta_s\omega) = Z_{t+s}(\omega)$ for all $t \ge 0$, $s \ge 0$ (resp. all $t > 0$, $s \ge 0$). (A set $\Lambda \in \mathcal{F}$ with $P^\mu(\Lambda) = 0$ for all $\mu$ will be called a null set, and the class of all null sets will be denoted by $\mathfrak{N}$.) Following Azéma [1, 2] (see also [18]) we let $\mathfrak{H}^d$ denote the $\sigma$-algebra on $\mathbb{R}^+ \times \Omega$ generated by $\mathfrak{I}$ and all processes which are measurable, perfectly homogeneous on $\mathbb{R}^+$, and right continuous with left limits. Similarly, $\mathfrak{H}^g$ denotes the $\sigma$-algebra on $\mathbb{R}^+ \times \Omega$ generated by $\mathfrak{I}$ and all measurable processes which are perfectly homogeneous on $\mathbb{R}^{++}$ and are left continuous with right limits. One may show that $\mathfrak{H}^d \subset \mathfrak{H}^g$ [18]. Since $1_{]0,\infty[} \in \mathfrak{H}^g$, it follows that $Z \in \mathfrak{H}^g$ if and only if $Z 1_{]0,\infty[} \in \mathfrak{H}^g$ and $Z_0 \in \mathcal{F}$.

We next define $\sigma$-algebras $\mathcal{F}_{\ge R}$ on $\{R<\infty\}$ and $\mathcal{F}_{\ge R-}$ on $\{0<R<\infty\}$ by

(2.3) $\mathcal{F}_{\ge R} = \{Z_R 1_{\{R<\infty\}} : Z \in \mathfrak{H}^d\}$,

(2.4) $\mathcal{F}_{\ge R-} = \{Z_R 1_{\{0<R<\infty\}} : Z \in \mathfrak{H}^g\}$.


Since $\mathfrak{H}^d \subset \mathfrak{H}^g$, $\mathcal{F}_{\ge R} \subset \mathcal{F}_{\ge R-} \subset \mathcal{F}$. Note $X_R \in \mathcal{F}_{\ge R}$ and $X_{R-} \in \mathcal{F}_{\ge R-}$, but that, in general, $R$ is not $\mathcal{F}_{\ge R-}$ measurable.

In an earlier paper [11] we introduced the field $\mathcal{G}_R$ defined on $\{R<\infty\}$ by

(2.5) $\mathcal{G}_R = \{F \circ \theta_R 1_{\{R<\infty\}} : F \in \mathcal{F}^*\}$,

and used it as the field of the future at $R$. The following argument shows that $\mathcal{F}_{\ge R}$ and $\mathcal{G}_R$ differ only by $P^\mu$ null sets, for any initial law $\mu$. To begin with, let $\mu$ be an initial law and let $Z \in b\mathfrak{H}^d$. It was shown in [18] that there exists $F \in b\mathcal{F}^\mu$ such that $Z_t$ and $F \circ \theta_t$ are $P^\mu$-indistinguishable, and so $P^\mu\{Z_R \ne F \circ \theta_R;\ R<\infty\} = 0$. On the other hand, if $F \in b\mathcal{F}^*$, let $Q$ be the measure defined on $(\Omega, \mathcal{F}^*)$ by $Q(G) = E^\mu\{G \circ \theta_R;\ R<\infty\}$. Choose $F^0 \in \mathcal{F}^0$ with $Q(F \ne F^0) = 0$, so that $E^\mu\{F \circ \theta_R \ne F^0 \circ \theta_R;\ R<\infty\} = 0$. Finally, note that $t \mapsto F^0 \circ \theta_t$ defines a process in $\mathfrak{H}^d$.

If $F$ is only $\mathcal{F}$ measurable, then $F \circ \theta_R 1_{\{R<\infty\}}$ is not $\mathcal{F}$ measurable in general. This is the reason for using $\mathcal{F}^*$ in (2.5). For technical reasons and symmetry considerations we prefer to define $\mathcal{F}_{\ge R}$ as in (2.3).

Having disposed of the past and the future, we now turn to the present. We define $\mathcal{F}_{[R]}$ on $\{R<\infty\}$ by

(2.6) $\mathcal{F}_{[R]} = \{Z_R 1_{\{R<\infty\}} : Z \in \mathfrak{H}^d \cap \mathcal{O}\}$,

and $\mathcal{F}_{[R-]}$ and $\mathcal{F}_{[R-,R]}$ on $\{0<R<\infty\}$ by

(2.7) $\mathcal{F}_{[R-]} = \{Z_R 1_{\{0<R<\infty\}} : Z \in \mathfrak{H}^g \cap \mathcal{P}\}$,

(2.8) $\mathcal{F}_{[R-,R]} = \{Z_R 1_{\{0<R<\infty\}} : Z \in \mathfrak{H}^g \cap \mathcal{O}\}$.

The following inclusions are evident from the definitions:

$\mathcal{F}_{[R]} \subset \mathcal{F}_{\le R} \cap \mathcal{F}_{\ge R}, \qquad \mathcal{F}_{[R-]} \subset \mathcal{F}_{<R} \cap \mathcal{F}_{\ge R-}, \qquad \mathcal{F}_{[R-,R]} \subset \mathcal{F}_{\le R} \cap \mathcal{F}_{\ge R-}.$

It is known that $\mathfrak{H}^d \cap \mathcal{O}$ is generated by $\mathfrak{I}$ and by the processes $f(X)$ where $f \in \mathscr{E}^\alpha$ for some $\alpha$. See [18]. Here $\mathscr{E}^\alpha$ is the class of $\alpha$-excessive functions. Consequently $\mathcal{F}_{[R]}$ is generated by $\mathfrak{N}$ and $f(X_R)$ with $f \in \mathscr{E}^\alpha$ for some $\alpha$. Thus, suggestively, one should think of $\mathcal{F}_{[R]}$ as generated by $X_R$. We call $\mathcal{F}_{[R]}$ the present $\sigma$-field at $R$. The field $\mathcal{F}_{[R-]}$ is called the left germ field at $R$. If $f$ is $\alpha$-excessive, let $f(X)_-$ be the process defined by $f(X_t)_- = \lim_{s \uparrow\uparrow t} f(X_s)\, 1_{]0,\infty[}(t)$. One knows that the trace of $\mathfrak{H}^g \cap \mathcal{P}$ on $\mathbb{R}^{++} \times \Omega$ is generated by $\mathfrak{I}$ (more exactly, the trace of $\mathfrak{I}$ on $\mathbb{R}^{++} \times \Omega$) and the processes $f(X)_-$ with $f \in \mathscr{E}^\alpha$ for some $\alpha$. See [18]. Hence $\mathcal{F}_{[R-]}$ is generated by $\mathfrak{N}$ and $f(X)_{R-}$ with $f \in \mathscr{E}^\alpha$ for some $\alpha$. The field $\mathcal{F}_{[R-,R]}$ is called the germ field at $R$. It is proved in [18] that $\mathfrak{H}^g \cap \mathcal{O} = (\mathfrak{H}^d \cap \mathcal{O}) \vee (\mathfrak{H}^g \cap \mathcal{P})$, and this gives

(2.9) $\mathcal{F}_{[R-,R]} = \mathcal{F}_{[R-]} \vee \mathcal{F}_{[R]}$.

Consequently $\mathcal{F}_{[R-,R]}$ is generated by $\mathfrak{N}$, $f(X_R)$, and $f(X)_{R-}$ with $f \in \mathscr{E}^\alpha$ for some $\alpha$. Let $\mathscr{E}^e = \sigma(\bigcup_\alpha \mathscr{E}^\alpha)$ be the $\sigma$-algebra generated by the $\alpha$-excessive functions


for all $\alpha > 0$. Then similarly the fact that $\mathcal{O} = \mathcal{P} \vee \sigma(f(X) : f \in \mathscr{E}^e)$, see [18], gives

(2.10) $\mathcal{F}_{\le R} = \mathcal{F}_{<R} \vee \mathcal{F}_{[R]}$,

while the fact that $\mathcal{P} \vee \mathfrak{H}^d$ is the $\sigma$-algebra of all measurable processes, [18], gives

(2.11) $\mathcal{F} = \mathcal{F}_{<R} \vee \mathcal{F}_{\ge R}$.

It seems reasonable to conjecture that $\mathcal{F}_{\ge R-} = \mathcal{F}_{\ge R} \vee \mathcal{F}_{[R-]}$, and this would follow if $\mathfrak{H}^g = \mathfrak{H}^d \vee (\mathfrak{H}^g \cap \mathcal{P})$. However, we have not been able to establish this last relationship.

If $X$ is a Hunt process satisfying the duality hypotheses of [4], then $\mathcal{F}_{[R-]} = \sigma(X_{R-})$ and $\mathcal{F}_{[R-,R]} = \sigma(X_{R-}, X_R)$, up to null sets. In particular this is the case if $X$ is a Lévy process in $\mathbb{R}^d$ for which $U^\alpha(0, dx)$ is absolutely continuous with respect to Lebesgue measure for some, and hence all, $\alpha > 0$.

3. Markov Properties

Roughly speaking, a "Markov property" at a time $R$ should be an assertion that the past and future are conditionally independent given the present. Since we have introduced two pasts ($\mathcal{F}_{\le R}, \mathcal{F}_{<R}$), two futures ($\mathcal{F}_{\ge R}, \mathcal{F}_{\ge R-}$), and three presents ($\mathcal{F}_{[R]}, \mathcal{F}_{[R-]}, \mathcal{F}_{[R-,R]}$), one might consider (at least) twelve Markov properties. We single out three of these possibilities for special consideration. The following definition is a little stronger than the above rough statement because it demands a lack of dependence on the initial law.

(3.1) Definition. Let $R$ be a random time. Then $R$ has the Markov property (resp. intermediate Markov property, left Markov property) if for every $H \in b\mathcal{F}_{\ge R}$ there exists $\tilde{H} \in b\mathcal{F}_{[R]}$ (resp. $b\mathcal{F}_{[R-,R]}$, $b\mathcal{F}_{[R-]}$) such that for every initial law $\mu$

(3.2) $E^\mu\{GH;\ 0<R<\infty\} = E^\mu\{G\tilde{H};\ 0<R<\infty\}$

for every $G \in b\mathcal{F}_{\le R}$ (resp. $b\mathcal{F}_{\le R}$, $b\mathcal{F}_{<R}$).

Recalling (2.5) and the ensuing discussion, a routine completion argument shows that it suffices to check the conditions in (3.1) for all positive $H \in b\mathcal{F}^0_{\ge R}$ (rather than $\mathcal{F}_{\ge R}$), where

$\mathcal{F}^0_{\ge R} = \{F \circ \theta_R 1_{\{R<\infty\}} : F \in \mathcal{F}^0\}$.

There is a slightly stronger form of Markov property at $R$ which is occasionally useful. We shall say that $R$ has the Markov property in kernel form if there exists a kernel $K_{[R]}$ from $(\Omega, \mathcal{F}_{[R]})$ to $(\Omega, \mathcal{F}^0)$ such that for every $H \in b\mathcal{F}^0$ and every initial law $\mu$,

(3.3) $E^\mu[H \circ \theta_R \mid \mathcal{F}_{\le R}] = K_{[R]}(H)$ on $\{0<R<\infty\}$.


Similar terminology will be used in connection with the intermediate and left Markov properties.

If the (separable) underlying space $(\Omega, \mathcal{F}^0)$ is an absolute Borel space (that is, isomorphic in the measure theoretic sense to a U-space ([18], A2) with its Borel $\sigma$-field), then a standard kernel argument ([9] or [18], A3) shows that the Markov property at $R$ implies the Markov property in kernel form. The above condition on $(\Omega, \mathcal{F}^0)$ is certainly satisfied if, say, the state space $E$ is Lusinian (or even co-Souslinian) and $\Omega$ is the space of all right continuous maps of $\mathbb{R}^+$ into $E$. See [7, IV.19] for these matters. Note that if $K_{[R]}$ satisfies (3.3), it is immediate that if $H \in b\mathcal{F}^*$ then $K_{[R]}(H) \in \hat{\mathcal{F}}_{\le R} = \bigcap_\mu \mathcal{F}^\mu_{\le R}$, where $\mathcal{F}^\mu_{\le R}$ is the $\sigma$-algebra generated by $\mathcal{F}_{\le R}$ and all $P^\mu$ null sets, and that for each $\mu$

(3.4) $E^\mu[H \circ \theta_R \mid \mathcal{F}^\mu_{\le R}] = K_{[R]}(H)$ on $\{0<R<\infty\}$.

In general, one has only $\mathcal{F}_{\le R} \subset \hat{\mathcal{F}}_{\le R}$, although these $\sigma$-algebras are identical if $R$ is a stopping time ([18, (6.25)]).

The following reformulation of definition (3.1) is technically very useful. Here $\varepsilon_a$ denotes unit mass at $a$.

(3.5) Theorem. Let $R$ be a random time. Then $R$ has the Markov property (resp. intermediate MP, left MP) if and only if for each $Z \in \mathfrak{H}^d$ with $0 \le Z \le 1$ there exists $\tilde{Z} \in \mathcal{O} \cap \mathfrak{H}^d$ (resp. $\mathcal{O} \cap \mathfrak{H}^g$, $\mathcal{P} \cap \mathfrak{H}^g$) such that the random measures $Z * \varepsilon_R 1_{\{0<R<\infty\}}$ and $\tilde{Z} * \varepsilon_R 1_{\{0<R<\infty\}}$ have the same dual optional (resp. dual optional, dual predictable) projections.

Proof. We shall give the argument only in the Markov property case; the other cases are handled analogously. Let $H \in b\mathcal{F}^0$ with $0 \le H \le 1$. Then $Z_t = H \circ \theta_t$ is in $\mathfrak{H}^d$. Let $\tilde{Z}$ correspond to $Z$ as in (3.5). Then for every $Y \in b\mathcal{O}$, using the properties of dual projections, one has

$E^\mu[Y_R\, H \circ \theta_R;\ 0<R<\infty] = E^\mu \int Y_t Z_t\, \varepsilon_R(dt)\, 1_{\{0<R<\infty\}} = E^\mu \int Y_t \tilde{Z}_t\, \varepsilon_R(dt)\, 1_{\{0<R<\infty\}} = E^\mu[Y_R \tilde{Z}_R;\ 0<R<\infty]$.

But $\tilde{Z}_R \in \mathcal{F}_{[R]}$, and so $R$ has the Markov property. Conversely, if $Z \in \mathfrak{H}^d$, $0 \le Z \le 1$, apply (3.1) to $H = Z_R \in b\mathcal{F}_{\ge R}$ to obtain $\tilde{H} \in \mathcal{F}_{[R]}$ for which (3.2) holds. But $\tilde{H} = \tilde{Z}_R$ for some $\tilde{Z} \in \mathcal{O} \cap \mathfrak{H}^d$, and reversing the above steps shows that $\tilde{Z}$ has the desired properties.

We pause to introduce some additional notation that will be used in the sequel. For $t \ge 0$ let $\Theta_t$ denote the shift, defined on processes $Z$ and on random measures $\kappa$ by (see [18])

$\Theta_t Z(s, \omega) = 1_{[t,\infty[}(s)\, Z(s-t, \theta_t\omega)$, $s \ge 0$;

$\Theta_t \kappa(\omega, B) = \kappa(\theta_t\omega, (B-t) \cap [0,\infty[)$, $B \in \mathcal{B}^+$.

It is easily checked that $\Theta_t(Z * \kappa) = (\Theta_t Z) * (\Theta_t \kappa)$. If $Z$ is a measurable process and $\kappa$ is a random measure (subject to appropriate finiteness conditions), we let $^oZ$ (resp. $^pZ$) denote the optional (resp. predictable) projection of $Z$, and $\kappa^o$ (resp.


$\kappa^p$) the dual optional (resp. dual predictable) projection of $\kappa$. See [18]. A basic property is that these projections commute with shifts; that is, if $T$ is a stopping time, then ([18])

$^o(\Theta_T Z) = \Theta_T(^oZ)$; $\quad 1_{]T,\infty[}\, {}^p(\Theta_T Z) = 1_{]T,\infty[}\, \Theta_T({}^pZ)$;

$(\Theta_T \kappa)^o = \Theta_T(\kappa^o)$; $\quad 1_{]T,\infty[} * (\Theta_T \kappa)^p = 1_{]T,\infty[} * \Theta_T(\kappa^p)$;

where these equalities hold up to evanescence.

Since $X$ is a strong Markov process, one might suppose that if $T$ is a stopping time and $R$ has a Markov property, then so does $T + R \circ \theta_T$. However, in our definitions the Markov properties say nothing about what happens on $\{R=0\}$ or, of course, on $\{R=\infty\}$.

(3.6) Corollary. Let $R$ be a random time and $T$ a stopping time. Let $S = T + R \circ \theta_T$ if $T < \infty$ and $R \circ \theta_T > 0$, and let $S = \infty$ otherwise. If $R$ has the Markov (resp. intermediate Markov, left Markov) property, then so does $S$.

Proof. First observe that

$\Theta_T(\varepsilon_R 1_{\{0<R<\infty\}}) = \varepsilon_{T+R\circ\theta_T} 1_{\{T<\infty;\ 0<R\circ\theta_T<\infty\}} = \varepsilon_S 1_{\{0<S<\infty\}} = 1_{]T,\infty[} * \varepsilon_S 1_{\{0<S<\infty\}}$

because $T < S$ when $S < \infty$. Let $Z \in \mathfrak{H}^g$ with $0 \le Z \le 1$. Then $1_{]T,\infty[}\, \Theta_T Z = 1_{]T,\infty[}\, Z$. Suppose $R$ has the left Markov property and, given $Z \in \mathfrak{H}^d$ with $0 \le Z \le 1$, let $\tilde{Z} \in \mathcal{P} \cap \mathfrak{H}^g$ be such that

$(Z * \varepsilon_R 1_{\{0<R<\infty\}})^p = (\tilde{Z} * \varepsilon_R 1_{\{0<R<\infty\}})^p = \tilde{Z} * (\varepsilon_R 1_{\{0<R<\infty\}})^p$.

Using the properties of dual predictable projections, one has

$(Z * \varepsilon_S 1_{\{0<S<\infty\}})^p = (1_{]T,\infty[}\, \Theta_T Z * \Theta_T(\varepsilon_R 1_{\{0<R<\infty\}}))^p = 1_{]T,\infty[} * \Theta_T(Z * \varepsilon_R 1_{\{0<R<\infty\}})^p$

$= 1_{]T,\infty[} * \Theta_T[\tilde{Z} * (\varepsilon_R 1_{\{0<R<\infty\}})^p] = 1_{]T,\infty[}\, \tilde{Z} * \Theta_T(\varepsilon_R 1_{\{0<R<\infty\}})^p$

$= 1_{]T,\infty[}\, \tilde{Z} * (\varepsilon_S 1_{\{0<S<\infty\}})^p = (\tilde{Z} * \varepsilon_S 1_{\{0<S<\infty\}})^p$.

Therefore $S$ has the left Markov property. The other cases are handled similarly.

Some random times $R$ have the Markov property on $\{R<\infty\}$. This is certainly the case if $R$ is a stopping time, in which case the Markov property holds in kernel form, or if $R$ is coterminal [11, 16]. In this case, (3.6) extends to give the Markov property of $T + R \circ \theta_T$ on $\{T + R \circ \theta_T < \infty\}$.

It is obvious that if $R$ has the MP, then $R$ has the intermediate MP. It is not quite so obvious that the left MP implies the intermediate MP. Indeed, our proof demands a slightly stronger hypothesis.

(3.7) Proposition. Let $R$ have the left MP in kernel form. Then $R$ has the intermediate MP.

Proof. Let $K$ denote the kernel $K_{[R-]}$ from $(\Omega, \mathcal{F}_{[R-]})$ to $(\Omega, \mathcal{F}^0)$ such that

$E^\mu\{H \circ \theta_R \mid \mathcal{F}_{<R}\} = K(H)$ on $\{0<R<\infty\}$.


Fix $H \in \mathcal{F}^0$ with $0 \le H \le 1$. Then $f \mapsto K[f(X_0)H]$ defines a kernel from $(\Omega, \mathcal{F}_{[R-]})$ to $(E, \mathcal{E})$ which is dominated by $f \mapsto K[f(X_0)]$. By Doob's lemma there exists $\psi \in \mathcal{F}_{[R-]} \times \mathcal{E}$ with $0 \le \psi \le 1$ such that for each $f \in b\mathcal{E}$

$K[f(X_0)H] = \int K(\cdot, d\omega')\, f(X_0(\omega'))\, \psi(\cdot, X_0(\omega'))$.

Now define $\tilde{H}(\omega) = \psi(\omega, X_R(\omega)) \in \mathcal{F}_{[R-,R]}$. A routine monotone class argument starting with products $\varphi(\omega, x) = G(\omega)g(x)$ yields, for all $\varphi \in b(\mathcal{F}_{[R-]} \times \mathcal{E})$ and initial laws $\mu$,

$E^\mu[\varphi(\cdot, X_R(\cdot)) \mid \mathcal{F}_{<R}] = \int K(\cdot, d\omega')\, \varphi(\cdot, X_0(\omega'))$

on $\{0<R<\infty\}$. Consequently

$E^\mu[f(X_R)\tilde{H} \mid \mathcal{F}_{<R}] = K[f(X_0)H]$ on $\{0<R<\infty\}$.

Therefore, for each $G \in b\mathcal{F}_{<R}$,

$E^\mu[G f(X_R)\, H \circ \theta_R;\ 0<R<\infty] = E^\mu[G\, K[H f(X_0)];\ 0<R<\infty] = E^\mu[G f(X_R)\tilde{H};\ 0<R<\infty]$.

But products of the form $G f(X_R)$ with $G \in b\mathcal{F}_{<R}$ and $f \in b\mathcal{E}$ generate $\mathcal{F}_{\le R}$ up to completion ([18]), and this proves that $R$ has the intermediate Markov property.

(3.8) Remark. In contrast with (3.7), routine conditional independence arguments show that if $\mathcal{F}_{\ge R}$ and $\mathcal{F}_{<R}$ are conditionally independent given $\mathcal{F}_{[R-]}$ relative to $P^\mu$, then $\mathcal{F}_{\ge R}$ and $\mathcal{F}_{\le R}$ are conditionally independent given $\mathcal{F}_{[R-,R]}$ relative to $P^\mu$. But because our definition of the intermediate MP demands a lack of dependence on the initial law, this does not quite give that the left MP implies the intermediate MP.

4. Some Results on Additive Functionals

In this section we collect some facts about additive functionals that will be used in the sequel to obtain sufficient conditions for certain random times to have a Markov property. The results of this section are either known or are easy consequences of known results. However, there does not seem to be a convenient reference for them. Actually we shall need results for additive functionals of $(X, T)$ where $T$ is an appropriate terminal time, and so we begin by defining such times.

We follow the terminology of [18], where a set $A \subset E$ is called nearly optional for $X$ if the process $t \mapsto 1_A(X_t)$ is optional relative to $(\mathcal{F}^\mu_t, P^\mu)$ for every initial law $\mu$.

(4.1) Definition. A right terminal time is a stopping time $T$ such that $\mathrm{reg}(T) = \{x : P^x\{T=0\} = 1\}$ is nearly optional for $X$ and, for every stopping time $S$, $T = S + T \circ \theta_S$ a.s. on $\{S < T\}$.


Every exact terminal time is a right terminal time. It is shown in [19] or [18, Chap. VII] that if $T$ is a right terminal time then $T$ is a.s. equal to the perfect terminal time $\bar{T} \wedge S$, where $\bar{T}$ is the perfect exact regularization of $T$ and $S = \inf\{t > 0 : X_t \in \mathrm{reg}(T)\}$ is the debut of $\mathrm{reg}(T)$. In addition, the process $1_{]0,\bar{T}[}\, 1_{\mathrm{reg}(T)}(X)$ is evanescent, so $T < \bar{T}$ if and only if $\bar{T} > 0$ and $X_0 \in \mathrm{reg}(T)$. Right terminal times are shown in [18] to have many of the desirable properties of exact terminal times. The above remarks show that right terminal times are perfectable, and killing a right process at a right terminal time gives again a right process, justifying the name.

(4.2) Definition. Let $T$ be a right terminal time. A raw additive functional (RAF) of $(X, T)$ is a right continuous increasing process $A = (A_t)_{t \ge 0}$ with $A_0 = 0$, $A_t \in \mathcal{F}$ for all $t \ge 0$, $A_t < \infty$ if $t < T$, $A_{T-} = \lim_{t \uparrow\uparrow T} A_t = \infty$ if $A_T = \infty$, and such that for each $s, t \ge 0$,

(4.3) $A_{t+s} = A_t + 1_{[0,T[}(t)\, A_s \circ \theta_t$ a.s.

If, in addition, $A$ is adapted to $(\mathcal{F}_t)$, then $A$ is an additive functional (AF) of $(X, T)$.

Remarks. Since $T$ may be perfected, the standard perfection arguments of Walsh ([19] or [18, Chap. VII]) allow one to perfect $A$. Hence one may assume that (4.3) holds identically. Clearly the right continuity of $A$ and (4.3) imply that $A$ is constant on $[T, \infty[$, although $A_{T-} < A_T$ is allowed. But $\Delta A_T < \infty$, since $A_{T-} = A_T$ if $A_T = \infty$. Therefore the process $\Delta A = (\Delta A_t)$ is well defined and finite on $]0, \infty[$. Of course, $A_\infty = \lim_{t \uparrow \infty} A_t$, and so $A_T$ is defined on $\{T = \infty\}$.
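In discrete time, the additivity relation (4.3) is easy to exhibit. A minimal sketch (with $T \equiv \infty$, so the indicator $1_{[0,T[}(t)$ is identically one; the occupation-time functional and all names are illustrative assumptions, not from the paper):

```python
def occ(path, A, t):
    """A_t = #{s < t : path[s] in A}: the occupation time of A,
    a discrete-time additive functional of the path."""
    return sum(1 for s in range(min(t, len(path))) if path[s] in A)

path = [0, 1, 0, 0, 2, 1, 0, 2]
A = {0}
# Additivity (4.3): A_{t+s} = A_t + A_s ∘ θ_t, where θ_t shifts the path.
for t in range(len(path)):
    for s in range(len(path) - t):
        assert occ(path, A, t + s) == occ(path, A, t) + occ(path[t:], A, s)
```

Here the shift $\theta_t$ is simply `path[t:]`, and the check confirms that counting visits over $[0, t+s[$ splits into visits over $[0, t[$ plus visits over $[0, s[$ along the shifted path.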

(4.4) Proposition. Let $A$ be an AF of $(X, T)$, with $T$ a right terminal time. Then there exists $Y \in \mathcal{O} \cap \mathfrak{H}^d$ such that $Y 1_{]]0,T[[}$ and $\Delta A$ are indistinguishable. If $A$ is predictable, $Y$ may be chosen in $\mathcal{P} \cap \mathfrak{H}^g$.

Proof. We may assume $A$ is perfect. Then the process $Z_t = \Delta A_t$ is optional and perfectly homogeneous on $]]0, T[[$; that is, $Z_{t+s} = Z_t \circ \theta_s$ for all $t > 0$ and $s < T$. Also $Z$ is the limit of the left continuous processes $Z^n_t = A_{(t+1/n)-} - A_{t-}$, where $\infty - \infty$ is defined to be $0$. One now invokes one of the homogeneous extension theorems of [18] to obtain $Y \in \mathcal{O} \cap \mathfrak{H}^d$ such that $Y 1_{]]0,T[[} = Z$. The predictable case is obtained by a similar argument.

We come now to the main result of this section, which extends results from [3] and [6]. See also [18]. Recall that if $A$ and $B$ are increasing processes, then one says that $B$ strongly dominates $A$ provided $B - A$ is increasing.

(4.5) Theorem. Let $T$ be a right terminal time and let $A$ and $B$ be AF's of $(X, T)$ with $B$ strongly dominating $A$. Then there exists $Z \in \mathcal{O} \cap \mathfrak{H}^g$ with $A = Z * B$. If $B$ is continuous, one may take $Z = f(X)$ with $f$ an optional function. If $A$ and $B$ are predictable, one may take $Z \in \mathcal{P} \cap \mathfrak{H}^g$.

Proof. The (by now) standard proof of Motoo's theorem applies to the present situation and yields the continuous case. See [3] or [18]. For the remaining assertions, decompose $A$ and $B$ into their continuous and discontinuous parts, and use (4.4) on the discontinuous parts.


Left additive functionals ([2] or [18]) satisfy a nice absolute continuity theorem which will, in Sect. 5, lead to Markov properties for certain times.

(4.6) Definition. Let $T$ be a right terminal time. A raw left additive functional (RLAF) of $(X, T)$ is a left continuous increasing process $A$ with $A_0 = 0$, but possibly $A_{0+} > 0$, $A_t \in \mathcal{F}$ for each $t \ge 0$, $A_t < \infty$ if $t < T$, such that for every stopping time $S$ and every $t \ge 0$,

(4.7) $A_{t+S} = A_S + 1_{[0,T[}(S)\, A_t \circ \theta_S$ a.s.

If, in addition, $A$ is adapted to $(\mathcal{F}_t)$, then $A$ is a left additive functional (LAF) of $(X, T)$.

If $A$ is a RLAF of $(X, T)$ then $t \mapsto A_t$ is constant on $[T, \infty[$ and hence, by left continuity, $\Delta A_T = 0$. In addition, for every stopping time $S$, $\Delta A_S = A_{S+} - A_S = A_{0+} \circ \theta_S$ a.s. on $\{S < T\}$. The following result is obtained by exactly the same method as for ordinary LAF's in [18].

(4.8) Theorem. Let $T$ be a right terminal time and let $A, B$ be LAF's of $(X, T)$ with $B$ strongly dominating $A$. Then there exists $f \in \mathcal{E}^*$ with $0 \le f \le 1$ such that $A = f(X) * B$. If $x \mapsto E^x A_{0+}$ and $x \mapsto E^x B_{0+}$ are both in $\mathscr{E}^e$, $f$ may be chosen in $\mathscr{E}^e$.

Having only $f \in \mathcal{E}^*$ rather than $f \in \mathscr{E}^e$ will cause minor difficulties with measurability of $f(X_R)$ for a random time $R$. Though $f(X_R)$ is not in $\mathcal{F}_{[R]}$ in general, it is in the universal completion of $\mathcal{F}_{[R]}$. We shall ignore this familiar problem.

We point out one special case of the absolute continuity theorem (4.5) which is useful in examples. An AF $A$ of $(X, T)$ is called quasi-left-continuous (qlc) provided $\Delta A$ is carried by $J = \{(t, \omega) : X_t(\omega) \ne X_{t-}(\omega);\ X_{t-}(\omega) \in E\}$, where $X_{t-}$ means the left limit of $X$ taken in the Ray topology (more precisely, in the Ray topology of $(X, T)$). However, it is known that $J$ does not depend (up to evanescence) on the topology on $E$ used to compute the left limit, as long as $X$ is a right process in that topology. Therefore, for a right process, qlc may be defined without reference to the particular Ray topology involved.

It is known ([3] or [18]) that if $A$ is qlc then there exists a function $f \in \mathcal{E} \times \mathcal{E}$ vanishing on the diagonal of $E \times E$ such that $\Delta A$ is indistinguishable from $1_{]]0,T[[}\, f(X_-, X)$, where $X_- = (X_{t-})_{t > 0}$.

One then has the following obvious special case of (4.5).

(4.9) Proposition. With the hypotheses of (4.5) in force, if $B$ is qlc then there exists $f$ as above with $A = f(X_-, X) * B$.

5. Some Sufficient Conditions for Markov Properties

Because we are assuming $X$ to be a right process, every stopping time has the Markov property. The following example shows that stopping times need not have the left Markov property. Let $X$ be a regular step process with four states $a, b, c, d$, and assume that $X$ is irreducible and recurrent. Because $\mathcal{F}_{[R-]}$ is generated by $f(X)_{R-}$ with $f \in \mathscr{E}^\alpha$ for some $\alpha > 0$ and excessive functions are


continuous in this case, $\mathcal{F}_{[R-]}$ is generated by $X_{R-}$. Now let $T$ be the time of the first jump from $a$ to $b$ following a visit to $c$, and let $S$ be the time of the first jump from $a$ to $d$. Then $R = S \wedge T$ cannot have the left Markov property, for $X_{R-} = a$ implies that $\mathcal{F}_{[R-]}$ is trivial, and the left Markov property at $R$ would imply independence of $\mathcal{F}_{<R}$ and $\mathcal{F}_{\ge R}$. But $X_R$ is obviously not independent of $\{X \text{ hits } c \text{ before } R\} \in \mathcal{F}_{<R}$.
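The dependence asserted at the end of this example can be checked by simulation. The following sketch (a discrete-time surrogate for the regular step process; the uniform jump law and all names are assumptions made for illustration) verifies that on every path that never visits $c$ before $R$ one has $X_R = d$, while paths that do visit $c$ can end with $X_R = b$, so $X_R$ is not independent of $\{X \text{ hits } c \text{ before } R\}$:

```python
import random
random.seed(1)

STATES = "abcd"

def step(x):
    # Surrogate for the step process: jump to a uniformly chosen other state.
    return random.choice([s for s in STATES if s != x])

def run_until_R(start="a"):
    """Run until R = S ∧ T fires; return (X_R, whether c was visited before R).
    S = first jump a→d; T = first jump a→b occurring after a visit to c."""
    x, seen_c = start, False
    while True:
        y = step(x)
        if x == "a" and y == "d":
            return y, seen_c          # S fired: X_R = d
        if x == "a" and y == "b" and seen_c:
            return y, seen_c          # T fired: X_R = b
        if y == "c":
            seen_c = True
        x = y

results = [run_until_R() for _ in range(2000)]
# X_{R-} = a on every path, so σ(X_{R-}) is trivial; yet X_R clearly
# depends on the past event {X hits c before R}:
assert all(xr == "d" for xr, seen in results if not seen)  # no visit to c forces X_R = d
assert any(xr == "b" for xr, seen in results if seen)      # visiting c allows X_R = b
```

The first assertion is a logical consequence of the definition of $T$ (it cannot fire before a visit to $c$), so the conditional law of $X_R$ given the past is not constant, which is exactly the failure of the left MP described above.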

Some stopping times do have the left Markov property. For example, if $R$ is a predictable stopping time, then given $H \in b\mathcal{F}^0$ set $h(x) = E^x(H)$ and observe that on $\{R < \infty\}$

$E^\mu\{H \circ \theta_R \mid \mathcal{F}_{<R}\} = E^\mu\{h(X_R) \mid \mathcal{F}_{<R}\} = \bar{P}_0 h(X_{R-})$,

where $(\bar{P}_t)$ is the Ray semigroup associated with $(P_t)$. See [8] or [18]. It will follow from later results in this section that if $R$ is a right terminal time (4.1), then $R$ has the left Markov property. There are analogues of these results for co-optional and coterminal times, which are, intuitively, the time reverses (from the process lifetime) of stopping times and terminal times, respectively. It was shown in [11] that coterminal times and reconstructable (in essence, co-predictable) co-optional times have the Markov property. It will be shown below that all co-optional times have the left Markov property. An example was given in [11] of a co-optional time not possessing the Markov property.

(5.1) Definition. Let $T$ be a right terminal time. A random time $R$ is co-optional for $(X, T)$ if $R \le T$ and if, for every $t > 0$, $R \circ \theta_t = (R - t)^+$ a.s. on $\{t < T\}$.
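A discrete-time sketch of the defining identity $R \circ \theta_t = (R - t)^+$: take $R$ to be the last exit time from a set $A$ (with the convention $\sup\emptyset = 0$) and $T \equiv \infty$, so the restriction to $\{t < T\}$ is vacuous. The path, the set $A$, and the helper names are illustrative assumptions, not from the paper:

```python
def last_exit(path, A):
    """R = sup{n : path[n] in A}, with the convention sup ∅ = 0."""
    hits = [n for n, x in enumerate(path) if x in A]
    return hits[-1] if hits else 0

path = [0, 1, 0, 1, 2, 2, 2]
A = {0}
R = last_exit(path, A)
# Co-optionality: R ∘ θ_t = (R - t)^+ for every shift t, where θ_t
# shifts the path and x^+ = max(x, 0).
for t in range(len(path)):
    assert last_exit(path[t:], A) == max(R - t, 0)
```

The check makes transparent why last exit times are the basic examples of co-optional times: shifting the path by $t$ simply moves the last visit to $A$ back by $t$, or kills it entirely once $t$ passes $R$.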

It is easy to see that if $R$ is co-optional for $(X, T)$ then $A_t = 1_{[R,\infty[}(t)\, 1_{\{0<R<\infty\}}$ is a RAF of $(X, T)$.

(5.2) Theorem. Let $R$ be co-optional for $(X, T)$. Then $R$ has the left Markov property and the intermediate Markov property.

Proof. Let $A_t = 1_{[R,\infty[}(t)\, 1_{\{0<R<\infty\}}$, a RAF of $(X, T)$. Given $Z \in \mathfrak{H}^g$ with $0 \le Z \le 1$, $Z * A$ is also a RAF of $(X, T)$. Because dual projections commute with shifts, $(Z * A)^p$ is a predictable AF of $(X, T)$. See [18]. But $(Z * A)^p$ is strongly dominated by $A^p$; so by (4.5) there exists $Y \in \mathcal{P} \cap \mathfrak{H}^g$ with $0 \le Y \le 1$ and $(Z * A)^p = Y * A^p = (Y * A)^p$. It now follows from (3.5) that $R$ has the left Markov property. The intermediate case is only trivially different.

(5.3) Remarks. In the proof of (5.2) we used $Z \in \mathfrak{H}^g$, while (3.5) demands only $Z \in \mathfrak{H}^d$. Thus we have actually proved that $\mathcal{F}_{<R}$ (resp. $\mathcal{F}_{\le R}$) and $\mathcal{F}_{\ge R-}$ are conditionally independent given $\mathcal{F}_{[R-]}$ (resp. $\mathcal{F}_{[R-,R]}$) in the left (resp. intermediate) case. This phenomenon will occur again. Since $T$ is co-optional for $(X, T)$, all right terminal times have both the left and the intermediate MP. A more precise result was proven in [20] for Hunt processes.

We now list some consequences of (3.5) and (4.5). The required proofs are completely analogous to that of (5.2) and are left to the interested reader. Let R be co-optional for (X, T), where T is a right terminal time, and let A^R_t = 1_{[R,∞[}(t) 1_{0<R<∞}, so that A^R is an AF of (X, T). Note that if S is a stopping time, then

E^μ[ΔA^R_S; 0 < S < ∞] = P^μ[0 < R = S < ∞].
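The displayed identity is immediate from the form of A^R; as a one-line sketch (our gloss, not spelled out in the text):

```latex
% A^R_t = 1_{[R,\infty[}(t)\,1_{\{0<R<\infty\}} has a single unit jump, at t = R.
% Hence for any stopping time S,
\[ \Delta A^R_S = 1_{\{S = R,\;0 < R < \infty\}}, \]
% and integrating against P^\mu over \{0 < S < \infty\} gives
\[ E^\mu\bigl[\Delta A^R_S;\ 0 < S < \infty\bigr] = P^\mu\bigl[0 < R = S < \infty\bigr]. \]
```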

(5.4) If A^R is continuous, then R has the Markov property. Moreover, A^R is continuous if and only if P^μ[0 < R = S < ∞] = 0 for all μ and all stopping times S. If X is such that all adapted additive functionals are continuous (e.g. Brownian motion), then all co-optional times for X have the Markov property.

(5.5) If A^R is qlc (quasi-left-continuous), then one may replace ℱ_{[R−,R]} by σ(X_{R−}, X_R) in the intermediate Markov property. Moreover, A^R is qlc if and only if (A^R)^p is continuous, which in turn holds if and only if P^μ[0 < R = S < ∞] = 0 for all predictable stopping times S.

(5.6) If X is a Hunt process satisfying duality hypotheses (e.g. a Lévy process with absolutely continuous resolvent), then, as mentioned at the end of §2, ℱ_{[R−]} = σ(X_{R−}) and ℱ_{[R−,R]} = σ(X_{R−}, X_R). In this case the left and intermediate Markov properties at R take on an especially nice form.

The next result gives an example of a mixture of optional and co-optional times that has the intermediate Markov property.

(5.7) Theorem. Let R be a random time and M ∈ 𝔇 ∩ 𝕊^g. Define R_1 and R_2 by [[R_1]] = [[R]] ∩ M and [[R_2]] = [[R]] ∩ M^c. Suppose that there exist Z^1 ∈ 𝕊^g_+, Z^2 ∈ 𝕊^g_+, A^1 an optional increasing process, and A^2 a RAF of (X, T) for some regular terminal time T, such that

ε_{R_1} 1_{0<R_1<∞} = Z^1 * A^1,

ε_{R_2} 1_{0<R_2<∞} = Z^2 * A^2.

Then R has the intermediate Markov property.

Proof. Let Y ∈ 𝕊^g with 0 ≤ Y ≤ 1. Then

Y * ε_R 1_{0<R<∞} = Y * ε_{R_1} 1_{0<R_1<∞} + Y * ε_{R_2} 1_{0<R_2<∞} = Y * (Z^1 * A^1) + Y * (Z^2 * A^2).

Therefore, taking dual optional projections,

(Y * ε_R 1_{0<R<∞})^o = °(YZ^1) * A^1 + Z^2 * (Y * A^2)^o.

Because A^2 is a RAF of (X, T), Y * A^2 is also a RAF of (X, T), and so (Y * A^2)^o is an AF of (X, T); clearly (Y * A^2)^o is strongly dominated by (A^2)^o. Hence there exists, by (4.5), Ŷ ∈ 𝔇 ∩ 𝕊^g with (Y * A^2)^o = Ŷ * (A^2)^o. We now have

(Y * ε_R 1_{0<R<∞})^o = °(YZ^1) * A^1 + Z^2 Ŷ * (A^2)^o

= °(YZ^1) * A^1 + ((Z^2 Ŷ) * A^2)^o.


But, with 0/0 defined to be 0,

°(YZ^1) * A^1 = (°(YZ^1)/°(Z^1)) * (°(Z^1) * A^1)
             = (°(YZ^1)/°(Z^1)) * (Z^1 * A^1)^o
             = (°(YZ^1)/°(Z^1)) * (1_M ε_R 1_{0<R<∞})^o
             = (°(YZ^1)/°(Z^1)) 1_M * (ε_R 1_{0<R<∞})^o

and

[(Z^2 Ŷ) * A^2]^o = Ŷ * (1_{M^c} ε_R 1_{0<R<∞})^o
                  = Ŷ 1_{M^c} * (ε_R 1_{0<R<∞})^o.

It follows that

(Y * ε_R 1_{0<R<∞})^o = [(°(YZ^1)/°(Z^1)) 1_M + Ŷ 1_{M^c}] * (ε_R 1_{0<R<∞})^o.

Because (°(YZ^1)/°(Z^1)) 1_M + Ŷ 1_{M^c} ∈ 𝔇 ∩ 𝕊^g, (3.5) shows that R has the intermediate Markov property.

(5.8) Remarks. (i) Since Y ∈ 𝕊^g in the preceding proof, we have shown, in fact, that ℱ_{≤R} and ℱ_{≥R−} are conditionally independent given ℱ_{[R−,R]}.

(ii) If in the statement of (5.7) one replaces 𝕊^g by 𝕊^e and assumes that A^2 is a left RAF of (X, T), then the argument shows that R has the Markov property. (This time one must suppose Y ∈ 𝔇 in the argument.) Similarly, if one replaces 𝔇 by the predictable σ-field and assumes that A^1 is predictable, then R has the left Markov property.

(iii) If R is either a stopping time or co-optional, then R satisfies the hypotheses of (5.7).

(iv) If R satisfies the hypotheses of (5.7) and if W ⊂ IR^+ × Ω is either in 𝔇 or 𝕊^g, then it is easily checked that the random time R' defined by [[R']] = [[R]] ∩ W also satisfies (5.7). In particular, if R is co-optional (resp. a stopping time) and W is optional (resp. in 𝕊^g), then R' has the intermediate Markov property.

We showed in [11] that reconstructable co-optional times (see (5.12) for the definition) have the Markov property. Here we give a slight extension of this result.

(5.9) Corollary. Let R be co-optional for (X, T) where T is a right terminal time. Suppose that there exists a random variable Y with 0 ≤ Y ≤ 1 such that for each t > 0,

(5.10) 1_{R=t} = Y ∘ θ_t 1_{t<T}.

Then R has the Markov property.


Proof. Since R ≤ T, it follows from (5.10) that R < T if 0 < T < ∞. Therefore A_t = 1_{0<R<∞} 1_{[R,∞[}(t) = 1_{0<R<∞} 1_{[R,T[}(t), and A is a RAF of (X, T). Define

B_t = Σ_{0≤s<t∧T} Y ∘ θ_s = Y 1_{T>0} + Σ_{0<s<t∧T} Y ∘ θ_s

    = Y 1_{T>0} + 1_{]R,∞[}(t) 1_{0<R<T}.

Then B_t ∈ ℱ* for each t > 0, B is a raw LAF of (X, T), and as random measures A = 1_{]0,∞[} * B. Applying (5.8)(ii) (take M empty so that M^c = [[0, ∞[[), one sees that R has the Markov property.

(5.11) Remark. If R is co-optional for (X, T) and [[R]] ∩ ]]0, ∞[[ is contained in the trace of 𝕊^d on ]]0, T[[, then (5.10) is satisfied.

(5.12) Remark. Recall from [11] that a co-optional time L is reconstructable provided there exists a decreasing sequence (L_n) of co-optional times such that almost surely L_n ↓ L and L_n > L on {0 < L < ∞}. If 0 < t < ∞ and ω ∈ Ω, then L(ω) = t if and only if t < L_n(ω) for all n and L(ω) ≤ t, or equivalently, L_n(θ_t ω) > 0 for all n and L(θ_t ω) = 0. Therefore, letting Λ = {L_n > 0 for all n, L = 0} and T = ∞, it is immediate that (5.10) holds with Y = 1_Λ. Hence L has the Markov property. This is an alternate proof of the main result of [11].

(5.13) Remark. Even if a co-optional time has the Markov property, it need not have the Markov property relative to the Chung-Doob σ-field ℱ_{≤R+} described in §2. For example, if X is linear Brownian motion killed at ±1 and if R = sup{t < ζ: X_t = 0}, then R is coterminal and consequently ([11], [16]) has the Markov property. However, ℱ_{≤R+} = ⋂_{t>0} ℱ_{≤(R+t)} contains the set {X_{ζ−} = 1, R < ζ}, so a Markov property cannot hold relative to ℱ_{≤R+}.

(5.14) Remark. As mentioned in the introduction, times of the form described in (5.7) may be thought of as a continuous parameter generalization of discrete parameter times R such that {R = n} = F_n ∩ θ_n^{-1}(G) with F_n ∈ ℱ_n and G ∈ ℱ. Indeed, one may express this property of R in the form ε_R 1_{R<∞} = Z * A, where A = Σ_{k=0}^∞ 1_{F_k} ε_k is optional and Z_n = 1_G ∘ θ_n is homogeneous. The additional complication in the continuous parameter case comes from having random times whose graphs intersect the graph of an arbitrary stopping time in only an evanescent set.
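In the discrete parameter setting just described, the factorization ε_R 1_{R<∞} = Z * A can be verified in one line (a sketch of ours):

```latex
% With A = \sum_{k \ge 0} 1_{F_k}\,\varepsilon_k (optional) and
% Z_n = 1_G \circ \theta_n (homogeneous), the measure Z * A charges \{n\} with
\begin{align*}
(Z * A)(\{n\}) = Z_n\,\Delta A_n
  &= (1_G \circ \theta_n)\,1_{F_n} \\
  &= 1_{F_n \cap \theta_n^{-1}(G)} = 1_{\{R = n\}},
\end{align*}
% which is exactly the mass \varepsilon_R 1_{\{R<\infty\}} assigns to \{n\}.
```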

6. Space-time Markov Properties

It sometimes happens that random times R have a Markov property in which the present is understood to include not just X_R but R as well. This happens in particular in the theory of excursions ([10], for example). Results of this type can be discussed in a convenient way by passing to a space-time process over X.

Let τ_t(r) = r + t for r ∈ IR^+ and t ∈ IR^+. Let Ω̂ = IR^+ × Ω and define

X̂_t(r, ω) = (τ_t(r), X_t(ω)) = (r + t, X_t(ω)),

θ̂_t(r, ω) = (r + t, θ_t(ω)).
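Although not displayed in the text, the transition semigroup of (X̂_t) takes the familiar space-time form (our sketch, with g bounded and measurable on IR^+ × E):

```latex
% For g bounded and measurable on \mathbb{R}^+ \times E,
\[ \hat P_t\,g(r,x) = \hat E^{r,x}\,g(\hat X_t) = \int_E g(r+t, y)\,P_t(x, dy): \]
% the first coordinate advances deterministically, the second moves under (P_t);
% the Markov property of \hat X is inherited from that of X.
```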


For (r, x) ∈ IR^+ × E let P̂^{r,x} = ε_r × P^x on ℬ^+ × ℱ^0. Then (X̂_t, θ̂_t) under the measures P̂^{r,x} is a convenient representation of the space-time process over X. See [18] for a discussion of some of the technical points associated with this construction. In particular, X̂ is a right process. Note that ℱ̂^0 = ℬ^+ × ℱ^0 and ℱ̂_t^0 = ℬ^+ × ℱ_t^0, where ℱ̂^0 and ℱ̂_t^0 are the usual uncompleted σ-algebras associated with the Markov process X̂. Of course, ℱ̂ and ℱ̂_t denote the usual completed σ-algebras associated with X̂. Observe that if Â ∈ ℱ̂ (resp. ℱ̂_t), then for each r ∈ IR^+, Â^r = {ω: (r, ω) ∈ Â} ∈ ℱ (resp. ℱ_t).

For ease of manipulation, we shall assume throughout this section that Ω is the space of all right continuous maps of IR^+ into E_Δ, with Δ as cemetery. Let (ω', t, ω) → ω'/t/ω denote the splicing map from Ω × IR^+ × Ω into Ω characterized by

X_u(ω'/t/ω) = X_u(ω')     if u < t,

            = X_{u−t}(ω)  if u ≥ t.

It is easy to check that for each fixed ω' ∈ Ω, (t, ω) → ω'/t/ω is in ℬ^+ × ℱ^0/ℱ^0, and it follows, since the universal completion of ℬ^+ × ℱ^0 is ℱ̂*, that (t, ω) → ω'/t/ω is in ℱ̂*/ℱ*.

A process Z will be said to be adapted to the future if for every t > 0, Z_t ∈ 𝔊_t. Recalling (2.5), this means that for every t > 0 there exists H_t ∈ ℱ* such that Z_t = H_t ∘ θ_t identically on Ω. Note that if R is a random time, then 1_{[0,R[} is adapted to the future if and only if {R > t} ∈ 𝔊_t for every t > 0. Random times with this property were studied by Kallenberg [14] under the name backward times.

(6.1) Lemma. Let Z be a right continuous ℱ*-measurable process on Ω. For some fixed ω_0 ∈ Ω define

(6.2) Ẑ_t(r, ω) = Z_{t+r}(ω_0/r/ω).

Then (r, ω) → Ẑ_t(r, ω) is ℱ̂*-measurable, t → Ẑ_t(r, ω) is right continuous, and Ẑ_t(0, ω) = Z_t(ω). If, in addition, Z is adapted to the future, then Ẑ is perfectly homogeneous on IR^+ relative to the shifts (θ̂_t)_{t≥0}.

Proof. We observed above that (r, ω) → ω_0/r/ω is in ℱ̂*/ℱ*, and this implies that the map (r, ω) → (r, ω_0/r/ω) of Ω̂ into IR^+ × Ω is in ℱ̂*/ℬ^+ × ℱ*. On the other hand, the right continuity of Z shows that for each fixed t ≥ 0, the map (r, ω) → Z_{t+r}(ω) is in ℬ^+ × ℱ*/ℬ. By composition, it follows that the map

(r, ω) → Z_{t+r}(ω_0/r/ω) = Ẑ_t(r, ω)

is ℱ̂*-measurable. The right continuity of t → Ẑ_t(r, ω) is evident, as is the identity Ẑ_t(0, ω) = Z_t(ω).

Now assume that Z is adapted to the future. Then Z_t(ω') = Z_t(ω'') provided θ_t ω' = θ_t ω''. Consequently, for s, t ≥ 0,

Ẑ_s(θ̂_t(r, ω)) = Ẑ_s(r + t, θ_t ω) = Z_{s+r+t}(ω_0/(r+t)/θ_t ω)

               = Z_{s+r+t}(ω_0/r/ω) = Ẑ_{s+t}(r, ω),

because

θ_{s+r+t}(ω_0/(r+t)/θ_t ω) = θ_{s+t} ω = θ_{s+r+t}(ω_0/r/ω).


Therefore Ẑ_s ∘ θ̂_t = Ẑ_{s+t} identically.

(6.3) Corollary. Let R be a backward time. Then for any fixed ω_0 ∈ Ω,

R̂(r, ω) = (R(ω_0/r/ω) − r)^+

is co-optional relative to the space-time process, and R̂(0, ω) = R(ω).

Proof. Let Z = 1_{[0,R[} and apply (6.1) to find that

Ẑ_t(r, ω) = 1_{[0,R(ω_0/r/ω)[}(t + r) = 1_{[0,R̂(r,ω)[}(t)

is perfectly homogeneous on IR^+ and Ẑ_t ∈ ℱ̂ for each t ≥ 0. It follows that R̂ is co-optional for space-time.
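The middle equality in the proof rests on a one-line computation (our gloss):

```latex
% For t, r \ge 0,
\[ 1_{[0,\,R(\omega_0/r/\omega)[}(t+r) = 1
   \iff t + r < R(\omega_0/r/\omega)
   \iff t < \bigl(R(\omega_0/r/\omega) - r\bigr)^+ = \hat R(r,\omega), \]
% the last equivalence holding because t \ge 0 forces R(\omega_0/r/\omega) > r
% whenever either side is satisfied.
```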

Though it is not needed for the main results of this paper, it may be worth noting that the same technique yields an interesting fact about additive functionals.

(6.4) Proposition. Let A be a right continuous finite valued increasing process defined on Ω such that for all 0 ≤ s ≤ t, A_t − A_s ∈ 𝔊_s. Then

(6.5) Â_t(r, ω) = A_{t+r}(ω_0/r/ω) − A_r(ω_0/r/ω)

defines a perfect RAF of X̂.

Proof. Since A_{t+s} − A_s ∈ 𝔊_s for each t ≥ 0, Z_s(ω) = A_{t+s}(ω) − A_s(ω) satisfies the hypotheses of (6.1). Therefore

Ẑ_0(r, ω) = Z_r(ω_0/r/ω) = A_{t+r}(ω_0/r/ω) − A_r(ω_0/r/ω) = Â_t(r, ω)

is ℱ̂*-measurable. Moreover, (6.1) states that Ẑ is perfectly homogeneous on IR^+, so that Ẑ_u(r, ω) = Ẑ_0(θ̂_u(r, ω)) identically. However,

Ẑ_u(r, ω) = Z_{u+r}(ω_0/r/ω)

= A_{t+u+r}(ω_0/r/ω) − A_{u+r}(ω_0/r/ω)

= [A_{t+u+r}(ω_0/r/ω) − A_r(ω_0/r/ω)] − [A_{u+r}(ω_0/r/ω) − A_r(ω_0/r/ω)]

= Â_{t+u}(r, ω) − Â_u(r, ω).

Hence, the homogeneity of Ẑ yields

Â_{t+u}(r, ω) − Â_u(r, ω) = Â_t(θ̂_u(r, ω)),

so Â is a perfect RAF.

It is intuitively reasonable to expect that if the increasing process A satisfies the hypotheses of (6.4) and if A is adapted to (ℱ_t) (resp. predictable over (ℱ_t)), then the process Â defined by (6.5) should be adapted to (ℱ̂_t) (resp. predictable over (ℱ̂_t)). There seem to be some technical difficulties involved at this level of generality, so we content ourselves with cases in which these problems largely disappear.


(6.6) Proposition. Let A satisfy the hypotheses of (6.4), and suppose in addition that for all s ≥ 0 and t ≥ 0 there exists a random variable H_{s,t} ∈ ℱ_t such that A_{s+t} − A_s = H_{s,t} ∘ θ_s identically on Ω. Then the process Â defined by (6.5) is an AF of X̂.

Proof. Fix r > 0. Then

Â_t(r, ω) = A_{t+r}(ω_0/r/ω) − A_r(ω_0/r/ω) = H_{r,t}(θ_r(ω_0/r/ω)) = H_{r,t}(ω).

By hypothesis, ω → Â_t(r, ω) is ℱ_t-measurable for each fixed r > 0 and t > 0. For fixed ω, let ω' = ω_0/r/ω, so that θ_r ω' = ω. Then H_{r,t}(ω) = H_{r,t}(θ_r ω') = A_{r+t}(ω') − A_r(ω'), which shows that r → H_{r,t}(ω) is right continuous. Hence (r, ω) → Â_t(r, ω) is ℬ^+ × ℱ_t ⊂ ℱ̂_t measurable, proving (6.6).

In order to pass from results about objects defined over the space-time process to objects defined over X alone, we identify Ω with the section {0} × Ω of Ω̂, recognizing then that P^x is to be identified with P̂^{0,x}. We shall abuse notation and use the same symbol (𝕊^g, for example) for both the σ-algebra defined on IR^+ × Ω̂ and its restriction to IR^+ × Ω. The following characterizations of the σ-algebras 𝔇 ∩ 𝕊^d, 𝔓 ∩ 𝕊^g and 𝔇 ∩ 𝕊^g on IR^+ × Ω then obtain, up to evanescence.

(6.7) (i) 𝔇 ∩ 𝕊^d = {f(t, X_t): f ∈ ℰ^e, the σ-field on IR^+ × E generated by the α-excessive functions for X̂};

(ii) 𝔓 ∩ 𝕊^g is generated by processes of the form f(t, X_t)− with f α-excessive for X̂;

(iii) 𝔇 ∩ 𝕊^g is generated by 𝔇 ∩ 𝕊^d and 𝔓 ∩ 𝕊^g.

Suppose now that R is a backward time and let R̂(r, ω) = (R(ω_0/r/ω) − r)^+, so that R̂ is co-optional for X̂. One defines ℱ_{[R]}, etc., on Ω to be the trace of ℱ̂_{[R̂]}, etc., on Ω ⊂ Ω̂. From (6.7) one sees that

(6.8) (i) ℱ_{[R]} = {f(R, X_R) 1_{R<∞}: f ∈ ℰ^e};

(ii) ℱ_{[R−]} = σ{f(R, X_R)− 1_{0<R<∞}: f α-excessive for X̂};

(iii) ℱ_{[R−,R]} = ℱ_{[R−]} ∨ ℱ_{[R]}. The following result may then be read off from (5.2).

(6.9) Theorem. Let R be a backward time. Then given H ∈ bℱ_{≥R} there exists H̃ ∈ bℱ_{[R−]} such that E^•{H | ℱ_{<R}} = H̃ on {0 < R < ∞}.

(6.10) Remark. If X is a Hunt process having a transition density with a dual, then X̂ is a Hunt process having an absolutely continuous resolvent with a dual in the sense of Chap. VI of [4]. In this case, if R is a backward time for X, ℱ_{[R−]} = σ(R, X_{R−}). See (5.6). When X is a Lévy process in IR^d having an absolutely continuous transition density, (6.8) reduces to a result of Kallenberg [14].

References

1. Azéma, J.: Une remarque sur les temps de retour: trois applications. Strasbourg Seminar VI, Springer Lecture Notes 258. Berlin-Heidelberg-New York: Springer 1972

2. Azéma, J.: Théorie générale des processus et retournement du temps. Ann. Sci. École Norm. Sup. 4e série, t. 6, 459-519 (1973)

3. Benveniste, A., Jacod, J.: Systèmes de Lévy des processus de Markov. Invent. Math. 21, 183-198 (1973)

4. Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. New York: Academic Press 1968

5. Chung, K.-L., Doob, J.L.: Fields, optionality and measurability. Amer. J. Math. 87, 397-424 (1965)

6. Çinlar, E., Jacod, J., Protter, P., Sharpe, M.: Semimartingales and Markov processes. [To appear in Z. Wahrscheinlichkeitstheorie verw. Gebiete]

7. Dellacherie, C., Meyer, P.A.: Probabilités et Potentiel, Vol. I. Paris: Hermann 1975

8. Getoor, R.K.: Markov Processes: Ray Processes and Right Processes. Springer Lecture Notes 440. Berlin-Heidelberg-New York: Springer 1975

9. Getoor, R.K.: On the construction of kernels. Strasbourg Seminar IX, 443-463, Springer Lecture Notes 465. Berlin-Heidelberg-New York: Springer 1975

10. Getoor, R.K., Sharpe, M.J.: Last exit decompositions and distributions. Indiana Univ. Math. J. 23, 377-404 (1973)

11. Getoor, R.K., Sharpe, M.J.: The Markov property at co-optional times. Z. Wahrscheinlichkeitstheorie verw. Gebiete 48, 201-211 (1979)

12. Jacobsen, M.: Markov chains: birth and death times with conditional independence. To appear

13. Jacobsen, M., Pitman, J.: Birth, death and conditioning of Markov chains. Ann. Probab. 5, 430-450 (1977)

14. Kallenberg, O.: Path decomposition at backward times in regenerative sets. To appear

15. Meyer, P.A., Smythe, R., Walsh, J.B.: Birth and death of Markov processes. Proc. 6th Berkeley Sympos. Math. Statist. Probab., Vol. III, 295-305. University of California Press 1972

16. Pittenger, A.O., Shih, C.T.: Coterminal families and the strong Markov property. Trans. Amer. Math. Soc. 182, 1-42 (1973)

17. Sharpe, M.J.: Killing times for Markov processes. To appear

18. Sharpe, M.J.: General Theory of Markov Processes. Forthcoming book

19. Walsh, J.B.: The perfection of multiplicative functionals. Strasbourg Seminar VI, 233-242, Springer Lecture Notes 258. Berlin-Heidelberg-New York: Springer 1972

20. Weil, M.: Conditionnement par rapport au passé strict. Strasbourg Seminar V, 362-372, Springer Lecture Notes 191. Berlin-Heidelberg-New York: Springer 1971

Received September 17, 1980