
HARMONIC FUNCTIONS AND EXIT BOUNDARY OF SUPERDIFFUSION

E. B. DYNKIN

Abstract. All positive harmonic functions in an arbitrary domain of a Euclidean space can be described in terms of the so-called exit boundary. This was established in 1941 by R. S. Martin. A probabilistic approach to the Martin theory is due to Doob and Hunt. It was extended later to harmonic functions associated with a wide class of Markov processes. The subject of this paper is harmonic functions associated with a superdiffusion X (we call them X-harmonic). The results of Evans and Perkins imply the existence and uniqueness of an integral representation of positive X-harmonic functions through extreme functions. An outstanding problem is to find all extreme functions (they are in 1-1 correspondence with the points of the exit boundary).

An interest in X-harmonic functions is motivated, in part, by the fact that each of them provides a way of conditioning a superprocess. Path properties of conditioned superprocesses (corresponding to various special X-harmonic functions) were investigated by a number of authors.

Important classes of X-harmonic functions are related to positive solutions of semilinear partial differential equations. Almost nothing is known about their decomposition into extreme elements. Progress in this direction may create new tools for investigating solutions of the equations.

The goal of this paper is to summarize all known facts about X-harmonic functions, to present the results of various authors in a more general setting and in a unified form, and to outline a program of further work.

1. Introduction

1.1. Suppose that L is a second order uniformly elliptic operator in a domain E of R^d. An L-diffusion is a continuous strong Markov process ξ = (ξ_t, Π_x) in E with the generator L. A function h in a domain E is called ξ-harmonic (or L-harmonic) if, for every domain D ⋐ E,¹

$$\Pi_x h(\xi_{\tau_D}) = h(x) \quad\text{for all } x \in D.$$

Here τ_D is the first exit time of ξ from D. This condition is satisfied if and only if Lh = 0 in E.

Let ψ be a function from E × R_+ to R_+ where R_+ = [0,∞). An (L,ψ)-superdiffusion is a model of an evolution of a random cloud. It is described by a family of random measures (X_D, P_µ) where D ⊂ E and µ is a finite measure on E. If µ is concentrated on D, then X_D is concentrated on ∂D. We call X_D the exit measure from D. Heuristically, it describes the mass distribution on an absorbing barrier placed on ∂D.

We put µ ∈ M_c(D) if µ is a finite measure concentrated on a compact subset of D. We say that a function H : M_c(E) → R_+ is X-harmonic and we write

Partially supported by National Science Foundation Grant DMS-0204237.
¹ We write D ⋐ E if D is a bounded domain such that the closure D̄ of D is contained in E.



H ∈ H(X) if, for every D ⋐ E and every µ ∈ M_c(D),

$$P_\mu H(X_D) = H(\mu). \tag{1.1}$$

Fix c ∈ E and denote by H(X, c) the class of all positive X-harmonic functions H such that H(δ_c) = 1 [δ_c is the unit mass concentrated at c]. An element H of H(X, c) is called extreme if the conditions H̃ ≤ H, H̃ ∈ H(X, c) imply that H̃ = H. Let H_e(X, c) stand for the set of all extreme elements. The formula

$$H(\mu) = \int_{H_e(X,c)} \tilde H(\mu)\,\nu(d\tilde H)$$

establishes a 1-1 correspondence between H ∈ H(X, c) and probability measures ν on H_e(X, c). This result is due (in a slightly different setting) to Evans and Perkins [EP91]. [A similar result for ξ-harmonic functions has been known for a long time.]
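For orientation, here is the classical picture behind Martin's theorem (a standard fact, recalled by us, not taken from this paper): in the unit ball B ⊂ R^d, every positive harmonic function h with h(0) = 1 has a unique representation

```latex
h(x)=\int_{\partial B}\frac{1-|x|^2}{|x-\gamma|^{d}}\,\nu(d\gamma),
\qquad \nu \ \text{a probability measure on}\ \partial B,
```

and the normalized Poisson kernels k_γ(x) = (1−|x|²)/|x−γ|^d, which satisfy k_γ(0) = 1, are exactly the extreme (minimal) harmonic functions; in this case the exit boundary can be identified with ∂B.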

The exit boundary Γ of X can be defined as the set H_e(X, c) of all extreme elements of H(X, c). A version of this definition independent of the choice of c will be given in section 6.2.

1.2. The class of superdiffusions under our consideration is specified in section 2. In section 3 we introduce (following [Dyn78]) a general concept of convex measurable space. We call such a space a simplex if every element has a unique decomposition into extreme elements. Then we consider a Markov chain X_n with discrete time parameter and with a transition function p(r, x; t, B). The corresponding exit laws form a convex measurable space, and we state a condition (see 3.3.A) under which the set of normalized exit laws is a simplex. An investigation of X-harmonic functions can be reduced to a study of exit laws of a discrete Markov chain X_{D_1},..., X_{D_n},... where D_n is a sequence of domains exhausting E.²

The next task is to prove that the condition 3.3.A holds for the chain X_{D_n}. First, we prepare the necessary tools: a diagram description of the moments of exit measures in section 4 and the Poisson representation of infinitely divisible random measures in section 5. By using these tools we prove in section 6 the following absolute continuity theorem (which is also of independent interest): the class of sets C such that P_µ{X_D ∈ C} = 0 is the same for all µ ∈ M_c(D).³ The condition 3.3.A for X_{D_n} follows immediately from this theorem.

In section 7 we investigate classes of X-harmonic functions related to positive solutions of the equation Lu = ψ(u) in E. Here we rely on the work of Salisbury and Verzani [SaV99], [SaV00]. (Since they used Le Gall's Brownian snake, they were restricted to the case ψ(u) = u²/2.)

The simplest example of an X-harmonic function is the total mass H(µ) = 〈1, µ〉 and this is the only function proved to be extreme (in the case ψ(u) = u²/2). The proof, due to Evans [Eva93], is based on investigating the tail σ-algebra for the H-transform. We present his proof (with some simplifications) in section 8.

2. Superdiffusion

2.1. Let E be a subset of R^d. We denote by M(E) the set of all finite measures on E and by B_+(E) the class of all positive Borel functions on E.

² This means D_n ↑ E and D_n ⋐ D_{n+1} for all n.
³ Mselati [Ms 02] proved independently a similar result by a different method in the case L = ∆, α = 2. (He used it to establish that a positive solution of the equation ∆u = u² in a bounded smooth domain is uniquely determined by its fine boundary trace.)


The Green operator G_D and the Poisson operator K_D of a diffusion ξ in an open set D are defined by the formulae

$$G_D f(x) = \Pi_x \int_0^{\tau_D} f(\xi_s)\,ds, \qquad
K_D f(x) = \Pi_x 1_{\tau_D<\infty}\, f(\xi_{\tau_D}). \tag{2.1}$$

Let ψ be a function from E × R_+ to R_+. Suppose that to every open set D ⊂ E and every µ ∈ M(E) there corresponds a random measure (X_D, P_µ) on R^d such that, for every f ∈ B_+,

$$P_\mu e^{-\langle f, X_D\rangle} = e^{-\langle V_D(f),\,\mu\rangle} \tag{2.2}$$

where u = V_D(f) satisfies the equation⁴

$$u + G_D\psi(u) = K_D f. \tag{2.3}$$

We call the family X = (X_D, P_µ) an (L,ψ)-superdiffusion and we call V_D the transition operators of X.

The existence of an (L,ψ)-superdiffusion is proved for

$$\psi(x;u) = b(x)u^2 + \int_0^\infty \big(e^{-tu} - 1 + tu\big)\,N(x;dt) \tag{2.4}$$

under broad conditions on a positive Borel function b(x) and a kernel N from E to R_+. It is sufficient to assume that:

2.1.A. $b(x)$, $\int_1^\infty t\,N(x;dt)$ and $\int_0^1 t^2\,N(x;dt)$ are bounded. (2.5)

(See, e.g., [Dyn02], Theorem 4.2.1.) We also use the following condition which does not follow from 2.1.A if n > 1:

2.1.B_n. $\int_1^\infty t^n\,N(x,dt)$ is bounded. (2.6)

We say that N satisfies the condition 2.1.B_∞ if 2.1.B_n holds for all n. Under the conditions 2.1.A–2.1.B_n, the derivatives ψ_r(x,u) = ∂^rψ(x,u)/∂u^r exist for r ≤ n. Moreover,

$$\psi_1(x,u) = 2bu + \int_0^\infty t\big(1-e^{-tu}\big)\,N(x,dt),$$
$$\psi_2(x,u) = 2b + \int_0^\infty t^2 e^{-tu}\,N(x,dt),$$
$$(-1)^r\psi_r(x,u) = \int_0^\infty t^r e^{-tu}\,N(x,dt) \quad\text{for } 2 < r \le n \tag{2.7}$$

are positive and continuous in u. For every 1 < r ≤ n, ψ_r is bounded, and ψ_1 is bounded on each set E × [0, c].
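Two standard instances of the branching mechanism (2.4) (well-known facts recalled by us, not taken from this paper): ψ(u) = u² corresponds to b ≡ 1 and N = 0, while for 1 < α < 2 the stable mechanism ψ(u) = u^α is obtained with b = 0 and

```latex
N(x;dt)=\frac{\alpha(\alpha-1)}{\Gamma(2-\alpha)}\,t^{-1-\alpha}\,dt,
\qquad\text{since}\qquad
\int_0^\infty \big(e^{-tu}-1+tu\big)\,t^{-1-\alpha}\,dt
=\frac{\Gamma(2-\alpha)}{\alpha(\alpha-1)}\,u^{\alpha}.
```

Both examples satisfy 2.1.A, but in the stable case 2.1.B_n fails for every n ≥ 2 because ∫₁^∞ t^n t^{−1−α} dt = ∞.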

⁴ ψ(u) is shorthand for ψ(x, u(x)).


2.2. Denote by M the σ-algebra in M(E) generated by the functions F(µ) = 〈f, µ〉 with f ∈ B_+, and let F stand for the σ-algebra in Ω generated by X_D. It follows from (2.2) that H(µ) = P_µY is M-measurable for every F-measurable Y ≥ 0. We use the following Markov property of X:⁵ Suppose that Y ≥ 0 is measurable with respect to the σ-algebra F_{⊂D} generated by X_{D′}, D′ ⊂ D, and Z ≥ 0 is measurable with respect to the σ-algebra F_{⊃D} generated by X_{D″}, D″ ⊃ D. Then

$$P_\mu(YZ) = P_\mu\big(Y\,P_{X_D}Z\big). \tag{2.8}$$

2.3. Recall that a function H : M_c(E) → R_+ is X-harmonic if, for every D ⋐ E and every µ ∈ M_c(D),

$$P_\mu H(X_D) = H(\mu). \tag{2.9}$$

To every positive L-harmonic function h there corresponds an X-harmonic function

$$H(\mu) = \langle h, \mu\rangle. \tag{2.10}$$

[The total mass 〈1, µ〉 belongs to this class.] Another important class of X-harmonic functions is defined in terms of boundary functionals of X, that is, functions Y(ω) which are measurable with respect to F_{⊃D} for every D ⋐ E. [For instance, Φ(X_E) is a boundary functional for every M-measurable function Φ.] It follows from (2.8) that if Y ≥ 0 is a boundary functional and if H(µ) = P_µY is finite, then H is an X-harmonic function.
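The claim that (2.10) is X-harmonic can be checked in one line from the first-moment formula P_µ〈v, X_D〉 = 〈K_Dv, µ〉 (the first special case in (4.6)) together with the mean value property K_Dh = h of L-harmonic functions:

```latex
P_\mu H(X_D)=P_\mu\langle h, X_D\rangle=\langle K_D h,\,\mu\rangle
=\langle h,\mu\rangle=H(\mu).
```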

3. Exit boundary

3.1. Convex measurable spaces. Let (Z, B_Z) be a measurable space. We say that a convex structure is introduced into Z if a point z_µ (called the barycentre of µ) is associated with every probability measure µ on B_Z. A space (Z, B_Z) with such a convex structure is called a convex measurable space.

We say that z is an extreme point of Z, and we write z ∈ Z_e, if z is not the barycentre of a measure µ unless µ is concentrated at z. A convex measurable space Z is called a simplex if Z_e is measurable and each z ∈ Z is the barycentre of one and only one probability measure concentrated on Z_e.

Let (Z, B_Z) and (Z′, B_{Z′}) be convex measurable spaces and let j be a map from Z to Z′. We say that j preserves the convex structure if j is measurable and if it transforms the barycentre of a measure µ into the barycentre of the measure

$$\mu'(B) = \mu\big(j^{-1}(B)\big), \qquad B \in B_{Z'}.$$

We say that j is an isomorphism if it is invertible and j and j^{-1} preserve the convex structure.

Suppose that Z is a collection of functions from a set S to [0,∞]. Denote by B_Z the minimal σ-algebra in Z which contains the sets {z : z(s) < a} for all s ∈ S, a ∈ (0,∞). We define the barycentre of µ by the formula

$$z_\mu(s) = \int_Z z(s)\,\mu(dz).$$

If z_µ ∈ Z for every probability measure µ, then Z is a convex measurable space.
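A finite-dimensional instance (our own illustration, not from the paper): take S = {1,…,n} and let Z be the set of probability vectors z : S → [0,1] with Σ_s z(s) = 1. The barycentre of µ is the mean vector, the extreme elements are the unit vectors e_s, and each z decomposes uniquely as

```latex
z=\sum_{s\in S} z(s)\,e_s ,
```

so Z is a simplex in the sense above, with Z_e = {e_1,…,e_n}.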

⁵ See [Dyn93], Theorem I.1.3, or [Dyn02], 3.1.3.D.


3.2. Markov chains. Suppose (E_n, B_n), n = 0, 1, 2,... is a sequence of measurable spaces. A Markov transition function is a family of kernels p(r, x; n, B), 0 ≤ r < n, from (E_r, B_r) to (E_n, B_n) such that

$$p(r,x;n,E_n) = 1 \quad\text{for all } r < n,\ x \in E_r$$

and

$$\int_{E_k} p(r,x;k,dy)\,p(k,y;n,B) = p(r,x;n,B)$$

for all r < k < n and all x ∈ E_r, B ∈ B_n.

A sequence ω = {x_r, x_{r+1},...} where x_n ∈ E_n is called a path with the birth time r. We put α(ω) = r and X_n(ω) = x_n for n ≥ r. For n < r we put X_n(ω) = [ where [ is a "pre-birth state" (not in any E_n). Consider the space Ω of all paths and denote by F_{≤r} [F_{≥r}] the σ-algebra in Ω generated by {X_n(ω) ∈ B_n} with B_n ∈ B_n and n ≤ r [n ≥ r]. To every x ∈ E_r there corresponds a probability measure P_{r,x} on F_{≥r} such that

$$P_{r,x}\{\alpha = r,\ X_r = x,\ X_{r+1}\in B_{r+1},\dots,X_n\in B_n\}
= \int_{B_{r+1}}\cdots\int_{B_n} p(r,x;r+1,dy_{r+1})\dots p(n-1,y_{n-1};n,dy_n)$$

for all n ≥ r, B_{r+1} ∈ B_{r+1},..., B_n ∈ B_n. This implies:

$$P_{r,x}\{Z \mid F_{\le n}\} = P_{n,X_n}Z \quad P_{r,x}\text{-a.s.} \tag{3.1}$$

for r ≤ n, Z ∈ F_{≥n}. The family (X_n, P_{r,x}) is called a Markov chain with the transition function p.

3.3. Exit laws. A sequence of positive measurable functions F_n(x), x ∈ E_n, is called a p-exit law if

$$\int_{E_n} p(m,x;n,dy)\,F_n(y) = F_m(x) \quad\text{for all } m < n,\ x \in E_m.$$

We denote by E(p) the set of all p-exit laws and we put F ∈ E(p, c) if F ∈ E(p) and F_0(c) = 1.

Suppose that p satisfies the condition:

3.3.A. If p(0, c; n, B) = 0, then p(m, x; n, B) = 0 for all x and all m < n.

It is proved in [Dyn78], Section 10.2, that, under the condition 3.3.A, the set E(p, c) is a simplex.
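A minimal finite-state sketch of these definitions (our own illustration; the chain, the two-point state spaces and the numbers are hypothetical, not from the paper): build a p-exit law backwards from a terminal function and check the defining identity for every pair m < n.

```python
import numpy as np

# One-step kernels p(n, .; n+1, .) on a two-point state space, as
# row-stochastic matrices (hypothetical numbers).
P = [np.array([[0.7, 0.3], [0.4, 0.6]]) for _ in range(5)]

def kernel(m, n):
    """Compose one-step kernels into p(m, x; n, dy) (Chapman-Kolmogorov)."""
    K = np.eye(2)
    for k in range(m, n):
        K = K @ P[k]
    return K

# Build a p-exit law backwards from a terminal function: F_m = p(m, .; n) F_n.
F = {5: np.array([1.0, 2.0])}
for m in range(4, -1, -1):
    F[m] = P[m] @ F[m + 1]

# The defining identity then holds for every pair m < n, not just adjacent times.
for m in range(5):
    for n in range(m + 1, 6):
        assert np.allclose(kernel(m, n) @ F[n], F[m])
```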

3.4. F-transform. Suppose that F ∈ E(p, c). The formula

$$p^F(m,x;n,B) = \frac{1}{F_m(x)}\int_B p(m,x;n,dy)\,F_n(y) \tag{3.2}$$

defines a transition function on Ẽ_n = E_n ∩ {F_n > 0}. The corresponding Markov chain (X_n, P^F_{r,x}) is called the F-transform of (X_n, P_{r,x}).
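In matrix form, (3.2) is the familiar Doob h-transform. A tiny numerical sketch (our own illustration; the kernel and the exit-law values are hypothetical):

```python
import numpy as np

# p^F(m, x; n, dy) = F_m(x)^{-1} p(m, x; n, dy) F_n(y), one step, two states.
p = np.array([[0.5, 0.5], [0.2, 0.8]])   # kernel p(m, .; m+1, .)
F_n = np.array([1.0, 3.0])               # exit-law value at time m+1
F_m = p @ F_n                            # the exit-law identity fixes F_m

pF = (p * F_n[None, :]) / F_m[:, None]   # transformed kernel on {F > 0}

# p^F is again a transition function: its rows sum to 1 exactly because
# F satisfies the exit-law identity F_m = p F_n.
assert np.allclose(pF.sum(axis=1), 1.0)
```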


3.5. Tail σ-algebra. The intersection F_T of the F_{≥r} over all r is called the tail σ-algebra. If Z ∈ F_T, then (3.1) holds for the measures P^F_{r,x} and therefore

$$Z = \lim_{n\to\infty} P^F_{n,X_n}Z \quad P^F_{r,x}\text{-a.s.} \tag{3.3}$$

An exit law F ∈ E(p, c) is extreme if and only if P^F_{r,x}(C) = 0 or 1 for all C ∈ F_T.⁶

3.6. Markov chains associated with diffusions and superdiffusions. To construct such chains we fix a sequence D_n exhausting E and we put

$$E_n = E \setminus D_n,\quad M_n = M(E_n);\qquad \xi_n = \xi_{D_n},\quad X_n = X_{D_n}.$$

Note that ξ_n is a Markov chain with the transition function

$$p(r,x;n,B) = \Pi_x\{\xi_n \in B\}, \qquad x \in E_r,\ B \subset E_n. \tag{3.4}$$

We call it the chain associated with the diffusion (ξ_t, Π_x). The Markov property (2.8) of a superdiffusion implies that X_n is a Markov chain with the transition function

$$P(r,\mu;n,B) = P_\mu\{X_n \in B\}, \qquad \mu \in M_r,\ B \subset M_n. \tag{3.5}$$

We call it the chain associated with the superdiffusion (X_D, P_µ). The tail σ-algebra for the chain X_n coincides with the intersection of F_{⊃D} over all D ⋐ E and therefore it does not depend on the choice of D_n. (The tail σ-algebra for ξ_n also does not depend on this choice.)

Consider a Markov chain (X_n, P_{r,µ}) associated with a superdiffusion (X_D, P_µ). If H is X-harmonic and if F_n is the restriction of H to M_n, then F is a P-exit law. If H ∈ H(X, c) and if c ∈ D_1, then F ∈ E(P, δ_c). This way we define a mapping j : H(X, c) → E(P, δ_c). On the other hand, if m < n, then, by the Markov property (3.1), P_µF_n(X_n) = P_µ P_{X_m}F_n(X_n) = P_µF_m(X_m) and therefore, if F ∈ E(P, δ_c), then P_µF_n(X_n) does not depend on n. We define H = i(F) by the formula

$$H(\mu) = P_\mu F_n(X_n).$$

Every D ⋐ E is contained in D_n for sufficiently large n, and, by the Markov property,

$$P_\mu H(X_D) = P_\mu P_{X_D}F_n(X_n) = P_\mu F_n(X_n) = H(\mu).$$

Hence H ∈ H(X, c) and we have a map i : E(P, δ_c) → H(X, c). Clearly, i is the inverse of j and both mappings preserve the convex structure. It follows from the absolute continuity theorem (to be proved in section 6) that P satisfies the condition 3.3.A and therefore the class E(P, δ_c) is a simplex. Since H(X, c) is isomorphic to E(P, δ_c), we get:

Theorem 3.1. The set H(X, c) is a simplex.

Remark 3.1. Suppose that F = j(H). Since H is extreme if and only if F is extreme, we conclude that H is an extreme element of H(X, c) if and only if P^F_{r,µ}(C) = 0 or 1 for all elements C of the tail σ-algebra.

⁶ See, for instance, [Dyn02], section 7.5.


By the definition of a simplex, every H ∈ H(X, c) is the barycentre of one and only one probability measure m on the set Γ_c = H_e(X, c) of all extreme elements of H(X, c). This can be expressed by the formula

$$H(\mu) = \int_{\Gamma_c} K_\gamma(\mu)\,m(d\gamma) \tag{3.6}$$

where K_γ is the extreme element of H(X, c) corresponding to γ ∈ Γ_c. We call the set Γ of all extreme points of H(X, c) the exit boundary of X. (A definition independent of c will be given in section 6.2.)

4. Moment formula

The results of this section will be used both for proving the absolute continuity theorem and for studying X-harmonic functions related to solutions of semilinear equations. First, we prove a recursive formula for the moments. This is a modification of a formula established by Salisbury and Verzani [SaV99] for the case ψ(u) = u²/2 (in terms of a Brownian snake). We deduce from the recursive formula a diagram description of the moments. This is a generalization of the diagram method developed in [Dyn88] and [Dyn91b].

4.1. Recursive moment formula. Denote by |C| the cardinality of a finite set C and denote by P_r(C) the set of all partitions of C into r disjoint nonempty subsets C_1,...,C_r. Let P(C) stand for the union of P_1(C),...,P_{|C|}(C).
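For concreteness, P_r(C) and P(C) can be enumerated mechanically (our own sketch, not part of the paper; |P(C)| is the Bell number and |P_r(C)| a Stirling number of the second kind):

```python
from itertools import combinations

def partitions(C):
    """Enumerate all partitions P(C) of a finite set C into disjoint nonempty blocks."""
    C = list(C)
    if not C:
        yield []
        return
    first, rest = C[0], C[1:]
    # Choose the block containing `first`, then partition the remainder;
    # this generates each partition exactly once.
    for k in range(len(rest) + 1):
        for block_rest in combinations(rest, k):
            block = [first, *block_rest]
            remainder = [x for x in rest if x not in block_rest]
            for tail in partitions(remainder):
                yield [block, *tail]

def P_r(C, r):
    """P_r(C): partitions of C into exactly r blocks."""
    return [p for p in partitions(C) if len(p) == r]

C = {1, 2, 3, 4}
all_parts = list(partitions(C))
assert len(all_parts) == 15                                # Bell number B_4
assert [len(P_r(C, r)) for r in (1, 2, 3, 4)] == [1, 7, 6, 1]  # Stirling numbers
```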

Let u be a positive locally bounded function on E and let D ⋐ E. We consider an L-diffusion ξ^u in E with the killing rate û = ψ′(V_D(u)). The Green and Poisson operators in D of the process ξ^u are given by the formulae

$$G^u_D v(x) = \Pi_x \int_0^{\tau_D} \exp\Big[-\int_0^t \hat u(\xi_s)\,ds\Big]\,v(\xi_t)\,dt \tag{4.1}$$

and

$$K^u_D v(x) = \Pi_x \exp\Big[-\int_0^{\tau_D} \hat u(\xi_s)\,ds\Big]\,v(\xi_{\tau_D}). \tag{4.2}$$

Theorem 4.1. Suppose that X is an (L,ψ)-superdiffusion in E with ψ of the form (2.4) subject to conditions 2.1.A–2.1.B_n. Let D ⋐ E. If u, v_1,...,v_n ∈ B_+(D) and if u is bounded, then, for every µ ∈ M_c(D),

$$P_\mu e^{-\langle u, X_D\rangle}\prod_{i\in C}\langle v_i, X_D\rangle
= e^{-\langle V_D(u),\,\mu\rangle}\sum_{\mathcal P(C)} \langle z_{C_1},\mu\rangle\cdots\langle z_{C_r},\mu\rangle \tag{4.3}$$

where the z_C are defined recursively by the formula

$$z_C = K^u_D v_i \quad\text{for } C = \{i\},\qquad
z_C = G^u_D\Big[\sum_{r\ge 2} q_r \sum_{\mathcal P_r(C)} z_{C_1}\cdots z_{C_r}\Big] \quad\text{for } |C| > 1 \tag{4.4}$$

with

$$q_1 = 1, \qquad q_r(x) = (-1)^r \psi_r\big(x, V_D(u)(x)\big) \quad\text{for } r \ge 2. \tag{4.5}$$

Formula (2.1) can be considered as a particular case of (4.3). Other special cases,

$$P_\mu\langle v, X_D\rangle = \langle K_D v,\mu\rangle,$$
$$P_\mu\langle v_1,X_D\rangle\langle v_2,X_D\rangle
= \big\langle G_D[q_2(K_Dv_1)(K_Dv_2)],\,\mu\big\rangle + \langle K_Dv_1,\mu\rangle\langle K_Dv_2,\mu\rangle \tag{4.6}$$


and

$$P_\mu e^{-\langle u,X_D\rangle}\langle v,X_D\rangle = e^{-\langle V_D(u),\,\mu\rangle}\langle K^u_D v,\mu\rangle \tag{4.7}$$

were widely used by many authors. The expression (4.3) with u = 0 was obtained first in [Dyn88].

Proof. 1. If u is bounded, then V_D(u) is bounded because V_D(u) ≤ K_Du by (2.3), and û is bounded by 2.1.A–2.1.B_n and (2.7).

Let C = {1,...,n}. It is sufficient to prove the theorem for

$$u \ge \alpha > 0, \qquad v_i \le \beta < \infty \quad\text{on } D. \tag{4.8}$$

Indeed, suppose that (4.3)–(4.4) hold for u_ℓ = u ∨ (1/ℓ) and v^ℓ_i = v_i ∧ ℓ. Note that the u_ℓ are uniformly bounded and converge to u. By passing to the limit as ℓ → ∞ and by using the monotone convergence and the dominated convergence theorems, we establish that these relations hold for u and v_i. [We use that Π_x{τ_D < ∞} = 1 for a bounded domain D and that, by 2.1.B_n, ψ_r(V_D(u_ℓ)) → ψ_r(V_D(u)) for r ≤ n.]

2. We consider functions f_λ(x) = f(λ, x) where λ = (λ_1,...,λ_n) with 0 ≤ λ_i ≤ 1 and x ∈ D. Put

$$\lambda^\ell = \lambda_1^{\ell_1}\cdots\lambda_n^{\ell_n}, \qquad |\lambda| = \lambda_1+\dots+\lambda_n, \qquad |\ell| = \ell_1+\dots+\ell_n$$

for ℓ = (ℓ_1,...,ℓ_n) and let

$$D_{\lambda_i} = \frac{\partial}{\partial\lambda_i}, \qquad D^\ell_\lambda = \prod_i D^{\ell_i}_{\lambda_i}.$$

Denote by A_n the set of functions f(λ, x) on [0,1]^n × D such that D^\ell_λ f is bounded for |ℓ| ≤ n.

Consider the functions

$$y_\lambda(x) = u(x) + \lambda_1 v_1(x) + \dots + \lambda_n v_n(x), \qquad \varphi(\lambda, x) = P_x e^{-\langle y_\lambda, X_D\rangle}.$$

It follows from (2.2) that P_µe^{-〈y_λ,X_D〉} = 〈φ_λ, µ〉 for every µ ∈ M_c(D). Under the condition (4.8),

$$e^{-\langle y_\lambda, X_D\rangle}\prod_1^n \langle v_i,X_D\rangle^{\ell_i}
\le \beta^{|\ell|}\, e^{-\alpha\langle 1, X_D\rangle}\langle 1,X_D\rangle^{|\ell|} \tag{4.9}$$

is bounded and therefore

$$(-1)^{|\ell|}\big\langle D^\ell_\lambda \varphi_\lambda,\,\mu\big\rangle
= P_\mu e^{-\langle y_\lambda,X_D\rangle}\prod_1^n\langle v_i,X_D\rangle^{\ell_i}. \tag{4.10}$$

It follows from (4.9) that φ ∈ A_n for every n. By (2.2),

$$\langle\varphi_\lambda,\mu\rangle = e^{-\langle z_\lambda,\mu\rangle} \tag{4.11}$$

where

$$z_\lambda = V_D(y_\lambda). \tag{4.12}$$

Since z_λ is bounded on D, inf φ_λ on D is strictly positive. Therefore z = −log φ belongs to the class A_n.

3. We write f ∼ 0 if

$$f(\lambda, x) = \sum_1^n \lambda_i^2\, Q_i(\lambda, x) + |\lambda|^n \varepsilon(\lambda, x)$$

where the Q_i are polynomials in λ with coefficients that are bounded Borel functions in x and ε(λ, x) is a bounded function tending to 0 as λ → 0. We write f_1 ∼ f_2 if f_1 − f_2 ∼ 0.

For every subset B of C = {1,...,n}, denote by λ_B the product of the λ_i, i ∈ B, and put f_B = D^B_λ f evaluated at λ = 0, where D^B_λ is the product of the D_{λ_i} over i ∈ B. It follows from Taylor's formula that, if f ∈ A_n, then

$$f(\lambda, x) \sim f(0, x) + \sum_B \lambda_B f_B(x)$$

where B runs over all nonempty subsets of C.

4. Since φ and z belong to A_n,

$$\varphi \sim \varphi_0 + \sum_B (-1)^{|B|}\lambda_B\varphi_B \tag{4.13}$$

and

$$z \sim z_0 + \sum_B (-1)^{|B|-1}\lambda_B z_B \tag{4.14}$$

with bounded φ_B and z_B. Moreover, (−1)^{|B|}φ_B is equal to D^B_λ φ_λ at λ = 0. Note that D^B_λ〈φ_λ, µ〉 = 〈D^B_λ φ_λ, µ〉 and, by (4.10),

$$(-1)^{|B|}\langle\varphi_B,\mu\rangle = P_\mu e^{-\langle u,X_D\rangle}\prod_{i\in B}\langle v_i,X_D\rangle. \tag{4.15}$$

For every constant c, e^{cλ_B} ∼ 1 + cλ_B. Therefore (4.11) and (4.14) imply

$$\langle\varphi_\lambda,\mu\rangle \sim e^{-\langle z_0,\mu\rangle}\prod_B \big[1 + (-1)^{|B|}\lambda_B\langle z_B,\mu\rangle\big].$$

The coefficient at λ_C in this product is equal to

$$(-1)^{|C|}\,e^{-\langle z_0,\mu\rangle}\sum_{\mathcal P(C)}\langle z_{C_1},\mu\rangle\cdots\langle z_{C_r},\mu\rangle.$$

On the other hand, by (4.13), this coefficient is equal to (−1)^{|C|}〈φ_C, µ〉. Since z_0 = V_D(u), formula (4.3) follows from (4.15).

5. By Taylor’s formula and by (2.4) and (2.7),

ψ(zλ) = ψ(z0) + ψ1(z0)(zλ − z0) +n∑

r=2

1r!ψr(z0)(zλ − z0)r +Rλ(zλ − z0)n(4.16)

where

Rλ(x) =1n!

∫ ∞

0

tn(e−tz − e−tz0)N(x, dt)

with z on the line segment connecting z0 and zλ. By (4.14),

(zλ − z0)r ∼∑

B1,...,Br

r∏

1

(−1)|Bi|−1λBizBi = r!∑

B

(−1)|B|−rλB∑

Pr(B)

zB1 . . . zBr .

(4.17)


By (4.16) and (4.17),

$$\psi(z_\lambda) \sim \psi(z_0) + \hat u\sum_1^n\lambda_i z_i
+ \hat u\sum_{|B|\ge 2}(-1)^{|B|-1}\lambda_B z_B
+ \sum_{r\ge2} q_r\sum_B \lambda_B\sum_{\mathcal P_r(B)}(-1)^{|B|} z_{B_1}\cdots z_{B_r}. \tag{4.18}$$

By (2.3) and (4.12), z_λ + G_Dψ(z_λ) = K_Dy_λ. By using (4.16) and (4.18) and by comparing the coefficients at λ_B, we get

$$z_i + G_D(\hat u z_i) = K_D v_i \quad\text{for } i = 1,\dots,n \tag{4.19}$$

and

$$z_B + G_D(\hat u z_B) = \sum_{r\ge2} G_D\Big[q_r\sum_{\mathcal P_r(B)} z_{B_1}\cdots z_{B_r}\Big] \quad\text{for } |B|\ge 2. \tag{4.20}$$

It follows from Theorem 13.16 in [Dyn65]⁷ that

$$z(x) = K^u_D v(x)$$

is the unique solution of the integral equation

$$z + G_D(\hat u z) = K_D v$$

and

$$f = G^u_D \rho$$

is the unique solution of the equation

$$f + G_D(\hat u f) = G_D\rho.$$

Therefore the equations (4.19) and (4.20) imply (4.4). □

4.2. Diagrams. We deduce from Theorem 4.1 a description of moments in terms of a certain class of labeled directed graphs.

We consider a directed graph F with the set of arrows A and the set of sites S. We write a : s → s′ if the arrow a begins at s and ends at s′. For every s ∈ S, we denote by a_+(s) the number of arrows ending at s and by a_−(s) the number of arrows beginning at s. We assume that S = S_− ∪ S_0 ∪ S_+ where

$$S_- = \{s : a_+(s) = 0,\ a_-(s) = 1\},\quad
S_0 = \{s : a_+(s) = 1,\ a_-(s) > 1\},\quad
S_+ = \{s : a_+(s) = 1,\ a_-(s) = 0\}. \tag{4.21}$$

We call elements of S_− entrances and elements of S_+ exits. We say that a′ dominates a (and a is dominated by a′) if there exist arrows a_0, a_1,..., a_k such that a = a_0, a′ = a_k and the end of a_{i−1} coincides with the beginning of a_i for i = 1,...,k. We say that a is an entrance arrow if it does not dominate any other arrow and that a is an exit arrow if it is not dominated by any other arrow.

Suppose that a linear operator L(a) in B_+(E) is associated with every arrow a of the graph F, that the exit set S_+ is ordered, and that a function q_r ∈ B_+(E) is given for r = 1, 2,.... A combination Λ = {F, an order in S_+, L, q_r} is called a diagram Λ with the frame F, the labels L and the multiplicities q.

7Cf. Theorem 3.1, Chapter 6 in [Dyn02].


Suppose that a replica B^s_+(E) of the set B_+(E) is attached to every site s ∈ S. We interpret L(a), for a : s → s′, as a map from B^{s′}_+(E) to B^s_+(E) (an action opposite to the direction of a!). We mark the sites s by elements ℓ(s) ∈ B^s_+(E) in such a way that, if a_−(s) = r and if a_i : s → s_i, i = 1,...,r, are the arrows starting from s, then

$$\ell(s) = q_r \prod_1^r L(a_i)\,\ell(s_i). \tag{4.22}$$

If (s_1,...,s_n) is the ordered set S_+ and if ℓ(s_i) = v_i, then we call v = (v_1,...,v_n) the input of Λ. Clearly, it determines the function ℓ; in particular, it determines the restriction of ℓ to the entrance set S_−, which we call the output of Λ. We write w = L_Λ(v) if v is the input and w is the output of Λ.

The connected components F_1,...,F_r of F are rooted trees (the root of F_i is its single entrance arrow). Consider the corresponding partition of the exit set C of F into the exit sets C_i of the F_i. We say that two orders in C are equivalent if they induce the same partition of C. We do not distinguish diagrams obtained from each other by replacing the order of the exit set by any equivalent order. (The outputs of indistinguishable diagrams coincide up to the order in S_−, which we do not need to specify.)

If the number of arrows in F is bigger than 1, then, by removing from F the single entrance arrow a : s → s′, we get r ≥ 2 connected frames F_i with the exit sets C_1,...,C_r which form a partition of the exit set C of F. We write F : a → {F_1,...,F_r}.

For every function f from a finite set C to B_+(E) and for every measure µ ∈ M(E), we put

$$\langle f,\mu\rangle = \prod_{s\in C}\langle f(s),\mu\rangle.$$

For every class K of diagrams we denote by K(F, v; µ) the sum of 〈L_Λ(v), µ〉 over all Λ ∈ K with the frame F and input v.

4.3. Diagram description of moments. Let D ⋐ E. Denote by K(u, D) the class of diagrams with multiplicities q_r given by (4.5), with the exit arrows labeled by K^u_D and the rest of the arrows labeled by G^u_D (G^u_D and K^u_D are defined by (4.1) and (4.2)). We use the name (u,D)-diagrams for diagrams of the class K(u, D). Note that Λ is a (u,D)-diagram if and only if all its connected components are (u,D)-diagrams. If F : a → {F_1,...,F_r}, then Λ is a (u,D)-diagram with frame F if and only if the restrictions of Λ to all the F_i are (u,D)-diagrams. By (4.22),

$$K(F,v;\mu) = G^u_D\Big[q_r\sum_{\mathcal P_r(C)} K(F_1,v_{C_1};\mu)\cdots K(F_r,v_{C_r};\mu)\Big]. \tag{4.23}$$

Theorem 4.2. Under the conditions of Theorem 4.1, for every µ ∈ M(D),

$$P_\mu e^{-\langle u, X_D\rangle}\langle v_1,X_D\rangle\cdots\langle v_n,X_D\rangle
= e^{-\langle V_D(u),\,\mu\rangle}\sum_\Lambda \langle L_\Lambda(v),\mu\rangle \tag{4.24}$$

where Λ runs over all distinguishable (u,D)-diagrams with the ordered exit set (s_1,...,s_n).


Proof. We get (4.24) from (4.3) if we demonstrate that

$$\sum_{\mathcal P(C)}\langle z_{C_1},\mu\rangle\cdots\langle z_{C_r},\mu\rangle = \sum_\Lambda\langle L_\Lambda(v),\mu\rangle. \tag{4.25}$$

The right side is equal to the sum of K(F, v, µ) over all frames F with the exit set {s_1,...,s_n}. We split this sum into the parts corresponding to each partition of F into connected components F_1,...,F_r, which are in 1-1 correspondence with the elements of P(C). To prove (4.25) it is sufficient to note that the z_{C_i} = K(F_i, v_{C_i}, µ) satisfy the condition (4.4). The first part of (4.4) holds because a diagram Λ with a single exit {i} consists of a single arrow a : s → s′ and the output of Λ corresponding to the input v_i is equal to K^u_D v_i. The second part of (4.4) follows from (4.23). □

5. Infinitely divisible random measures

Another tool needed to get the absolute continuity theorem is the Poisson representation of exit measures.

5.1. Laplace functionals of infinitely divisible measures. Suppose that (E, B) is a measurable space and let M = M(E) be the space of all finite measures on E. We denote by B_M the σ-algebra in M generated by the functions F(ν) = ν(B), B ∈ B. We say that (X, P) is a random measure on E if X is a measurable mapping from (Ω, F, P) to (M, B_M) and P is a probability measure on (Ω, F). We say that (X, P) is infinitely divisible if, for every n, there exist independent identically distributed random measures (X_1, P),...,(X_n, P) such that X_1 + ... + X_n has the same probability distribution as X.

The probability distribution P of (X, P) is a measure on (M, B_M) defined by the formula P(C) = P{X ∈ C}. It is determined uniquely by the Laplace functional

$$L_X(f) = P e^{-\langle f,X\rangle} = \int_M e^{-\langle f,\nu\rangle}\,P(d\nu), \qquad f\in B_+. \tag{5.1}$$

[B_+ stands for the class of all positive B-measurable functions.] X is infinitely divisible if and only if, for every n, there exists a random measure (Y, P) such that L_X = L_Y^n. It follows from (2.2) that this condition holds for the exit measures (X_D, P_µ). Hence these measures are infinitely divisible. It is clear that a random measure is infinitely divisible if its Laplace functional has the form

$$L_X(f) = \exp\Big[-\langle f,\gamma\rangle - \int_M \big(1-e^{-\langle f,\nu\rangle}\big)\,R(d\nu)\Big] \tag{5.2}$$

where γ is a measure on E and R is a measure on M. Suppose (E, B) is a Luzin space.⁸ Then the Laplace functional of an arbitrary infinitely divisible random measure X has the form (5.2) (see, e.g., [Kal77] or [Daw93]). We can assume that R{0} = 0.

It follows from (5.1) and (5.2) that, for every constant λ > 0,

$$\lambda\langle 1,\gamma\rangle + \int_M \big(1-e^{-\lambda\langle 1,\nu\rangle}\big)\,R(d\nu) \le -\log P e^{-\lambda\langle 1,X\rangle}.$$

The right side tends to −log P{X = 0} as λ → ∞. We arrive at the following result.

⁸ That is, there exists a 1-1 mapping from E onto a Borel subset Ẽ of a compact metric space such that the sets of B correspond to the Borel subsets of Ẽ.


Theorem 5.1. If a random measure (X, P) on a Luzin space E satisfies the condition

$$P\{X = 0\} > 0, \tag{5.3}$$

then there exists a finite measure R on M such that:

(a) R{0} = 0;
(b) for every f ∈ B_+,

$$Pe^{-\langle f,X\rangle} = \exp\Big[-\int_M \big(1-e^{-\langle f,\nu\rangle}\big)\,R(d\nu)\Big]. \tag{5.4}$$

By using the multiplicative systems theorem, it is easy to prove that R is determined uniquely by the conditions (a)–(b). We call R the canonical measure for (X, P).

5.2. Poisson random measure.

Theorem 5.2. Suppose that R is a finite measure on a measurable space (S, B). Then there exists a random measure (Y, Q) on S with the properties:

(a) Y(B_1),...,Y(B_n) are independent for disjoint B_1,...,B_n;
(b) Y(B) is a Poisson random variable with the mean R(B), i.e.,

$$Q\{Y(B) = n\} = \frac{1}{n!}\,R(B)^n e^{-R(B)} \quad\text{for } n = 0,1,2,\dots.$$

The Laplace functional of (Y, Q) has the expression

$$L_Y(F) = \exp\Big[-\int_S \big(1-e^{-F(z)}\big)\,R(dz)\Big] \quad\text{for } F\in B_+. \tag{5.5}$$

Proof. Consider independent identically distributed random elements Z_1,...,Z_n,... of S with the probability distribution R̃(B) = R(B)/R(S). Let N be a Poisson random variable with mean value R(S), independent of Z_1,...,Z_n,.... Put Y(B) = 1_B(Z_1) + ... + 1_B(Z_N). A simple computation shows that, if F = F_1 + ... + F_n with F_1,...,F_n ∈ B_+ and F_iF_j = 0 for i ≠ j, then

$$Qe^{-\langle F,Y\rangle} = \exp\Big[\sum_{i=1}^n\int_S \big(e^{-F_i(z)}-1\big)\,R(dz)\Big].$$

All the statements of the theorem follow from this formula. □

We conclude from (5.5) that (Y, Q) is an infinitely divisible random measure. It is called the Poisson random measure with intensity R. It is an integer-valued measure concentrated on a finite random set.
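The constructive proof above is directly simulatable. A small numerical sketch (our own illustration; the choice S = [0, 1] with R = 3 · Lebesgue is hypothetical):

```python
import numpy as np

# Construction from the proof of Theorem 5.2: sample N ~ Poisson(R(S)), then
# i.i.d. atoms Z_1, ..., Z_N with law R / R(S), and set Y(B) = #{i : Z_i in B}.
rng = np.random.default_rng(42)
c = 3.0                                   # total mass R(S)

def sample_atoms():
    N = rng.poisson(c)
    return rng.uniform(0.0, 1.0, size=N)  # atom locations of Y

# Property (b) empirically: Y(B) should have mean R(B) for B = [0, 1/2].
samples = [sample_atoms() for _ in range(20000)]
mean_YB = np.mean([np.sum(atoms < 0.5) for atoms in samples])
assert abs(mean_YB - c * 0.5) < 0.1
```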

5.3. Poisson representation of infinitely divisible measures.

Theorem 5.3. Let (X, P) be an infinitely divisible random measure on a Luzin space E with the canonical measure R. Consider the Poisson random measure (Y, Q) on S = M(E) with intensity R and put X̃(B) = ∫_M ν(B)\,Y(dν). The random measure (X̃, Q) has the same probability distribution as (X, P) and we have

$$P\{X\in C\} = Q\{\tilde X\in C\}
= \sum_{n=0}^\infty \frac{1}{n!}\,e^{-R(M)}\int R(d\nu_1)\dots R(d\nu_n)\,1_C(\nu_1+\dots+\nu_n). \tag{5.6}$$


Proof. Note that 〈f, X̃〉 = 〈F, Y〉 where F(ν) = 〈f, ν〉 and, by (5.5), we get

$$L_{\tilde X}(f) = Qe^{-\langle f,\tilde X\rangle} = Qe^{-\langle F,Y\rangle}
= \exp\Big[-\int_M \big(1-e^{-\langle f,\nu\rangle}\big)\,R(d\nu)\Big] = L_X(f).$$

This implies the first part of the theorem. The second part follows from the expression Y(B) = 1_B(Z_1)+...+1_B(Z_N) for Y introduced in the proof of Theorem 5.2. □

6. Absolute continuity results for exit measures

6.1.

Theorem 6.1. Suppose that ψ satisfies conditions 2.1.A–2.1.B_n, a domain D ⋐ E is bounded and smooth and u ∈ B^+(E). Then, for every v ∈ B((∂D)^n) and every µ ∈ M_c(D),

(6.1)  P_µ e^{−〈u,X_D〉} ∫_{E^n} v(z_1, . . . , z_n) X_D(dz_1) . . . X_D(dz_n)
     = e^{−〈V_D(u),µ〉} Σ_Λ ∫_{E^n} ρ^Λ_D(µ; z_1, . . . , z_n) v(z_1, . . . , z_n) σ(dz_1) . . . σ(dz_n)

where Λ runs over all distinguishable u,D-diagrams with the ordered exit set (s_1, . . . , s_n), ρ^Λ_D > 0 and σ is the surface area on ∂D.

Proof. By the multiplicative systems theorem, it is sufficient to prove (6.1) for v(z_1, . . . , z_n) = v_1(z_1) . . . v_n(z_n).

Note that

G^u_D v(x) = ∫_D g^u_D(x, y) v(y) dy,   K^u_D v(x) = ∫_{∂D} k^u_D(x, z) v(z) σ(dz)     (6.2)

where g^u_D(x, y) is Green's function and k^u_D(x, z) is the Poisson kernel for ξ^u in D. Both g^u_D and k^u_D are strictly positive.

The set A of arrows of a diagram Λ is the union of four sets A_±, A_−, A_0, A_+ where

A_± = {a : s → s′ with s ∈ S_−, s′ ∈ S_+},   A_− = {a : s → s′ with s ∈ S_−, s′ ∈ S_0},
A_0 = {a : s → s′ with s ∈ S_0, s′ ∈ S_0},   A_+ = {a : s → s′ with s ∈ S_0, s′ ∈ S_+}.     (6.3)

The set A_± consists of arrows which are at the same time entrances and exits. Each element of this set is a connected component of the graph. Arrows which are neither exits nor entrances form the set A_0.

Let a : s → s′. Denote the labels of s and s′ by

x_s, z_{s′} if a ∈ A_±;   x_s, y_{s′} if a ∈ A_−;   y_s, z_{s′} if a ∈ A_+;   y_s, y_{s′} if a ∈ A_0.     (6.4)

E. B. Dynkin 15

It follows from (4.22) that

(6.5)  〈L_Λ(v), µ〉 = ∏_{a∈A_±} ∫ µ(dx_s) k^u_D(x_s, z_{s′}) ∏_{a∈A_−} ∫ µ(dx_s) g^u_D(x_s, y_{s′}) ∏_{a∈A_+} k^u_D(y_s, z_{s′}) ∏_{s∈S_0} q_{a_−(s)}(y_s) dy_s ∏_{a∈A_0} g^u_D(y_s, y_{s′}) > 0.

Therefore (6.1) follows from (4.24).

Corollary 6.1. Consider the measures

σ^n(dz_1 . . . dz_n) = σ(dz_1) . . . σ(dz_n)

and

Φ^n_D(µ, C) = P_µ ∫ X_D(dz_1) . . . X_D(dz_n) 1_C(z_1, . . . , z_n)

on (∂D)^n. If D is bounded and smooth, then, for every µ ∈ M_c(D), Φ^n_D(µ, C) = 0 if and only if σ^n(C) = 0.

To prove this statement, it is sufficient to apply (6.1) to u = 0 and v = 1_C.

Theorem 6.2. If ψ satisfies the conditions 2.1.A–2.1.B_∞ and if D is an arbitrary domain, then the class of null sets of the measure

P_D(µ, C) = P_µ{X_D ∈ C}     (6.6)

is the same for all µ ∈ M_c(D).

Proof. It follows from (2.8) that, if D̃ ⊂ D, then

P_D(µ, C) = P_µ P_D(X_{D̃}, C).     (6.7)

Formula (2.2) implies that (X_D, P_µ) is an infinitely divisible measure. Denote by R_D(µ, ·) its canonical measure and put

Z_D(µ) = e^{−R_D(µ,M)},

f^n_D(x_1, . . . , x_n; C) = ∫ R(x_1, dν_1) . . . R(x_n, dν_n) 1_C(ν_1 + · · · + ν_n)     (6.8)

where R(x, ·) = R_D(δ_x, ·). It follows from (2.2) that, for every µ ∈ M,

R_D(µ, B) = ∫_E R(x, B) µ(dx).     (6.9)

By (5.6), (6.9) and (6.8),

P_D(µ, C) = Σ_{n=0}^∞ (1/n!) Z_D(µ) ∫ µ(dx_1) . . . µ(dx_n) f^n_D(x_1, . . . , x_n; C).     (6.10)

Every µ ∈ M_c(D) is concentrated on the closure of a bounded smooth domain D̃ ⋐ D. By (6.10) and (6.7),

P_D(µ, C) = Σ_{n=0}^∞ (1/n!) P_µ Z_D(X_{D̃}) ∫ X_{D̃}(dx_1) . . . X_{D̃}(dx_n) f^n_D(x_1, . . . , x_n; C).     (6.11)

The condition P_D(µ, C) = 0 is equivalent to the condition: for every n,

Z_D(X_{D̃}) ∫ X_{D̃}(dx_1) . . . X_{D̃}(dx_n) f^n_D(x_1, . . . , x_n; C) = 0   P_µ-a.s.

16 Harmonic functions and exit boundaries

Since Z_D > 0, this is equivalent to the condition: for every n,

∫ X_{D̃}(dx_1) . . . X_{D̃}(dx_n) f^n_D(x_1, . . . , x_n; C) = 0   P_µ-a.s.     (6.12)

Let σ be the surface area on ∂D̃. By Corollary 6.1, the condition (6.12) holds if and only if f^n_D = 0 σ^n-a.e. Hence the class of C such that P_D(µ, C) = 0 does not depend on µ ∈ M_c(D).

6.2. The following result follows at once from Theorem 6.2.

Theorem 6.3. Put H_+(X) = H(X) \ {0}. If H ∈ H_+(X), then H(µ) > 0 for all µ ∈ M_c(E).

Proof. The condition H(µ) = 0 is equivalent to the condition P_D(µ, C) = 0 where C = {ν : H(ν) = 0}.

For every c ∈ E, we have a unique representation of H ∈ H_+(X) in the form H = λH_0 where H_0 ∈ H(X, c) and λ is a constant. Therefore the formula (3.6) implies: every H ∈ H(X) has a unique representation

H(µ) = ∫_{Γ_c} K_γ(µ) m(dγ)     (6.13)

where K_γ is an extreme element of H(X, c) corresponding to γ ∈ Γ_c and m is a finite measure on Γ_c.

We say that H ∈ H_+(X) is an extreme element of H(X), and we write H ∈ H_e(X), if the conditions H̃ ≤ H, H̃ ∈ H(X) imply that H̃ = const · H. If H ∈ H_e(X), then λH ∈ H_e(X) for every λ > 0. For an arbitrary c ∈ E and every H ∈ H_e(X), the ray {λH} intersects H_e(X, c) at a single point. Therefore we can interpret the exit space of X as the set Γ of rays in H_e(X).

7. Functions H_u and H_{u,v}

7.1. We denote by U the class of all positive functions u ∈ C^2(E) such that

Lu = ψ(u) in E.     (7.1)

Assume that ψ satisfies conditions:

7.1.A. For every x, ψ(x, ·) is convex, ψ(x, 0) = 0 and ψ(x, u) > 0 for u > 0.

7.1.B. ψ(x, u) is continuously differentiable.

7.1.C. ψ is locally Lipschitz continuous in u, uniformly in x: for every t ∈ R_+, there exists a constant c_t such that

|ψ(x, u_1) − ψ(x, u_2)| ≤ c_t |u_1 − u_2| for all x ∈ E, u_1, u_2 ∈ [0, t].
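For orientation, the quadratic branching mechanism used in Section 8 satisfies all three conditions (a simple verification, not spelled out in the paper): with ψ(x, u) = u²/2 we have ψ(x, 0) = 0, ψ(x, ·) is convex and positive for u > 0, ψ′(x, u) = u is continuous, and

|ψ(x, u_1) − ψ(x, u_2)| = ½ |u_1 + u_2| |u_1 − u_2| ≤ t |u_1 − u_2| for all x ∈ E, u_1, u_2 ∈ [0, t],

so 7.1.C holds with c_t = t.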

Under these conditions, U coincides with the class of positive locally bounded functions u which satisfy the condition V_D(u) = u for all D ⋐ E.⁹ By comparing (2.2) and (2.9), we get:

Theorem 7.1. If u ∈ U, then the function

H_u(µ) = e^{−〈u,µ〉}     (7.2)

is X-harmonic.

⁹ See [Dyn02], Section 2 of Chapter 8.

E. B. Dynkin 17

Proof. By (2.2),

P_µ H_u(X_D) = e^{−〈V_D(u),µ〉} = e^{−〈u,µ〉} = H_u(µ)

and therefore H_u is X-harmonic.

The function 1 − H_u is also X-harmonic.

7.2. To construct a larger family of X-harmonic functions related to u ∈ U, we use the Green's operator of ξ^u in E, which can be described by the formula

G^u_E v(x) = ∫_0^∞ dt ∫_E p^u_t(x, dy) v(y)     (7.3)

through the transition function p^u_t(x, dy) of ξ^u, or by the formula¹⁰

G^u_E v(x) = Π_x ∫_0^ζ exp[−∫_0^t u(ξ_s) ds] v(ξ_t) dt     (7.4)

in terms of the diffusion ξ (ζ is the death time of ξ).

Theorem 7.2. Suppose that u ∈ U and v_1, . . . , v_n are positive solutions of the equation

Lv = ψ′(u)v in E.     (7.5)

Define z_C recursively by the formula

z_C = v_i for C = {i},
z_C = G^u_E[ Σ_{r≥2} q_r Σ_{P_r(C)} z_{C_1} . . . z_{C_r} ] for |C| > 1     (7.6)

with q_r given by (4.5). If

H_{u,v}(µ) = e^{−〈u,µ〉} Σ_{P(C)} 〈z_{C_1}, µ〉 . . . 〈z_{C_r}, µ〉 < ∞ for all µ ∈ M_c(E),     (7.7)

then H_{u,v} is an X-harmonic function.

Remark. Condition (7.5) is equivalent to the condition

K^u_D v = v for all D ⋐ E.     (7.8)

This follows, for instance, from Theorem 6.3.1 in [Dyn02].

Proof. Consider a sequence of domains D_n exhausting E and put

Y_n = e^{−〈u,X_{D_n}〉} ∏_{i∈C} 〈v_i, X_{D_n}〉

and

H_n(µ) = P_µ Y_n.

By Theorem 4.1,

H_n(µ) = e^{−〈u,µ〉} Σ_{P(C)} 〈z^n_{C_1}, µ〉 . . . 〈z^n_{C_r}, µ〉     (7.9)

¹⁰ Cf. (4.1).


where z^n_C = K^u_{D_n} v_i = v_i for C = {i} and

z^n_C = G^u_{D_n}[ Σ_{r≥2} q_r Σ_{P_r(C)} z^n_{C_1} . . . z^n_{C_r} ] for |C| > 1.     (7.10)

Note that G^u_{D_n} f_n ↑ G^u_E f if f_n ↑ f. We conclude from (7.10) that z^n_C ↑ z_C where z_C satisfy (7.6). By (7.9), H_n ↑ H_{u,v}. Since H_n is X-harmonic in D_n, H_{u,v} is X-harmonic in E.

Remark. We say that Λ is a u,E-diagram if q_r are given by (4.5), the exit arrows are labeled by the identity operator I and the rest of the arrows are labeled by G^u_E. Formula (7.7) implies the following diagram expression for H_{u,v}:

H_{u,v}(µ) = e^{−〈u,µ〉} Σ_Λ 〈L_Λ(v), µ〉     (7.11)

where Λ runs over all distinguishable u,E-diagrams with the ordered exit set (s_1, . . . , s_n) [cf. (4.25)].

7.3. If sup_x Π_x ζ < ∞ (in particular, if E is a bounded domain), then G^u_E f is bounded for every bounded f. Therefore H_{u,v} is bounded for every bounded v. For unbounded v, we have the following result:

Theorem 7.3. Let E be a bounded smooth domain and let

v_i(x) = k^u_E(x, z_i), i = 1, . . . , n     (7.12)

where z_1, . . . , z_n are distinct points of ∂E. Then H_{u,v}(µ) < ∞ for all µ ∈ M_c(E). Moreover, for every ε > 0, H_{u,v} is bounded on the set M(E_ε) where E_ε = {x : d(x, ∂E) > ε}.

Proof. 1. Put k(x, z) = k^0_E(x, z) and g(x, y) = g^0_E(x, y). It follows from (4.1), (4.2) and (6.2) that k^u_E(x, z_i) ≤ k(x, z_i) and g^u_E(x, y) ≤ g(x, y). Therefore it is sufficient to prove our statement for the case u = 0. We need only to check that L_Λ(v) is bounded on E_ε for every u,E-diagram Λ with the ordered exit set (s_1, . . . , s_n).

Put t ≺ s if there exist arrows a_1 : s → t_1, a_2 : t_1 → t_2, . . . , a_k : t_{k−1} → t. For every s ∈ S \ S_+, denote by A(s) the set of all exits t ≺ s. For s ∈ S_+, put A(s) = {s}. We show that, for every s ∈ S,

ℓ(s)(x) ≤ C ∏_{s_i∈A(s)} |x − z_i|^{1−d}     (7.13)

(C depends on z_1, . . . , z_n).

2. We use the following bounds for the Poisson kernel and Green's function (see, e.g., [Dyn02]): for all x ∈ E, z ∈ ∂E,

k(x, z) ≤ C d(x, ∂E) |x − z|^{−d}     (7.14)

and, for all x, y ∈ E,

g(x, y) ≤ C Γ(x − y)     (7.15)

where C is a constant depending only on E and the operator L, and

Γ(x) = |x|^{2−d} for d ≥ 3,  (−log |x|) ∨ 1 for d = 2,  1 for d = 1.     (7.16)


The bound (7.14) implies

k(x, z) ≤ C|x − z|^{1−d}.     (7.17)

Hence (7.13) holds for s ∈ S_+. We prove that (7.13) holds for s if it holds for all s′ ≺ s. To this end we use the following well-known bound for convolutions of negative powers:¹¹ for all constants R > 0 and a, b > 0 such that a + b > d, there exists a constant C such that

∫_{|y−z|≤R} |x − y|^{−a} |y − z|^{−b} dy ≤ C|x − z|^{d−a−b}.     (7.18)

By (7.15) and (7.17),

∫_E g(x, y)|y − z|^{1−d} dy ≤ C|x − z|^{1−d} for x ∈ E, z ∈ ∂E.     (7.19)

This implies that, for all distinct points z_1, . . . , z_k ∈ ∂E,

∫_E g(x, y) ∏_{i=1}^k |y − z_i|^{1−d} dy ≤ C Σ_{i=1}^k |x − z_i|^{1−d}.     (7.20)

Indeed, (7.20) is true for k = 1 by (7.19). If the minimal distance between two distinct points z_i, z_j is equal to δ > 0, then every y ∈ E is at distance ≥ δ/2 from all but at most one of the points z_i, and therefore (7.20) holds for k if it holds for k − 1.

Suppose that s ∉ S_+ and let a_1 : s → t_1, . . . , a_r : s → t_r be the set of all arrows starting from s. It follows from (4.22) and (7.20) that (7.13) is true for s if it is true for t_1, . . . , t_r.
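The convolution bound (7.18) can be probed numerically. The sketch below (an illustrative check in dimension d = 1 with a = b = 0.6, so a + b > d; the parameters are our own choices, not from the paper) approximates the left side by midpoint quadrature and verifies that the integral stays finite and decreases as the separation |x − z| grows, in line with the bound C|x − z|^{d−a−b}.

```python
def conv_integral(x, z=0.0, R=0.4, a=0.6, b=0.6, n=100000):
    # Midpoint-rule approximation of the left side of (7.18) in d = 1:
    #   integral over |y - z| <= R of |x - y|**(-a) * |y - z|**(-b) dy.
    # Midpoints never hit the singularity at y = z (n is even), and x is
    # chosen outside [z - R, z + R], so every term is finite.
    h = 2 * R / n
    total = 0.0
    for i in range(n):
        y = z - R + (i + 0.5) * h
        total += abs(x - y) ** (-a) * abs(y - z) ** (-b) * h
    return total

# The integrand decreases pointwise in |x - z| once x lies outside the ball,
# so the approximations must decrease as well.
vals = {x: conv_integral(x) for x in (0.5, 1.0, 2.0)}
```

Because R is fixed and x lies outside the ball of integration, the integral here decays roughly like |x − z|^{−a}, comfortably within the bound; the worst case covered by (7.18) is x close to z.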

7.4. For an arbitrary domain E we have the following alternative:

Theorem 7.4. Either H_{u,v}(µ) < ∞ for all µ ∈ M_c(E), or H_{u,v}(µ) = ∞ for all non-zero µ ∈ M_c(E).

Proof. 1. The strong maximum principle¹² implies that, if a positive solution of the equation (7.5) vanishes at some point of E, then it vanishes everywhere. If v_i = 0 for some i, then H_{u,v}(µ) = 0 for all µ. Therefore, without any loss of generality, we can assume that all v_i are strictly positive.

2. For every positive locally bounded function f, G^u_E f is either locally bounded or identically equal to ∞ (see, e.g., [Dyn98], Theorem 7.2). Since L(a) is equal either to I or to G^u_E, an analogous statement is true for all operators L(a).

3. If H_{u,v}(µ_0) < ∞, then, for all Λ in (7.7), 〈L_Λ(v), µ_0〉 < ∞ and therefore ℓ(s)(x) < ∞ for all s ∈ S_− and µ_0-almost all x. If µ_0 ≠ 0, then there exists x ∈ E such that ℓ(s)(x) < ∞ for all s ∈ S_−.

¹¹ See, e.g., [Lan77], formula 1.1.3.
¹² See, e.g., Theorem 3.5 in [GT98].


4. Fix Λ and v, and put s ∈ S′ if ℓ(s) is locally bounded and s ∈ S′′ if ℓ(s) is identically equal to ∞. Suppose that a_i : s → s_i, i = 1, . . . , r, is the set of all arrows starting from s. By (4.22), s ∈ S′′ if s_i ∈ S′′ for some i and, by 2, s ∈ S′ if s_i ∈ S′ for all i. Hence S′ ∪ S′′ contains s if it contains s_1, . . . , s_r. Since S_+ ⊂ S′, we conclude that S ⊂ S′ ∪ S′′.

5. If, for some diagram Λ with input v, S_− ∩ S′′ ≠ ∅, then, for every µ ≠ 0, 〈L_Λ(v), µ〉 = ∞ and therefore H_{u,v}(µ) = ∞. If S_− ∩ S′′ = ∅ for all diagrams with input v, then H_{u,v}(µ) < ∞ for all µ.

7.5. Which of the X-harmonic functions described above can be extreme?

Clearly, H_h(µ) = 〈h, µ〉 is not extreme if the L-harmonic function h is not extreme. If E = R^d and L = ∆, then the constants are the only positive L-harmonic functions and therefore h = 1 is extreme. In the next section we show that H_1(µ) = 〈1, µ〉 is extreme in the case ψ(u) = u²/2. Extreme L-harmonic functions in a bounded smooth domain E are described by the formula h(x) = k_E(x, y), y ∈ ∂E. We do not know if the corresponding X-harmonic functions are extreme.

Keller [Kel57] and Osserman [Oss57] proved that, for a wide class of functions ψ, there exists a maximal element u_max of U. Among the functions λH_u, u ∈ U, only the functions λH_{u_max} can be extreme. Indeed, if u ∈ U, then u ≤ u_max and therefore H_{u_max} ≤ H_u. If H_u is extreme, then H_{u_max} = const · H_u and therefore H_u = λH_{u_max}.

We claim that none of the functions H̄_u = 1 − H_u, u ∈ U, is extreme. First, H_0 = 1 is not extreme. Indeed, H_u ≤ 1 for every u ∈ U; assuming that 1 is extreme, we would get that H_u = const for all u ∈ U, which is not true. Suppose H̄_u is extreme for some u ∈ U \ {0}. Then the relation H̄_{u/2} ≤ H̄_u would imply H̄_{u/2} = const · H̄_u, which is false because the functions e^{−t/2}, e^{−t}, 1, with t taking values in any open interval, are linearly independent.

The functions H_{u,v} cannot be extreme unless u is the maximal element of U and v_i(x) = k^u_E(x, y_i) for some y_1, . . . , y_n ∈ ∂E. A challenging problem is to explore whether they are extreme and, if not, to find their decomposition into extreme functions.

7.6. Conditioning determined by the functions H_{u,v} was studied by Salisbury and Verzani in [SaV99] and [SaV00]. Earlier, Overbeck investigated (in [Ove93], [Ove94]) H-transforms for H(µ) = 〈h, µ〉 corresponding to L-harmonic functions h. If h = 1, then this is the "conditioning on non-extinction" studied by Roelly-Coppoletta and Rouault [RR89], Evans [Eva93], Evans and Perkins [EP90], [EP91] and Etheridge [Eth93].

8. Example of an extreme X-harmonic function

8.1. In this section we prove the following theorem due to Evans [Eva93].¹³

Theorem 8.1. Suppose that X is an (L, ψ)-superdiffusion in E = R^d with ψ(u) = u²/2. Then the total mass H(µ) = 〈1, µ〉 is an extreme X-harmonic function.

To prove this theorem, we fix a sequence of domains D_n exhausting E and we consider the Markov chain (X_n, P_{r,µ}) associated with the superdiffusion X. Let F = j(H) be the exit law corresponding to H (see section 3.6). By Remark 3.1, Theorem 8.1 will be proved if we show that P^F_{r,µ}(C) = 0 or 1 for all tail sets C.

¹³ The proof is rather involved. We simplify it a little by time discretization, which makes it possible to avoid the regularity problems for the paths.


To do this, we construct a Markov chain (W_n = Z_n × η_n, P_{r,w}) in M_n × E_n with the following properties:

8.1.A. The transition function P of W_n and the transition function P of X_n are related by the formula

P^F(r, µ; n, B) = ∫_{E_r} µ̄(dx) P(r, µ × x; n, B × E_n)     (8.1)

where µ̄ = µ/〈1, µ〉.

8.1.B. For every µ ∈ M_r, (η_n, P_{r,µ×x}) is a Markov process with the transition function (3.4).

8.2. The construction is based on two lemmas.¹⁴

Lemma 8.1. Suppose that, for every n = 0, 1, . . . , a measurable mapping ϕ_n from a measurable space S_n to a measurable space S′_n is given. Let Λ_n(y, ·) be a Markov kernel from S′_n to S_n such that, for every y ∈ S′_n, the measure Λ_n(y, ·) is concentrated on ϕ_n^{−1}(y). Suppose that a Markov transition function p in {S_n} is related to a Markov transition function q in {S′_n} by the formula

q(r, y; n, B) = ∫_{S_r×S_n} Λ_r(y, dx) p(r, x; n, dx̃) 1_B[ϕ_n(x̃)].     (8.2)

If

∫_{S_r} Λ_r(y, dx) p(r, x; n, B) = ∫_{S′_n} q(r, y; n, dỹ) Λ_n(ỹ, B),     (8.3)

then the Markov processes (X_n, P_{r,x}) and (Y_n, Q_{r,y}) corresponding to p and q are related by the formula

Q_{r,y} f_r(Y_r) . . . f_n(Y_n) = P*_{r,y} f_r(ϕ_r(X_r)) . . . f_n(ϕ_n(X_n))     (8.4)

where 0 ≤ r < n, f_i is a positive measurable function on S′_i and

P*_{r,y} = ∫ Λ_r(y, dx) P_{r,x}.

Proof. Put

Λ_n g(y) = ∫ Λ_n(y, dx) g(x).

First we prove, by induction on n − r, that, for every positive measurable function g,

Q_{r,y} f_r(Y_r) . . . (f_nΛ_ng)(Y_n) = P*_{r,y} f_r(ϕ_r(X_r)) . . . f_n[ϕ_n(X_n)] g(X_n).     (8.5)

Then we will get (8.4) by taking g = 1.

Since the measure Λ_n(y, ·) is concentrated on the set {x : ϕ_n(x) = y}, we have

∫ Λ_n(y, dx) f_n(ϕ_n(x)) g(x) = ∫ Λ_n(y, dx) f_n(y) g(x) = f_n(y) ∫ Λ_n(y, dx) g(x)

¹⁴ Lemma 8.1 is due to Rogers and Pitman (see Lemma 2 in [RP81]). Formula (8.4) implies that ϕ_n(X_n), n ≥ r, is a Markov process with the transition function q if X has the initial distribution Λ_r(y, ·).


which implies

Q_{r,y}(f_nΛ_ng)(Y_n) = Λ_rF_r(y)     (8.6)

where

F_r(x) = P_{r,x} f_n[ϕ_n(X_n)] g(X_n).

For n − r = 0, formula (8.5) follows immediately from (8.6). For n − r > 0, by the Markov property of Y,

(8.7)  Q_{r,y} f_r(Y_r) . . . (f_nΛ_ng)(Y_n) = Q_{r,y} f_r(Y_r) . . . f_{n−1}(Y_{n−1}) Q_{n−1,Y_{n−1}}(f_nΛ_ng)(Y_n).

By (8.6), the right side in (8.7) is equal to

Q_{r,y} f_r(Y_r) . . . (f_{n−1}Λ_{n−1}F_{n−1})(Y_{n−1}).     (8.8)

By the induction hypothesis, this is equal to

P*_{r,y} f_r(ϕ_r(X_r)) . . . f_{n−1}[ϕ_{n−1}(X_{n−1})] F_{n−1}(X_{n−1})

and (8.5) holds by the Markov property of (X_n, P_{r,x}).
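Lemma 8.1 can be illustrated on a small finite example (our own choice, not taken from the paper): let X_n be simple random walk on the hypercube {0,1}³ (one uniformly chosen coordinate is flipped at each step), let ϕ(x) be the Hamming weight, and let Λ(k, ·) be the uniform distribution on the fiber ϕ^{−1}(k). By symmetry, the intertwining condition (8.3), which here reads Λp = qΛ, holds exactly, and the projected kernel q defined by (8.2) is the Ehrenfest chain.

```python
from itertools import product

states = list(product((0, 1), repeat=3))   # S = {0,1}^3
weights = range(4)                         # S' = {0,1,2,3}, phi(x) = sum(x)

def p(x, x2):
    # One step of the walk: flip exactly one of the 3 coordinates.
    return 1 / 3 if sum(a != b for a, b in zip(x, x2)) == 1 else 0.0

def Lam(k, x):
    # Markov kernel Lambda(k, .): uniform on the fiber phi^{-1}(k).
    fiber = [s for s in states if sum(s) == k]
    return 1 / len(fiber) if sum(x) == k else 0.0

def q(k, k2):
    # Projected kernel defined by formula (8.2).
    return sum(Lam(k, x) * p(x, x2)
               for x in states for x2 in states if sum(x2) == k2)

def lam_p(k, x):   # (Lambda p)(k, x)
    return sum(Lam(k, x1) * p(x1, x) for x1 in states)

def q_lam(k, x):   # (q Lambda)(k, x)
    return sum(q(k, k2) * Lam(k2, x) for k2 in weights)
```

Here q(k, k + 1) = (3 − k)/3 and q(k, k − 1) = k/3, so the image ϕ(X_n) is an autonomous Markov chain, exactly as formula (8.4) predicts.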

Lemma 8.2. Let V_D be the transition operator of an (L, ψ)-superdiffusion in E. For every D ⋐ E and every x ∈ E, there exists a random point (Y_D × η_D, Q_x) in the space M × E such that

Q_x e^{−〈u,Y_D〉} v(η_D) = Π_x exp[−∫_0^{τ_D} V_D(u)(ξ_s) ds] v(ξ_{τ_D})     (8.9)

for all u, v ∈ B^+.

Proof. By (2.2) and (5.4),

V_D(u)(x) = ∫_M (1 − e^{−〈u,ν〉}) R_D(x, dν)     (8.10)

where R_D is the canonical measure for (X_D, P_x). Fix a continuous path z = {z_s, 0 ≤ s ≤ a} such that z_s ∈ D for s ∈ [0, a) and z_a ∈ ∂D, and consider a Poisson random measure (N, P^z) on [0, a] × M with intensity n^z(ds, dν) = R_D(z_s, dν) ds. Put Λ(C) = N([0, a] × C). Note that (Λ, P^z) is a Poisson random measure on M with intensity

λ(C) = ∫_0^a R_D(z_s, C) ds.

By (5.5),

P^z exp[−∫_M F(ν) Λ(dν)] = exp[−∫_M (1 − e^{−F(ν)}) λ(dν)].

If F(ν) = 〈u, ν〉, then

∫_M F(ν) Λ(dν) = 〈u, Y_D〉

where

Y_D(B) = ∫_0^a ∫_M N(ds, dν) ν(B).


By (8.10) and Fubini's theorem,

∫_M (1 − e^{−F(ν)}) λ(dν) = ∫_0^a V_D(u)(z_s) ds.

Hence

P^z e^{−〈u,Y_D〉} = exp[−∫_0^a V_D(u)(z_s) ds].

Put η_D = ξ_{τ_D} and introduce a measure Q_x on M × E by the formula

Q_x(C) = Π_x P^ξ(C_{η_D})

where C_y = {ν : (ν, y) ∈ C}. For every positive measurable function f on M × E,

∫ f(ν, y) Q_x(dν, dy) = Π_x ∫ f(ν, η_D) P^ξ(dν).

In particular, for all u, v ∈ B^+,

∫ e^{−〈u,ν〉} v(y) Q_x(dν, dy) = Π_x exp[−∫_0^{τ_D} V_D(u)(ξ_s) ds] v(ξ_{τ_D})

and therefore (Y_D × η_D, Q_x) satisfies (8.9).

8.3. Proof of Theorem 8.1. 1. We use Lemma 8.2 to construct a process (W_n, P_{r,w}) subject to the conditions 8.1.A–8.1.B. Recall that D_n is a sequence of domains exhausting R^d and (X_n, P_{r,µ}) is a Markov chain associated with the superdiffusion X. Note that

P_{r,µ}{X_n = 0} → 1 as n → ∞.     (8.11)

Indeed, u_n(x) = −log P_x{X_{D_n} = 0} is a solution of the equation ∆u = u²/2 in D_n and the limit of u_n is a solution in the entire space R^d. Hence it is equal to 0 (this follows, for instance, from Lemma 3.1 in [Dyn91a]).

2. Consider the random points (Y_n × η_n, Q_x) in M_n × E_n corresponding to D_n by Lemma 8.2. Put V_n = V_{D_n}, τ_n = τ_{D_n}. We claim that, for all m < n and all u, v ∈ B^+,

Q_x e^{−〈u,Y_n〉} v(η_n) = Q_x e^{−〈ũ,Y_m〉} ṽ(η_m)     (8.12)

where

ũ = V_n(u),   ṽ(x) = Q_x e^{−〈u,Y_n〉} v(η_n).

Since V_m(ũ) = V_mV_n(u) = V_n(u), it follows from (8.9) and the strong Markov property of ξ that

Q_x e^{−〈ũ,Y_m〉} ṽ(η_m)
= Π_x[ exp(−∫_0^{τ_m} V_m(ũ)(ξ_s) ds) Π_{ξ_{τ_m}} exp(−∫_0^{τ_n} V_n(u)(ξ_s) ds) v(ξ_{τ_n}) ]
= Π_x[ exp(−∫_0^{τ_m} V_n(u)(ξ_s) ds) exp(−∫_{τ_m}^{τ_n} V_n(u)(ξ_s) ds) v(ξ_{τ_n}) ]

which implies (8.12).


3. Consider two sample spaces Ω_1 and Ω_2 and assume that the random points (Y_n × η_n, Q_x) are defined over Ω_1 and a superdiffusion X is defined over Ω_2. Put

Z_n(ω) = Y_n(ω_1) + X_n(ω_2),   η_n(ω) = η_n(ω_1),
Q_{µ,x}(dω) = Q_x(dω_1) P_µ(dω_2)     (8.13)

for ω = (ω_1, ω_2). Let W_n = Z_n × η_n and let

P(r, µ × x; n, B) = Q_{µ,x}{W_n ∈ B}.     (8.14)

It is easy to check, by using (2.2) and (8.12), that P is a Markov transition function in M_n × E_n. Let us demonstrate that the corresponding Markov process (W_n, P_{r,w}) satisfies conditions 8.1.A and 8.1.B.

By (8.13),

Q_{µ,x} e^{−〈u,Z_n〉} v(η_n) = Q_x e^{−〈u,Y_n〉} v(η_n) P_µ e^{−〈u,X_n〉}.     (8.15)

We take u = 0 and we get, by (8.9), that

Q_{µ,x} v(η_n) = Q_x v(η_n) = Π_x v(ξ_{τ_n}).     (8.16)

It follows from (8.16) that the transition function P and the transition function p given by (3.4) are related by the formula

P(r, µ × x; n, M_n × B) = p(r, x; n, B) for every µ.     (8.17)

Put γ(µ × x) = x. We have γ(Z_n × η_n) = η_n and, by Theorem 10.13 in [Dyn65], formula (8.17) implies 8.1.B.

Let us prove 8.1.A. It follows from (8.15), (2.2) and (8.9) that

(8.18)  Q_{µ,x} e^{−〈u,Z_n〉} = P_µ e^{−〈u,X_n〉} Q_x e^{−〈u,Y_n〉}
     = e^{−〈V_n(u),µ〉} Π_x exp[−∫_0^{τ_n} V_n(u)(ξ_s) ds].

By (3.2), (4.7) and (4.2),

(8.19)  〈1, µ〉 ∫ P^F(r, µ; n, dν) e^{−〈u,ν〉} = ∫ P(r, µ; n, dν) e^{−〈u,ν〉}〈1, ν〉
     = P_µ e^{−〈u,X_n〉}〈1, X_n〉 = e^{−〈V_n(u),µ〉} Π_µ exp[−∫_0^{τ_n} ũ(ξ_s) ds].

Since we assume that ψ(u) = u²/2, we have ψ′(u) = u and ũ = V_n(u). Therefore the right side in (8.19) can be obtained from the right side in (8.18) by integration with respect to µ and, because of (8.14),

〈1, µ〉 ∫ P^F(r, µ; n, dν) e^{−〈u,ν〉} = ∫_{E_r} Q_{µ,x} e^{−〈u,Z_n〉} µ(dx)
= ∫_{E_r} µ(dx) ∫ P(r, µ × x; n, dν × E_n) e^{−〈u,ν〉}.

This implies (8.1).

4. We claim that

Q_{µ,x} g(Z_n × η_n) = ∫ P_µ(dω_1) Q_{0,x}(dω_2) g[(X_n(ω_1) + Z_n(ω_2)) × η_n(ω_2)]     (8.20)

for every positive measurable function g on M_n × E_n. This follows from (8.15) for g(Z_n × η_n) = e^{−〈u,Z_n〉} v(η_n), and it is true in the general case by the multiplicative systems theorem.

By (8.14),

P_{r,µ×x} g(Z_n × η_n) = Q_{µ,x} g(Z_n × η_n)

and therefore formula (8.20) implies that

P_{r,µ×x} g(Z_n × η_n) = ∫ P_{r,µ}(dω_1) P_{r,0×x}(dω_2) g[(X_n(ω_1) + Z_n(ω_2)) × η_n(ω_2)].

Let C be a tail set of the chain Z_n × η_n. By the Markov property of this chain,

P_{r,µ×x}(C) = P_{r,µ×x} g_n(Z_n × η_n)

where g_n(ν, y) = P_{n,ν×y}(C). Therefore

P_{r,µ×x}(C) = I_n + J_n

where

I_n = ∫ P_{r,µ}(dω_1) P_{r,0×x}(dω_2) 1_{X_n(ω_1)=0} g_n[Z_n(ω_2) × η_n(ω_2)]
    = P_{r,µ}{X_n = 0} P_{r,0×x}(C)

and

J_n = ∫ P_{r,µ}(dω_1) P_{r,0×x}(dω_2) 1_{X_n(ω_1)≠0} g_n[(X_n(ω_1) + Z_n(ω_2)) × η_n(ω_2)]
    ≤ P_{r,µ}{X_n ≠ 0}

because g_n ≤ 1. By (8.11), I_n → P_{r,0×x}(C) and J_n → 0 as n → ∞. Therefore

P_{r,µ×x}(C) = P_{r,0×x}(C).     (8.21)

5. We apply Lemma 8.1 to S_n = M_n × E_n, the transition function p = P and the mappings ϕ_n(µ × x) = µ from S_n to S′_n = M_n. Put

Λ_n(µ, ·) = δ_µ × µ̄

where δ_µ is the unit measure on M concentrated at µ. For every positive measurable function f on S_n, we put

Λ_n(µ, f) = ∫_{S_n} Λ_n(µ, dν × dx) f(ν, x),
P(r, ν × x; n, f) = ∫_{S_n} P(r, ν × x; n, dν̃ × dx̃) f(ν̃ × x̃).     (8.22)

Note that

Λ_n(µ, f) = ∫_{E_n} f(µ × x) µ̄(dx).     (8.23)

If f(ν × x) = 1_B(ν), then

P(r, ν × x; n, f) = P(r, ν × x; n, B × E_n)

and, by applying (8.23) to f(ν × x) = P(r, ν × x; n, B × E_n), we get

∫_{S_r} Λ_r(µ, dν × dx) P(r, ν × x; n, B × E_n) = ∫_{E_r} P(r, µ × x; n, B × E_n) µ̄(dx).


By 8.1.A, the right side is equal to P^F(r, µ; n, B). Hence the relation (8.2) holds for p = P and q = P^F.

Let us check the condition (8.3). It is sufficient to show that

I = 〈1, µ〉 ∫ Λ_r(µ; dν × dx) P(r, ν × x; n, h × g)

is equal to

J = 〈1, µ〉 ∫ P^F(r, µ; n, dν) Λ_n(ν, h × g)

for u, g ∈ B^+(E) and h(ν) = e^{−〈u,ν〉}. By (8.23) applied to

f(ν × x) = P(r, ν × x; n, h × g),

we get

I = ∫_{E_r} P(r, µ × x; n, h × g) µ(dx)

and, by (8.14), (8.15), (2.2), (4.7) and (4.2), we obtain

(8.24)  I = ∫ Q_{µ,x}[h(Z_n) g(η_n)] µ(dx) = P_µ e^{−〈u,X_n〉} ∫ Q_x[e^{−〈u,Y_n〉} g(η_n)] µ(dx)
     = e^{−〈V_n(u),µ〉} Π_µ exp[−∫_0^{τ_n} V_n(u)(ξ_s) ds] g(ξ_{τ_n}).

On the other hand,

Λ_n(ν, h × g) = h(ν)〈g, ν〉/〈1, ν〉.

Recall that F_n(µ) = 〈1, µ〉. Therefore

J = ∫ P(r, µ; n, dν) h(ν)〈g, ν〉 = P_{r,µ} e^{−〈u,X_n〉}〈g, X_n〉.

We get from (3.5), (4.7) and (4.2) an expression for J identical with the right side in (8.24). Hence I = J.

6. Note that, for every bounded measurable function Φ,

(8.25)  P^F_{r,µ} Φ(X_r, X_{r+1}, . . . ) = ∫ Λ_r(µ, dν × dy) P_{r,ν×y} Φ(Z_r, Z_{r+1}, . . . ) = ∫ µ̄(dy) P_{r,µ×y} Φ(Z_r, Z_{r+1}, . . . ).

Since ϕ_n(Z_n × η_n) = Z_n, this follows from (8.4) for Φ = f_1(ν_1) f_2(ν_2) . . . . By applying the multiplicative systems theorem, we get the general formula.

7. Formula (8.25) means that the probability law of X_n, n ≥ r, under P^F_{r,µ} is the same as the probability law of Z_n, n ≥ r, under P*_{r,µ} = ∫ µ̄(dy) P_{r,µ×y}. Therefore, to prove that P^F_{r,µ}(C) = 0 or 1 for every tail set C, it is sufficient to establish that P*_{r,µ}(C) = 0 or 1. By (3.3) and (8.21),

1_C = lim P_{n,Z_n×η_n}(C) = lim P_{n,0×η_n}(C)   P*_{r,µ}-a.s.     (8.26)

The probability law of η_n, n ≥ r, under P*_{r,µ} coincides with the probability law of ξ_n, n ≥ r, under Π_{µ̄} = ∫ µ̄(dy) Π_y, and therefore (8.26) implies that P*_{r,µ}(C) = 0 or 1.


References

[Daw93] D. A. Dawson, Measure-valued Markov processes, École d'Été de Probabilités de Saint-Flour, 1991, Lecture Notes in Math., vol. 1541, Springer, 1993, pp. 1–260.

[Dyn65] E. B. Dynkin, Markov Processes, Springer-Verlag, Berlin/Göttingen/Heidelberg, 1965.

[Dyn78] E. B. Dynkin, Sufficient statistics and extreme points, Ann. Probab. 6 (1978), 705–730. [Reprinted in: E. B. Dynkin, Markov Processes and Related Problems of Analysis, London Math. Soc. Lecture Note Series 54, Cambridge University Press, Cambridge, 1982.]

[Dyn88] E. B. Dynkin, Representation for functionals of superprocesses by multiple stochastic integrals, with applications to self-intersection local times, Astérisque 157–158 (1988), 147–171.

[Dyn91a] E. B. Dynkin, A probabilistic approach to one class of nonlinear differential equations, Probab. Th. Rel. Fields 89 (1991), 89–115.

[Dyn91b] E. B. Dynkin, Branching particle systems and superprocesses, Ann. Probab. 19 (1991), 1157–1194.

[Dyn93] E. B. Dynkin, Superprocesses and partial differential equations, Ann. Probab. 21 (1993), 1185–1262.

[Dyn98] E. B. Dynkin, Stochastic boundary values and boundary singularities for solutions of the equation Lu = u^α, J. Functional Analysis 153 (1998), 147–186.

[Dyn02] E. B. Dynkin, Diffusions, Superdiffusions and Partial Differential Equations, American Mathematical Society, Providence, R.I., 2002.

[EP90] S. N. Evans and E. Perkins, Measure-valued Markov branching processes conditioned on non-extinction, Israel J. Math. 71 (1990), 329–337.

[EP91] S. N. Evans and E. Perkins, Absolute continuity results for superprocesses with some applications, Transact. Amer. Math. Soc. 325 (1991), 661–681.

[Eth93] A. M. Etheridge, Conditioned superprocesses and a semilinear heat equation, Seminar on Stochastic Processes, 1992, Birkhäuser, 1993, pp. 91–99.

[Eva93] S. N. Evans, Two representations of a conditioned superprocess, Proc. Roy. Soc. Edinburgh Sect. A 123 (1993), 959–971.

[GT98] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd edition, 3rd printing, Springer, Berlin/Heidelberg/New York, 1998.

[Kal77] O. Kallenberg, Random Measures, Academic Press, New York, 1977.

[Kel57] J. B. Keller, On the solutions of ∆u = f(u), Comm. Pure Appl. Math. 10 (1957), 503–510.

[Lan77] N. S. Landkof, Foundations of Modern Potential Theory, Springer, Berlin, 1972.

[Mse02] B. Mselati, Classification et représentation probabiliste des solutions positives de ∆u = u² dans un domaine, Thèse de Doctorat de l'Université Paris 6 (2002).

[Oss57] R. Osserman, On the inequality ∆u ≥ f(u), Pacific J. Math. 7 (1957), 1641–1647.

[Ove93] L. Overbeck, Conditioned super-Brownian motion, Probab. Th. Rel. Fields 96 (1993), 546–570.

[Ove94] L. Overbeck, Pathwise construction of additive H-transforms of super-Brownian motion, Probab. Th. Rel. Fields 100 (1994), 429–437.

[RP81] L. C. G. Rogers and J. W. Pitman, Markov functions, Ann. Probab. 9 (1981), 573–582.

[RR89] S. Roelly-Coppoletta and A. Rouault, Processus de Dawson-Watanabe conditionné par le futur lointain, C. R. Acad. Sci. Paris 309, Série I (1989), 867–872.

[SaV99] T. S. Salisbury and J. Verzani, On the conditioned exit measures of super-Brownian motion, Probab. Th. Rel. Fields 115 (1999), 237–285.

[SaV00] T. S. Salisbury and J. Verzani, Non-degenerate conditioning of the exit measures of super-Brownian motion, Stochastic Process. Appl. 87 (2000), 25–52.

E. B. Dynkin, Department of Mathematics, Cornell University, Ithaca, NY 14853,USA

E-mail address: [email protected]