parabolic anderson problem and intermittency

138

Upload: others

Post on 11-Sep-2021

26 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: Parabolic Anderson Problem and Intermittency
Page 2: Parabolic Anderson Problem and Intermittency

Recent Titles in This Series 518 Rene A. Carmona and S. A. Molchanov, Parabolic Anderson problem and intermittency,

1994 517 Takashi Shioya, Behavior of distant maximal geodesies in finitely connected complete

2-dimensional Riemannian manifolds, 1994 516 Kevin W. J. Kadell, A proof of the ^-Macdonald-Morris conjecture for BCn, 1994 515 Krzysztof Ciesielski, Lee Larson, and Krzysztof Ostaszewski, J-density continuous

functions, 1994 514 Anthony A. Iarrobino, Associated graded algebra of a Gorenstein Artin algebra, 1994 513 Jaume Llibre and Ana Nunes, Separatrix surfaces and invariant manifolds of a class of

integrable Hamiltonian systems and their perturbations, 1994 512 Maria R. Gonzalez-Dorrego, (16,6) configurations and geometry of Kummer surfaces in

P3 , 1994 511 Monique Sable-Tougeron, Ondes de gradients multidimensionnelles, 1993 510 Gennady Bachman, On the coefficients of cyclotomic polynomials, 1993 509 Ralph Howard, The kinematic formula in Riemannian homogeneous spaces, 1993 508 Kunio Murasugi and Jozef H. Przytycki, An index of a graph with applications to knot

theory, 1993 507 Cristiano Husu, Extensions of the Jacobi identity for vertex operators, and standard

^^-modules, 1993 506 Marc A. Rieffel, Deformation quantization for actions of Rd, 1993 505 Stephen S.-T. Yau and Yung Yu, Gorenstein quotient singularities in dimension three,

1993 504 Anthony V. Phillips and David A. Stone, A topological Chern-Weil theory, 1993 503 Michael Makkai, Duality and definability in first order logic, 1993 502 Eriko Hironaka, Abelian coverings of the complex projective plane branched along

configurations of real lines, 1993 501 E. N. Dancer, Weakly nonlinear Dirichlet problems on long or thin domains, 1993 500 David Soudry, Rankin-Selberg convolutions for S02^+i x GLW: Local theory, 1993 499 Karl-Hermann Neeb, Invariant subsemigroups of Lie groups, 1993 498 J. Nikiel, H. M. Tuncali, and E. D. Tymchatyn, Continuous images of arcs and inverse

limit methods, 1993 497 John Roe, Coarse cohomology and index theory on complete Riemannian manifolds,

1993 496 Stanley O. Kochman, Symplectic cobordism and the computation of stable stems, 1993 495 Min Ji and Guang Yin Wang, Minimal surfaces in Riemannian manifolds, 1993 494 Igor B. Frenkel, Yi-Zhi Huang, and James Lepowsky, On axiomatic approaches to

vertex operator algebras and modules, 1993 493 Nigel J. Kalton, Lattice structures on Banach spaces, 1993 492 Theodore G. Faticoni, Categories of modules over endomorphism rings, 1993 491 Tom Farrell and Lowell Jones, Markov cell structures near a hyperbolic set, 1993 490 Melvin Hochster and Craig Huneke, Phantom homology, 1993 489 Jean-Pierre Gabardo, Extension of positive-definite distributions and maximum entropy,

1993 488 Chris Jantzen, Degenerate principal series for symplectic groups, 1993 487 Sagun Chanillo and Benjamin Muckenhoupt, Weak type estimates for Cesaro sums of

Jacobi polynomial series, 1993 486 Brian D. Boe and David H. Collingwood, Enright-Shelton theory and Vogan's problem

for generalized principal series, 1993

Page 3: Parabolic Anderson Problem and Intermittency

This page intentionally left blank

Page 4: Parabolic Anderson Problem and Intermittency

MEMOIRS -i-V A of the

American Mathematical Society

Number 518

Parabolic Anderson Problem and Intermittency

Rene A. Carmona S. A Molchanov

March 1994 • Volume 108 • Number 518 (third of 5 numbers) • ISSN 0065-9266

American Mathematical Society Providence, Rhode Island

Page 5: Parabolic Anderson Problem and Intermittency

1991 Mathematics Subject Classification. Primary 60H15, 60H25; Secondary 60F10, 60G15, 60K40.

Library of Congress Cataloging-in-Publication Data Carmona, R. (Rene)

Parabolic Anderson problem and intermittency/Rene A. Carmona, S. A. Molchanov. p. cm. - (Memoirs of the American Mathematical Society, ISSN 0065-9266; no. 518)

"Volume 108, number 518 (third of 5 numbers)." Includes bibliographical references. ISBN 0-8218-2577-1 1. Stochastic partial differential equations. 2. Random operators. 3. Gaussian processes.

I. Molchanov, S. A. (Stanislav A.) II. II. Title. III. Series. QA3.A57 no. 518 [QA274.25] 510s-dc20 93-48271 [519.2] CIP

Memoirs of the American Mathematical Society This journal is devoted entirely to research in pure and applied mathematics.

Subscription information. The 1994 subscription begins with Number 512 and consists of six mailings, each containing one or more numbers. Subscription prices for 1994 are $353 list, $282 institutional member. A late charge of 10% of the subscription price will be imposed on orders received from nonmembers after January 1 of the subscription year. Subscribers outside the United States and India must pay a postage surcharge of $25; subscribers in India must pay a postage surcharge of $43. Expedited delivery to destinations in North America $30; elsewhere $92. Each number may be ordered separately; please specify number when ordering an individual number. For prices and titles of recently released numbers, see the New Publications sections of the Notices of the American Mathematical Society.

Back number information. For back issues see the AMS Catalog of Publications. Subscriptions and orders should be addressed to the American Mathematical Society, P. O. Box

5904, Boston, MA 02206-5904. All orders must be accompanied by payment. Other correspondence should be addressed to Box 6248, Providence, RI 02940-6248.

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgement of the source is given.

Republication, systematic copying, or multiple reproduction of any material in this publication (including abstracts) is permitted only under license from the American Mathematical Society. Requests for such permission should be addressed to the Manager of Editorial Services, American Mathematical Society, P. O. Box 6248, Providence, RI 02940-6248. Requests can also be made by e-mail to r e p r i n t - p e r m i s s i o n @ m a t h . a m s . org.

The owner consents to copying beyond that permitted by Sections 107 or 108 of the U.S. Copy­right Law, provided that a fee of $1.00 plus $.25 per page for each copy be paid directly to the Copyright Clearance Center, Inc., 222 Rosewood Dr., Danvers, MA 01923. When paying this fee please use the code 0065-9266/94 to refer to this publication. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotion purposes, for creating new collective works, or for resale.

Memoirs of the American Mathematical Society is published bimonthly (each volume consisting usually of more than one number) by the American Mathematical Society at 201 Charles Street, Providence, RI 02904-2213. Second-class postage paid at Providence, Rhode Island. Postmaster: Send address changes to Memoirs, American Mathematical Society, P. O. Box 6248, Providence, RI 02940-6248.

© Copyright 1994, American Mathematical Society. All rights reserved. Printed in the United States of America.

This volume was printed directly from author-prepared copy. @ The paper used in this book is acid-free and falls within the guidelines

established to ensure permanence and durability. I J Printed on recycled paper.

10 9 8 7 6 5 4 3 2 1 99 98 97 96 95 94

Page 6: Parabolic Anderson Problem and Intermittency

Contents

I INTRODUCTION 1

II EXISTENCE A N D UNIQUENESS PROBLEMS 17 11.1 The Deterministic Problem 17

II. 1.1 The Feynman-Kac Representation 18 11.1.2 Special Notations 19 11.1.3 Existence Problems 20 II. 1.4 Uniqueness 25

11.2 The Random Case 27 11.2.1 Setting of the Problem 27 11.2.2 Continuity of the Feynman-Kac Formula 29 11.2.3 The Case of a Homogeneous Potential Field 35 11.2.4 The Case of a White Noise Potential 37 11.2.5 A Diffusion Limit Approximation Result 40

11.3 Existence and Equations for the Moments 44

III MOMENT LYAPUNOV EXPONENTS A N D INTERMITTENCY 49 111.1 The White Noise Case 50

111.1.1 Existence of the Moment Lyapunov Exponents 50 111.1.2 An Explicitly Solvable Model 56 111.1.3 First General Properties 61 111.1.4 Small k Behavior of yp(n) 64 111.1.5 Large K Behavior of 7P (K) 64 111.1.6 Asymptotic Behavior of the Critical Diffusion Constant . . . . 73 111.1.7 Summary 75

111.2 The Case of Finite Correlation Length 77 III.2.1 Existence of 7p(<r) 79

v

Page 7: Parabolic Anderson Problem and Intermittency

VI CONTENTS

111.2.2 Estimation ofjp(a)/p 82 111.2.3 Continuity Results 86 111.2.4 Lyapunov Exponents as Functions of K 87 111.2.5 Another Explicitly Solvable Model 92

IV ALMOST SURE LYAPUNOV EXPONENTS 99 IV.1 Existence 99 IV.2 Proof of the Lower Bound 106 IV.3 Proof of the Upper Bound I l l

V CONCLUDING REMARKS 121

Page 8: Parabolic Anderson Problem and Intermittency

Abstract

We consider the stochastic partial differential equation

9u A ^ , x — = KAu + Zt(x)u.

The potential £t(x) is assumed to be a mean zero homogeneous Gaussian field. We pay special attention to the white noise case. In order to minimize the technical difficulties we consider only the case the discrete Laplacian A on the lattice TL . We prove existence and uniqueness (for almost every realization of the random potential) for nonnegative initial conditions. These results are proved by means of the Feynman-Kac representation of the minimal solutions. Infinite dimensional Ito and Stratonovich equations are needed to study the white noise case. We then prove that the solutions have moments of all orders. In the case of a white noise potential we derive a family of closed equations for these moments. We then prove the existence of the moment Lyapunov exponents and we study their dependence upon the diffusion constant K. AS a consequence, we show that there is full intermittency of the solution when the dimension d is not greater than 2 while the same intermittency only holds for large values of the diffusion constant in higher dimensions. The fundamental equation can be viewed as a parabolic Anderson model and this phase transition is natural from the point of view of localization theory. Finally, the last chapter is devoted to the study of the almost sure Lyapunov exponent. We prove its existence and we derive their asymptotic behavior for small K.

Key words and phrases /Random Parabolic Equation, Nonstationary Anderson Problem, Large Time Asymptotics, Intermittency.

vn

Page 9: Parabolic Anderson Problem and Intermittency

This page intentionally left blank

Page 10: Parabolic Anderson Problem and Intermittency

Chapter I

INTRODUCTION

The subject of the present monograph is the investigation of the asymptotic properties of the solutions of the parabolic partial differential equation:

-£ = KAU + tt(x)u (1.1)

where £t(x) is a random potential. This potential may be time dependent and we shall not rule out the possibility that it is a Schwartz distribution instead of a function. This equation plays an important role in chemichal kinetics. But it can also be regarded as a particular case of more realistic equations. We have in mind, for example, the equation of the magnetic field in a random flow. This equation reads:

8H — = KAH + (v(t, x)V) H - (H, V)v. (1.2)

Equation (1.2) differs from (1.1) in two respects. First it is a vector equation instead of a scalar one. Second it has a first order term. Nevertheless, one expects that the results of the qualitative analysis will be the same. There is also some analogy between these equations and the heat equation which describes the time evolution of the temperature field in the system atmosphere-ocean. We now give a simple derivation of equation (1.1). It is presented as an example of a noninteracting particle system.

Received by the editor May 14, 1992.

1

Page 11: Parabolic Anderson Problem and Intermittency

2 RENE A. CARMONA AND S.A. MOLCHANOV

Let us consider an ensemble of particles and let us denote by a;,'s the initial positions of these particles. Let us assume that *,- G IR*. We are mostly interested in the case d = 3 but it will be interesting to consider the effects of the dimension d on the results of our mathematical investigations. We choose to represent configurations of particles by point measures. Our initial configuration is of the form:

»»(*) = £ M * ) i where we use the notation 8X for the unit mass at the point x. We now describe the dynamics of the system. We assume that each particle has a Brownian motion with diffusion coefficient K and that the various particles move independently of each other. We also assume that, at each time t any particle at the location x splits into two identical particles (which start independent Brownian motions from the point x) with rate ^f{x)dt and that the same particle dies (i.e. disappears) with rate £^~(x)dt. The configuration of particles at time t is then denoted by:

%(*) = £**(*)(*)• A first approach to the time evolution of the ensemble of particles is to consider the occupation measures. For each domain D one sets:

lt(D) = #{i ; xi(t) e D)

for the number of particles in D at time t. For each domain D and each time t, rjt(>) is a random measure and it is natural to investigate the properties of its average:

d1](D) =< m(D) > It is easy to show that this mean value, when regarded as a function of the domain D for t fixed, is in fact a measure which has a density. We denote this density by mi(t ,x) . In other words we have:

/ii1)(2?)= / mi(t,z)dx. JD

This density has the interpretation of the mean density of particles at time t at the point x. It is possible to prove that this function satisfies the partial differential equation (1.1) in the sense that:

dmf'x) = KAxrni(t, x) + Zt{x)mi{i, x). (1.3)

Page 12: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 3

provided we set:

Adding an initial condition to this equation requires some assumption on the initial distribution of the particles. It has many applications despite its simplicity. Let us assume for example tha t the particles at time t = 0 are distributed throughout the whole entire space according to a Poisson point process with intensity A. Then it is clear that the initial mean density rai(0,z) is a constant independent of x because of the homogeneity of the point process. This constant is equal to A. So in this particular case the initial condition for the equation (1.3) is:

m i (0 ,x ) = A.

The model which we described above is the simplest one for noninteracting particles. For example it can be used as a mathematical model for the diffusion of plankton in the upper layer of the ocean. In this case, if one denotes by u(t , x) the concentration of plankton at time t at the location x where this location is measured in the moving frame following the stream of the large scale motion of the ocean currents, then equation (1.1) can be shown to be appropriate for this function u{t) x). The choice of a moving coordinate system implies that we do not consider large scale (random) currents in this equation. The short time and local correlation parts of the ocean currents and the molecular Brownian motion are brought into the model via the diffusion par t of the partial differential operator given by KA. The evolution of a given plankton colony depends strongly upon the temperature and the salinity of the ocean, upon the sun irradiation and upon most of the local characteristics of the ocean. Consequently, it is reasonable to assume that the plankton concentration satisfies an equation of the type (1.1) with a random potential &(z) homogeneous in the space variable x. Moreover, the time correlation of this potential process is very short and it seems possible to assume that this correlation is actually a delta function or in other words that it is a white noise in time. This assumption leads to potential fields with singularities depending on the dimension. Some of these problems (existence of solutions, moment equations, intermittency, . . . ) are considered in [8] while the large time behavior of the solutions is studied in [5] for time independent random potentials.

The above discussion concerns the continuous case where the space variable x runs through the continuous space H . This generality creates some difficult technical problems the solutions of which obscure the simple nature of some qualitative phe­nomena. The present memoir is devoted to the analysis of this problem in the case of the discrete lattice THd instead of the continuous case I t . We shall thus consider

Page 13: Parabolic Anderson Problem and Intermittency

4 RENE A. CARMONA AND S.A. MOLCHANOV

equation (I.l) for i > 0 and x G 22 the operator A being the usual discrete analog of the Laplacian. It is defined by:

[A/](x)= £ [ / ( y ) - / ( « ) ] (1.4) |x-y |=l

We shall see that all the qualitative properties which are expected on physical grounds are actually present in this approximate mathematical model. We shall not at­tempt to discretize the time variable t. The random potential becomes a family {&(z); * > 0, x G 7Ld} of random variables which we assume to form a mean zero homogeneous ergodic process. The homogeneity assumption says that, for each fi­nite set {(*i,2i), • • • ,(tn , xn)} and for each t > 0 and x G 2Ld the random vectors (&i(*i)> • • • ,&„(*n)) and (&!+*(* + xi), • • • ,&„+*(* + xn)) have the same distribu­tion. It is important to notice that we call homogeneous what is very often called stationary. The reason for our choice is to avoid misunderstandings when the word stationary is used with the meaning it has in the theory of dynamical systems and differential equations. Stationarity has to do with time independence in all cases but we shall restrict its use to mean that the coefficients of an equation are independent of time. The zero mean assumption is not restrictive. Indeed, this mean has to be constant because of the homogeneity assumption and if this constant was not zero, the solution of the problem (I.l) would be obtained by multiplying the solution of this (similar) problem with the centered potential process by the exponential of t times the mean. This would modify some of the results below but only in a trivial way. We shall not recall the definition of ergodicity. The most important tool in the study of a homogeneous process (also called afield) is its covariance. Because of the homogeneity and the mean zero assumption the covariance of the potential £t(x) is given by the formula:

r ( < , x ) = < ^ ( y ^ + i ( y + x ) >

where we use the notation < • > to denote the expectation over the probability space on which the random field {£t(x)\ t > 0, x G 2Zd} is defined. The above right hand side does not depend on s or y because of the homogeneity assumption. From the physical point of view there are two correlation scales. They can be given a precise mathematical meaning when the correlation function T decays fast enough. If this correlation function is exponentially decaying in the time and the space variables one defines the time correlation length r and the space correlation length L as the inverses of the corresponding rates of exponential decay. For the sake of definiteness, one

Page 14: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 5

defines L by the formula:

L = inf{£' ; | r ( 0 , * ) | < i r ( 0 , 0 ) \£\>L'}

and r as the number:

r = inf{r ' ; | r ( s , 0 ) | < ^ ( 0 , 0 ) s>r'}.

These correlation lengths can be understood in the following way. If one considers the processes {£t(Lx); t > 0} for fixed x G 2Z , then the definition of L makes their correlation very weak and we can assume, at least at the heuristic level, tha t they are essentially independent. For the sake of the present discussion we shall assume tha t L = 1. This does not change qualitatively the phenomena which we expect and it will facilitate the discussion. Since we are working with the lattice approximation, the diffusion is governed by the finite difference operator (III.12). Since we keep the time as a real variable without discretization, we now have two different time scales.

• r the time correlation length introduced above.

• T\ the average time it takes for a particle to exit a space correlation cell. r\ is consequently of the order of L2/K. Also, T\ can be viewed as the mean time between two successive jumps of the continuous time random walk governed by the discrete Laplacian (III. 12) since we assume that L = 1.

The relative values of these two time scales play a crucial role in the characteristics of the system. Let us consider the various possibilities.

• when r is much smaller than r\) i.e. when T/T\ <C 1, the time correlation is very much like a delta function at the origin and the field £t(x) can be approximate by an independent family (parametrized by x £ 7L ) of white noise processes {&(#); * > 0}. In this case, the equation (LI) can be rewritten as a stochastic integral equation driven by an infinite dimensional Wiener process. It can then be investigate as a discrete analog of a stochastic partial differential equation.

• when r is much larger than T\) i.e. when T/T\ ^> 1, one says that one is in the stationary adiabatic regime. Indeed, the characteristics of the potential &(x) vary very slowly and one expects that one can use the results of the stationary case (i.e. time independent case) as a good approximation. We shall pay a special attention to this case in the present paper.

Page 15: Parabolic Anderson Problem and Intermittency

6 RENE A. CARMONA AND S.A. MOLCHANOV

• finally in the intermediate case where r and T\ are of the same order is difficult to study the transition between the two regimes. We shall not discuss this case here.

The present paper is devoted to the mathematical study of the stochastic parabolic problem:

r | H = KAU+&(*)« \ «(0 ,«) = 1. ( L 5 )

It can be viewed as a parabolic Anderson problem with a possibly time dependent potential. We describe below three different problems in which this Anderson parabolic problem comes up naturally.

Branch ing P r o c e s s in R a n d o m Env ironment

The fundamental equation (1.5) can be viewed as an infinite system of coupled stochas­tic (difference) equations. We spend some time on the interpretation of this equation in terms of branching processes and we discuss in details its connection with reaction-diffusion equations in general and more precisely the so-called Kolmogorov-Petrovskii-Piskunov equation in particular.

As explained before we restrict ourselves to the particular case of a system of noninter-acting Brownian particles on the lattice % . The dynamics of the system of particles is given by two functions ^(t^x) and £~( t ,x ) . They represent respectively the rates at which a particle being at x £ TL at time t splits into two identical particles or dies. One also needs to know the diffusion constant K which determines the holding time at each point of the lattice: a particle at time t at x £ rEd jumps during the time interval [t}i + di) to any of its 2d neighboring point x' (i.e. such tha t \xf — x\ — 1) with probability ndt. We denote by n ( t ,x ) the number of particles being at the site time x of the lattice TL at time t. We compute its moment generating function. For s < t, x, y G 2K and a > 0 we set:

Ma(s,x,t,y) = m{e~an^ \ n(t, • ) = 6y( • ) } . (1.6)

It is then easy to derive, for ( t ,y) fixed the backward Kolmogorov's equation:

0 = ^ + KAxMa + £ + ( S ) z ) M 2 - (t+(s,z) + C(s,*))Ma+C(*,*) (1-7) as

Page 16: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 7

which is satisfied for s < t and which is to be complemented with the terminal condi­tion:

M„(t, *,t, y) = e-Q6y(x) + (1 - «„(*)).

Unfortunately this equation may not have a unique solution. In any case, if we denote by mi(s,x,f ,y) the first moment of n(t,x) under the same condition, i.e.

mi(s ,x , t ,y) = E{n(s,z) | n(t, •) = «„( . )}

then, using the fact that:

mi(s ,x , t ,y) = -— aa |a=o

one gets the following equation for mi:

0 = -£± + KAxrm + K+(fi,x) - r ( « , x ) ] m i (1.8) as

with the terminal condition:

™l(t|*)*,y) = M*)-One gets similar equations for the higher moments by taking more derivatives. For example the equation for the second moment:

m 2(s ,z , t ,y) = E{n(i,y)2 | n(s,x) = 8y(x)}

is obtained by taking one extra derivative. It reads:

0 = ^ 1 + KAxm2 + (£+(s, x) -£- ( 8 , x))m2 - {+(«,x)m\ (1.9) OS

with the same initial-boundary condition:

rn2(i)x)t)y) = 8y(x).

The first part of the right hand side is the same as for mi. The only difference comes the term ^(s)x)ml(sixyt) y). Note that this last term can be viewed as a source term if the equation for the first moment has already been solved. As we already explained, similar equations can be derived for the higher moments:

mp(s ,x, t ,y) = ( - l ) p

da? |a=o*

Page 17: Parabolic Anderson Problem and Intermittency

8 RENE A. CARMONA AND S.A. MOLCHANOV

These equations are not very easy to deal with because of the boundary condition which assume the knowledge of the number of particles at the terminal time t. Moreover, this condition states that there is a single particle at a specific point. In particular, this is far from the desirable condition:

n(s,;r) = 1.

This is the main reason for an investigation of the more complicated forward equation which we derive now. In order to do so we need to consider the number of particles n(s, x) as a function n(s) of x £ % . If a = a(x) is a function with compact support (i.e. a(x) = 0 except for finitely many x) then the duality:

< a, n(s, • ) > = ] P a(x)n(s, x) X

makes sense and the moment generating function Ma can be defined as a Laplace transform on a functional space by the formula:

M a ( M ) = E{c-< a*nW> | n(s)}. (1.10)

We use the same arguments as before to derive the Kolmogorov's equation. But this time we give details because the results seem to be less standard. We consider the three possible transitions for the number of particles in an infinitesimal time interval [t,t + di).

• n(t, y) —• n(t,y) + 1 with probability

Z+(t,y)n{t,y)dt+ J2 Kn(t,y')dt + 0(dt2) |y'-y|=i

• n(*>y) —> n(*>y) ~~ 1 with probability

C (*> yM*> y)dt + 2dnn(t} y)dt + 0(di2)

• n(*>y) —• n(^}y) with probability

l-Z+{t,y)n(t,y)dt- £ Kn(t,y')dt - C(t,y)n(t,y)dt \v'-v\=i

-2dKn(t,y)dt + 0(dt2)

Page 18: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 9

Next we notice that, up to terms of order 0(dt2), the n(t + dt,y) — n(t,y) may be regarded as independent for different sites y. Consequently, one easily checks that:

M, a(s,t + dt) - MQ(s,t) = - E { e - < a ' n W > £ (e-a<»> - l ) [£+(t,y)n(t,y) y

+ £ Kn(t,yO] + ]T( e a ( y )^ lv'-vl=i y

If one notices that:

dMa = ~E{e-< a ' n W>n(t ,y)} 9a(y)

one immediately gets the (formal) equation for Ma:

= E ( . ~ « - l ) [ f < « , , ) ^ ( U1)

|y'-y|=l Ky ' V yy/

with the initial condition Ma(s) s) = e a ^ if one wants to consider the initial condition n(s,y) = <5r(y), or:

M a(s ,s) = exp[]Ta(x)]

if one wants to consider the initial condition n(s,y) = 1. In any case, equation (1.11) makes it possible to derive equations for the moments and the correlation coefficients of the field n(t, y). Let us consider for example the case s = 0 and n(0, x) = 1. Then we have:

mi(*,y) = < n(t,y) > = — T T T T

and:

d2Ma m2(t,x,t/) = < n(i, x)n(t, y) > = 5a(*)ao(y)ia = s 0

Page 19: Parabolic Anderson Problem and Intermittency

10 RENE A. CARMONA AND S.A. MOLCHANOV

and consequently, we have first:

^ = KAm1 + [t+(t,y)-C(t,y)}rni (1.12)

with initial condition mi (0 ,y ) = 1. Notice that equation (1.12) is exactly the funda­mental equation (1.1) the investigation of which is the main topic of the present paper. Taking one more derivative we get:

^ = K(Ax + Ay)m2 + [^(tix) + ^(t)y)^r(t,x)-r(t,y)]rn2

+ &x(y) \(Z+(i)x)+C(i,x) + (*Kd)rnl(i)x) + K ] T roi(t,s') (1.13) L |x'-:r|=l J

with initial condition m i ( 0 , x , y ) = 1. This equation is remarkably different from the equation we obtained for the second moment from the backward Kolmogorov equation.

Most of the results derived in the present paper concern the situation where the field £(i) y) = £+(f, y) — £~(i, y) is some sort of white noise in time for each fixed y. This assumption is justified because it appears naturally as a limiting case of Gaussian fields with short time correlations. In this case, the equation for mi can be given a rigorous meaning: indeed, it can be rewritten as a stochastic integral equation of the Ito's type in an infinite dimensional space. Unfortunately, the equations for rn2(t,y) and m2( t ,x ,y ) do not make sense in this limit because we have ^ ( ^ y ) = oo and £+(*j y) + £~(*>y) = ° ° identically. Nevertheless, there are very interesting cases for which the equations for all the moments of nft^y) make sense. This is the case for example when £^(t}y) is the time derivative of a Levy process (i.e. a process with stationary independent increments) with sample paths of bounded variations, the simplest possible case being obviously a difference of two Poisson processes. These models are considered in [1].

H y d r o d y n a m i c s and Burger ' s Equat ion Burger's equation is a fundamental equation of hydrodynamics and astrophysics. It describes self gravitating media where, attraction takes place instead of the repulsion between the liquid particles (as it is the case for ordinary liquids such as water). It has the form:

M ^ + (*(*, *) • V)v(t,x) = KAv(t, x) + / ( * , x) (1.14)

Page 20: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 11

cuv\v(t, x) = 0

v{Oyx) = v0(x), xeU. (1.15)

Here i^(t,x) is the velocity field of the fluid, f(t,x) is an external force, K > 0 is the viscosity, t > 0 is the time and the space variable x G Md varies in d = 1, 2 or 3 dimensions. The pressure is absent because the nonlinear term (#(£,£) • V)t?(t,£) is a gradient. Burger's equation has the same form as the Navier-Stokes equation. The difference concerns the class of velocity fields. Navier-Stokes equation is concerned with incompressible velocity fields satisfying div{T(t, x) = 0 while Burger's equation deals with potential fields. Because of this restriction, the nonlinear equation (1.14) can be reduced to a linear equation via the substitution:

v(t, x) = — 2 K V log <p(t, x).

Indeed, equation (1.14) becomes the following (linear) parabolic equation:

M ^ = KA<p{t, x) + F(t, x)<p(t, x) (1.16)

with an appropriate initial condition. The new potential F ( t , x ) satisfies:

VF(t,x) = f(t,x).

and the initial condition becomes:

-2KV<PQ(X) = v0(x).

Consequently, when the original external force is random, equation (1.16) can be viewed as a particular case of the so-called Anderson parabolic problem. Notice tha t the potential F can be time-dependent. The solution of Burger's equation can then be written in the form:

and the analysis of the asymptotic behavior of t7(tf, x) as t —• oo reduces to the asymp­totic analysis of ip(t}x) and its gradient.

A d v e c t i o n - C o n v e c t i o n Equat ion for t he T e m p e r a t u r e Field Let us consider the following parabolic equation for the passive scalar field T(tyx):

Page 21: Parabolic Anderson Problem and Intermittency

12 RENE A. CARMONA AND S.A. MOLCHANOV

dT(t,x) dt

T(0,x) = T0(x),

+ v(t, x) • WT(t, x) = KAT(t, x) (1.17)

(1.18)

where the initial condition TQ(X) is assumed to be compactly supported. This equation plays an important role in the analysis of turbulent diffusion. It is used for example in the analysis of the temperature field of incompressible fluids. See for example [16]. In [3], Avellaneda and Majda gave, a complete analysis of the following particular case. The dimension d is equal to 2 and the 2-dimensional velocity field of the form:

v(x,y) = 0 v(x)

where we use now the notation (z,y) G Dt2 for the space variable. In this case, the equation (1.17) becomes:

dTit.x.y) , xdT(t ,x,y) A m. dt dy

The authors also assume that the function v(x) is a realization of a stationary mean zero Gaussian process with a spectrum of the Kolmogorov type. It is not difficult to show that the solution Tit^x^y) satisfies:

\T(t,x)y)\<Cle-c^^2+\y^

for some positive constants c\ and c<i = C2(t). One can thus consider the two-sided Laplace transform in y, namely the function T\(t}x) defined by:

fx(t,x)= f+0°T(t,x,y)exydy. J — oo

Then, the parabolic equation for T(^,a?,y) becomes the following equation:

dfx(t,x) d2fx(t,x) dt

with the initial condition:

= A C -8x* + (\'-\v(x))Tx(t,x) (1.19)

TA(0,*)= [+0°T(Q,x,y)exydy.

Page 22: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 13

Again the transformed equation is a parabolic Anderson problem with a random po­tential. The homogenization for equation (1.17) can be reformulated in terms of this new equation as the asymptotic analysis of the solution of (1.19) for t —• oo and A —> 0.

I n t e r m i t t e n c y We shall show that a solution of equation (1.1) exists and is unique under very general conditions. The solution ii(f,x) is a homogeneous ergodic field in x for each fixed value off but is not homogeneous in t. In fact, the field and its moments have non trivial exponential behaviors as t —» oo. This behavior can be interpreted in terms of the notion of intermittency. The latter refers to unusually large fluctuations of the field. In the language of particles this means that the distribution of the particles following the dynamics given by (1.17) is highly nonuniform. For large values of the time t there exist spots where the concentration is very high, the distances between these spots being very large, and most of the mass of our ensemble of particles being concentrated in these spots. The mathematical notion of intermittency is defined as follows. Let us assume tha t the p-th moment of the solution exists, call it m p ( t ) , and tha t the limit:

7 P = lim - l o g m p ( t ) t—+oo t

exists. This limit is called the p-th moment Lyapunov exponent of the solution. It depends on the parameters K, r , . . . of the problem but we shall not consider this dependence in the present discussion. We say that the family of homogeneous fields {ii(tf,x); x £ TLd} is intermittent when t —• oo if one has:

2 p

It is not difficult to check that one always has:

7 . < | < . . . < * £ . . . 2 p

Moreover, we shall also prove that , if one of the above inequality is a strict inequality, then all the subsequent inequalities are also strict. This proves tha t intermittency is in fact proven when the first inequality is shown to be strict. Since it may not be com­pletely obvious that the above mathematical definition actually captures the physical phenomenon which we described earlier, we spend some time to shed some light on the implications of our mathematical definition. Most of the ideas are borrowed from [26].

Page 23: Parabolic Anderson Problem and Intermittency

14 RENE A. CARMONA AND S.A. MOLCHANOV

There is another parametrization of equation (1.1). It corresponds to another physical regime. Let us do the substitution s = nt and let us set u ( s , z ) = U(S/K,X). Then the function v(syx) satisfies the parabolic equation:

— = Av + - 7 = 6 ( 5 , x)v(sy x) OS y/K

where the new stochastic field £i(s, x) is defined by:

1 s y K AC

If we consider that the random field £1 is given, the new problem can be rewritten in the form:

f | f = An+ v^6(*,*)«(<,*) ,T 9 f n

1 «(0, •) = 1. (i ^ Notice that the covariance of the new mean zero Gaussian random field is given by:

( 6 («,*)&(*, y)> = - r 0 ( — - ) T i ( x - y )

This form is particularly important because of the dimensionless character of the constant n. We shall have to study both problem (1.5) and problem (1.20). They are both of physical importance because they correspond to two different regimes.

S u m m a r y of t h e Resu l t s We close this introduction with a short summary of the contents of the present work. Chapter II is devoted to the proofs of existence and uniqueness of the solution of (1.1). This part is very much in the spirit of [10] where the stationary case was studied. It is important to recall that the word stationary is given the meaning it has in the theory of dynamical systems: the coefficients are time-independent. We first assume that the time correlation of the potential field &(x) is given by a function. The equation can be studied path by path and the existence and uniqueness problems are approached via the Feynman-Kac representation of minimal solutions. We then consider the case where the time correlation of the potential field is a delta function at the origin. We reformulate the equation as an Ito stochastic integral equation driven by an infinite dimensional Wiener process. We also consider the case of the corresponding Stratonovich integral equation. These equations are easily solved in the appropriate weighted Hilbert spaces. Next, we give a diffusion approximation result

Page 24: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 15

which shows that the problem of the delta-time correlation appears as a natural limit of more realistic physical situations. We close this chapter with a proof of the existence of moments of all orders for the solutions of equation ( I . l ) . We also derive a set of closed equations for the moments in the case of a delta-time correlation. They read:

dmp{i'Xlm'"'Xp) =K[AXl + ... + AXp + V(Xl,...,xp))mp(t,xly...,Xp) (1.21)

where the potential function V(x\, • • •, xp) is given by:

Vp(zlr--,zp)= Yl r i ( * . ' - * i ) (1-22) i<»<i<p

and where I \ is the spatial covariance of the random field £t(x). It is interesting to notice that these equations are simply heat equations for a multiparticle Schrodinger operator. Chapter III is devoted to the proof of the existence of the moment Lyapunov exponents 7P(AC) and to their quantitative analysis. The existence is proved in great generality. In the white noise case, these exponents are characterized as the spectral radii of the above multiparticle Schrodinger operators. Some classical arguments from spectral theory and simple perturbation arguments give the properties of JP(K) as functions of the order p of the moment under consideration, of the diffusion constant K and of the dimension d. We also give detailed asymptotic formulas for 7P(«) both as K —• 0 and K —* oo. In any case, we prove tha t full intermittency holds for all values of K in dimensions d — 1 and d — 2 while a similar result depends on the values of K in higher dimensions. Indeed, we prove that JP(K) vanishes for large values of K if d > 3. We view this result as a phase transition in the spirit of the theory of localization for disordered Anderson models. Finally, we consider the general case of a non-delta correlated potential field &(#). We mostly study the Lyapunov exponents as functions of the coupling constant a. We prove similar asymptotic estimates of 7p(cr) both for a large and for a small. Finally we investigate the intermittency of the solution field as before.

Chapter IV is devoted to the proof of the existence and the investigation of the be­havior of the almost sure Lyapunov exponents 7(«) . The existence of the almost sure exponents is obtained (as usual) from the subadditive ergodic theorem. We were un­able to prove the existence for general bounded initial conditions UQ(X). We have to restrict ourselves to initial conditions with bounded supports . We show that the value

Page 25: Parabolic Anderson Problem and Intermittency

16 RENE A. CARMONA AND S.A. MOLCHANOV

of the exponent does not depend on the initial condition and we determine the asymp­totic behavior of J(K) as K —• 0. The proofs of the lower bound and the proof of the upper bound are given in separate sections. These estimates are the main technical results of the whole analysis. The proofs are rather involved. They are more difficult than the soft analysis proofs of the estimations on the moment Lyapunov exponents. But they are of crucial importance because they shed light on the phenomenon of intermittency and the presence of very high picks separated by large distances. We summarize in a short and last chapter the analogies of the results proved in this monograph for the time dependent random parabolic problem and the results and/or predictions of the Anderson's theory of (time independent) random Schrodinger oper­ators.

Page 26: Parabolic Anderson Problem and Intermittency

Chapter II

EXISTENCE AND UNIQUENESS PROBLEMS

This section is devoted to the discussion of the existence of solutions of the equa­tion (1.1). We show existence and uniqueness of a solution under various con­ditions on the random potential {£*(#); t > 0, x G 2Zd}. For a large class of potentials f the equation (1.1) has a mathematical meaning for each realization of the random field £t(x) ar*d it can be solved by first fixing the sample path of £ and then treating the problem as a deterministic one. For this reason we first consider the case of a deterministic potential. We introduce a specific set of notations and we review the classical results which we use. We then consider the more general random case.

IL1 The Deterministic Problem

Some of the results concerning existence and uniqueness of a solution for the random parabolic equation (1.1) are obtained for each fixed w G 0 from results on deterministic equations. This is especially the case when the random potential &(x) is a bona fide function of t for each x and u>. The limiting case of the white noise in time will have to be treated by different methods.

17

Page 27: Parabolic Anderson Problem and Intermittency

18 RENE A. CARMONA AND S.A. MOLCHANOV

II. 1.1 Th e Feynman-Kac Representat ion

It may seem paradoxal, but we choose to use probabilistic techniques to study the deterministic version of (1.1). To this effect, we introduce a specific set of probabilistic notations. It deals with the continuous time nearest neighbor random walk on the lattice TLd. This process can also be regarded as the lattice version of the process of Brownian motion because it is a Markov process with independent increments and because its infinitesimal generator is the discrete Laplacian A defined in (III.12).

We let W = W ^ ) be the space of right continuous functions with left limits from Ht+ into TLd. For each t > 0, we denote by Xt) the coordinate function W 3 w —• Xt(w) = w(t) S 2Zd. We use the notation Wt — W t for the smallest cr-field of subsets of W for which all the the X 5 ' s for 0 < s < t are measurable. The simpler notation W will be used in the case t = oo. For each s > 0 and x £ % we denote by P ^ the probability measure on ( W , W) of the Brownian motion or continuous time (nearest neighbor) random walk on TL starting from x at time 0. The corresponding expectation will be denoted by "Ex. Remember tha t under JPX) the process {Xt\ t > 0} is at x at time 0, it stays there for an exponential holding time, and then jumps at one of the 2d neighbors of x in 2Z with equal probabilities. It stays there an exponential holding time independent of everything else, and jumps again to one of the 2d neighbors with equal probabilities, . . .

We did not talk about the value a of the parameter of the exponential distribution of the time spent at each site of the lattice. A convenient choice will b e a = 2nd.

For each t > 0 and w £ W we denote by N(i, w) the number of jumps of the path w before time t. {N(t)] t > 0} is a Poisson process with parameter a = 2/cn independent of the sites of 7Ln visited by the path. The operator HQ = K A is self-adjoint and bounded. The operator semigroup generated by HQ is given by:

[etH°f](x) = Vx{f(Xt)}

and, for each function UQ(X) on the lattice TL , the function w(t,x) = JEx{uo(Xt)} satisfies:

Yt = H°u ( I U ) and the initial condition u(0yx) = UQ(X). We shall prove a more general fact below.

Page 28: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 19

We now investigate the parabolic equation (1.1) with a deterministic potential £t(x). So, let £t(x) be a real valued function defined on IR+ x TLn. The classical Feynman-Kac perturbation theory of parabolic equations says that the function u(t,x) defined by:

u(t,x) = JEx{MXt)ef°*t-'(X')ds), (II.2)

is the right candidate for a solution. Here, the initial condition UQ is any nonnegative function on the lattice 7Ln. We are about to show that this is indeed the case, but first, we need to rewrite the integral in the exponent in a different way. We shall use a special set of notations which are justified by the following consideration. One would like to solve equation (1.1) when the potential &(x) is not a function but merely the derivative (with respect to t) in the sense of distributions of a function. This is the typical case of a random potential of the white noise type.

11.1,2 Special Notat ions

For each function & of the real variable t we denote by & the antiderivative which vanishes at 0, i.e.

6= ftsds. (113) Jo

We also consider a function Ct{x) on IR+ x TLn and a piecewise constant function Xt in 2Zn. One can think for example of a sample path of the random walk. Let us denote by 0 = To < T\ < T2 < the successive times of jump and by Xo, X\, X2, the successive sites visited by the random walk. In order to avoid the technical problems created by the time reversal we shall not make any assumptions on the values of the function Xs at the times of jumps, i.e. for s = Tj. In other words we only assume that:

Xj = Xu te(TjtTj+1), j = 0,1, •

Under these conditions we set:

f dUXs) = E^H1M(XJ) - CTMXJ)' ( IL4) 0 i>o

In particular, with this notation we have:

Page 29: Parabolic Anderson Problem and Intermittency

20 RENE A. CARMONA AND S.A. MOLCHANOV

f £s(X*)ds = I dl(Xs) (II.5) Jo Jo

and also:

ft ftATj+i / tt-a(X9)d8 = Yl / tt-(Xj)d* Jo j5oJtAT> j>o% *A'r'

j > 0

r ^_5(^s). Jo

There are some inconvenience with the notation (H.4). In particular, even though we obviously have:

['Zt-.{Xs)ds = ['uXt-s^s Jo Jo

we only have:

/ dCt-s(Xs)ds = - / dCa(Xt-9)ds. Jo Jo

II. 1.3 Exis tence Problems

We now come back to the problem of the existence of a solution of equation (I.l). We assume that we are given a function (t(x) which is continuous in t for each fixed x €%n

and a nonnegative function UQ(X). Ct{x) ls the antiderivative (with respect to the time variable t) of the potential function £t{x) appearing in equation (I.l) and UQ(X) is the initial condition. We consider the function u(t, x) given by the Feynman-Kac formula:

u{t,z) = JEx{u0(Xt)etid<s(Xt-s)}, (II.6)

and we assume that the expectation defining u(t,x) is finite for all t > 0 and x £ TL . Notice that the quantity f£ d£s(Xt-s) is an additive functional in the sense that:

Page 30: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 21

rt+h tt rt+h / dCs(Xt+h-s) = / dCs(Xt+h-s)+ dCs(Xt+h-s)

Jo Jo Jt = (J* dc.(x«_.)) o eh + J*+h dc3(xt+h.a)

where 0^ denotes as usual the time shift. The Markov property gives:

u(t + h,x) = E a :{ti(t ,X/i)c//+ h < i c ' ( X t + f c- )}.

and consequently:

u(t + h,x) = Ex{u{t,Xh)eft *UXt+H-).h< T l j

= u(t,a:)eCt+ , l(x)""<:t(a:)e~2'cn/l

\y-x\=l

and:

ti(t + /i,x) — i/(t, x) = ti(t,x)c^+fcW"CtW[c-2/8nfc - 1]

+ u(f, x)c"CtW[c^+fcW - eCt^] (II.7)

\y-x\=l

Let us consider the problem of the continuity of the function u(t,x). In order to do so we assume that t > 0 and x G 2Zn are fixed, and we let the time variable varies while the space variable is kept fixed. In other words we consider, for t and x fixed, the limit h —> 0 in the equation (II.7) above. It is clear that, because of our continuity assumption on Ct{x) the first two terms of the right hand side go to 0. The convergence toward 0 of the third term is equivalent to the convergence toward 0 of:

E x

fh {u(t + h-Tliy)\h>Ti}= u(i + s)e

Jo

Page 31: Parabolic Anderson Problem and Intermittency

22 RENE A. CARMONA AND S.A. MOLCHANOV

which is clear by our finiteness assumption.

Remark: When the initial condition UQ(X) is nonnegative it is interesting to notice tha t the function u(f ,x) defined by formula (II.6) is most of the time unbounded, even when UQ(X) is bounded. Indeed, the obvious lower bound:

ti(*,x) > JEx{u0{Xt)efod<*(x<-°); N(i) = Q} (II.8)

= uQ{x)e^x^0^]Po{N(t) = 0} (II.9)

= u0{x)e(:t^-<0^e-2dKt (11.10)

is obtained by restricting the expectation to the paths which do not j ump before time t, i.e. the paths which leave x after time t, and this lower bound shows that , for fixed t > 0, the function u(t,x) can be unbounded even when UQ(X) is bounded, as long as the increment of £ are unbounded. We shall come back to this elementary remark in several occasions.

We now consider the problem of the differentiability with respect to t of the function u ( t , x ) . In order to do so we divide both sides of the equation (II.7) by h. Obviously the first term goes to — 2Knu(t,x) when h —• 0. The first factor in the expectation converges to 1 and is uniformly bounded because Ti < h. Since ti(t, x) is continuous in t for each fixed x, the expectation is easily seen to converge (after division by h) to nAu(t) x) + 2Knu(t, x) because h~llP{h > T\) converges to a = 2nn. The problem reduces to the convergence of the term:

e - C t o f —1 . ( II . l l ) h

Notice that , in the particular case of a differentiable function one has:

cC,+ h(x)_ eC,(*) = ft^)eC.(x)+o(A)

at which shows that (II . l l ) converges to dC,t{x)/dt. We summarize these facts in a Proposition.

P r o p o s i t i o n II .1 .1 Let us assume that the function O(^) is continuous in t for fixed x and that the right hand side of formula (II. 6) is finite for every choice of t and x. Then the limit:

Page 32: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 23

u(t + h,x) - u(t,x) hm — 7 -h\o h

exists if and only if the limit:

. .cCt+h(») _ ett(x)

«,(,) = Jjm.-W) -exists and in this case the function u is solution of the parabolic equation:

— = KAU + £U. ot

Obviously,

whenever £ is differentiable in / with a locally bounded partial derivative.

The following corollary will play a crucial role in the sequel.

Corollary 11.1,2 Let us assume that the function &(s) is measurable in t for fixed x}

that UQ(X) > 0 for all x £ 2%r and that the right hand side of formula (II.6) is finite for every choice oft and x. Then the function u defined by formula (II. 6) is the minimal solution of the parabolic equation:

— = KAU + £U.

Proof: The fact that u is a solution is an immediate consequence of the above proposition. In proving the second claim, we give a stochastic analysis justification of the Feynman-Kac formula. This will explain why this formula appears naturally in the study of equation (1.1) with a potential £t(x) given by a function. Let us assume that the function v(t, x) is continuously differentiable in t for each fixed x G TLn'. Let us assume that t > 0 and x G % are fixed and that XQ = x. Then, for every s £ (0,/) ordinary calculus gives:

v(t-s,Xs) = v(t)x)- (9 ^(t-a,Xa)da+ T [v(t - a,Xa) - v(t - a,Xa-)].

Page 33: Parabolic Anderson Problem and Intermittency

24 RENE A. CARMONA AND S.A. MOLCHANOV

It is natural to compensate the sum of jumps in the right hand side by adding and subtracting its compensator. Consequently, if we denote by v the Levy measure of {Xt\ t > 0}, we get:

v{t-8,X8) = v(t,x)-J'-£(t-a,Xa)da

+ J J[v(t - a,Xa- + y) - v(t - atXa-)]dav(dy) + M3

= v(t,x)+ fS [—{t-a.X^ + KAvit-a.Xa^da + Ms (11.12) Jo+ ot

where Ms is a local martingale. Let us now set:

v(t - s, Xs) = Vs + Ms and As = / £t-a{Xa-)da. Jo

Using formula (11.12) and integration by parts (for semimartingales) one gets:

v(t-8,X8)eA* = v ( t ,x)+ [* v(i-a,XQ)d(eA«)+ / * eA* dav(t - a,Xa) Jo Jo

+ [v(t-s,Xs),eA']

= v(t,x)+ fSv(t-a,Xa)Zt-a(Xa-)eA°)da+ [' eA" dVa Jo Jo

+ [' eA*dMa. (11.13) Jo

Notice that Ms = f£ eAa dMa is another local martingale. Consequently, if we now assume that v{i}x) is a solution of our fundamental equation (I.l), then we have v(t — a,X a_)£*_ a(X a_) + dtVa = 0 and formula (11.13) can be rewritten as:

v{t - 5, Xs)eA> = v(*, x) + Ms (11.14)

which is a local martingale in s G (0,t). Consequently, for a sequence {Tn; n > 1} of stopping times increasing to t we have:

v(t,x) = JEx{v(t - Tn,XTn)efon^^)da}

and, if we also assume that v is a nonnegative solution, we can use Fatou's lemma in the limit n —• oo and get:

Page 34: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 25

This completes the proof. I

Remark : It was proven in [10] that if the potential function £t(x) is stationary (recall that this means that the function &(x) is actually independent of the time variable t) and if it is not percolating from below (in a sense given in [10]) then the right hand side of formula (II.6) is actually finite for every t and x. Consequently, if one can prove uniqueness of the solution of equation (LI), the whole existence-uniqueness problem reduces to proving that the expectation in formula (II.6) makes sense for all t > 0 and I G T . This solves the existence problem in terms of properties of the potential function £ (at least when the latter is time independent) instead of the sufficient condition involving the finiteness of the expectation defining u(t,x) in formula (II.6).

We now consider in detail the problem of the uniqueness of the solution of equation (1.1).

IL1.4 Uniqueness

It is possible to prove uniqueness of the solution of the parabolic equation (1.1) in classes of functions whose spatial behavior at infinity is at most exponential. In this respect it is convenient to introduce weighted Hilbert spaces 7iw = £^(Z5 ) and to check that the solution uit^x) given by the Feynman-Kac representation (II.6) does belong to some of these spaces. We postpone the definition of these spaces for we shall not use them now.

But because of our special interest in nonnegative initial conditions UQ(X) we shall first consider the uniqueness problem in the class of nonnegative solutions u(t, x). We shall assume that the potential function £t{x) satisfies the following growth condition: for each T > 0 there exists a positive constant CT for which:

Kt(*)l <cr^ / i + iog+ |x|, zexd, o < t < r . (11.15)

We shall see later that, under a mild condition on the covariance function, this con­dition is almost surely satisfied by the sample realizations of typical homogeneous Gaussian fields.

The uniqueness is proven by first considering the problem in finite domains. In this case the problem can easily be solved for the solution has a nice representation in

Page 35: Parabolic Anderson Problem and Intermittency

26 RENE A. CARMONA AND S.A. MOLCHANOV

term of its boundary values and a Poisson-type kernel. The general case can then be obtained by a limiting procedure in which the finite domains extend to the whole lattice TL . For each r > 0 we consider the cube A r defined by:

Ar = {x € TLd\ \x\ < r}

and we denote by rr the first exit time of the random walk, i.e.

7> = inf{t > 0; Xt e dAr}

where we used the notation:

0Ar = {xe TLd\ | s | = r}

for the inner boundary of the set A r . The Markov property gives:

u(t,x) = JEx{u0(Xt)eIo^x^ds]t<Tr}

TEx{u{t - r r,XTr)e^-r r6(^-^5

; Tr < t} (11.16)

Let v(t,x) be a solution of (LI) which satisfies the same initial condition as u, i.e. such that v(Q)x) = UQ(X) for all x £ TL . The limiting procedure is controlled by an estimate of the Poisson kernel. The latter is given by the joint distribution 7r£(y,cft) of the time and the location of exit from A r , i.e.

7Trx{y)dt) = lPx{XTr = y , Tr£dt]

and more precisely in its asymptotic behavior as r —» oo. Using the independence of the times of jumps and the sites of the lattice visited by the random walk and using standard properties of Poisson processes one gets:

L e m m a I I . 1 . 3 The exist distribution 7r£(y, dt) has a density 7r£(y,i) given by the formula:

<(y.<) = Ej^Yy^^r^ri^y) ("-17) k>r ^ '

where Nr(x)y) denotes the numbers of paths 7 of nearest neighbors sites such that 7 ( 0 ) = x, | 7 ( 1 ) | < r, • • • , |7(* — 1)| < r and 7(fc) = y.

This density can be estimated in the following way:

Page 36: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 27

Lemma II.1.4 i s r - » o o one has:

t r _ 1 (eK)rre~r l o g r"Si<i<d lw:l los(lvjIA) <{V,t) ~ ( 2 x ) - / ' V ( l V | y i | ) - ( l V | W | )

uniformly in t restricted to a bounded interval for \y\ = r.

The proof of this estimate is based on the fact that the main contribution to the infinite sum of (II. 17) comes from the first term corresponding to k = r and repeated use of Stirling's formula. We omit the details.

Proposition II. 1.5 Uniqueness of nonnegative solutions of equation (LI) holds when­ever the potential function &(z) satisfies condition (11.15).

One has to prove that, if u(t,x) is a nonnegative solution satisfying u(0,x) = 0 then u(t,x) is identically zero. The proof uses the result of Corollary II.1.2 and the above estimate on the exit density 7r£(y,tf). We do not give the details because the procedure is quite standard.

II.2 The Random Case

IL2.1 Set t ing of the P r o b l em

We assume that {Ct(x)\ * > 0, x £ 7Zd} is a mean zero Gaussian field defined on a com­plete probability space (Q,^7, P ) . We shall use the notation ( • ) for the expectation of a random variable over this probability space, i.e.:

(*) = / $(u;) dP(u>). Jn

The distribution of the Gaussian process £ is determined by its covariance function. We shall assume that it is the tensor product of the covariance of a Gaussian process on the half line [0, oo) by the covariance of a Gaussian field on 7L . In other words, we shall assume that its covariance is of the form:

n(s,x),(tty)) = rQ(s,t)rl(x,y). (11.18)

We shall always assume that the spatial covariance T\ is homogeneous, i.e.

r 1(x,y) = r 1 ( x - y ) .

Page 37: Parabolic Anderson Problem and Intermittency

28 RENE A. CARMONA AND S.A. MOLCHANOV

We shall say tha t £ is a spat ial w h i t e noise , or that it is a white noise in space if:

I*i(x -y) = 6Q(x-y).

When the Gaussian process £ is the random potential £t(x) appearing in equation ( I . l ) , we shall also assume tha t the t i m e covariance To is homogeneous in the sense tha t it is a function of the difference s — t, i.e.

r0(M) = ro(s-*) and we shall say that £ is a t i m e w h i t e noise if:

r0(fi,*) = « o ( « - * ) -

It is well known that a rigorous mathematical definition of such a white noise process requires some extra work. Our way out will be to consider £ as the time derivative of a (one parameter) Wiener process with values in an infinite dimensional space. This interpretation is very natural for the study of the stochastic equation ( I . l ) . Indeed, the latter can then be interpreted as an Ito equation in infinite dimensions. In any case, we shall consider the time antiderivative:

C«(*)= [*Ux)d8 Jo

because it always makes sense.

We shall say that £ is a space - t ime white noise if it is a white noise both in time and in space. In the sequel, the only case of interest for which the random potential £t(x) does not satisfy the assumption of time stationarity is not satisfied is the piecewise constant approximation of white noise for which:

r i ° (M) = *>([«/«]-[<AD where we use the notation [ • ] for the integer part of a nonnegative real number and where e is a fixed positive number. The time evolution of such a process is given by the following simple description: it is equal to a Gaussian field {£o(x)] x £ ^ } w ^ h covariance Ti as long as 0 < t < e, then it is equal to a Gaussian field {£i(z); x £ ^2 } which is independent of the previous one and has the same distribution as long as e < t < 2e, then it is equal to a Gaussian field {£2(2)] % £ 2Z } which is independent of the previous ones and has the same distribution as long as 2e < t < 3e, It

Page 38: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 29

is clear that such a process gives a good approximation of a time white noise when the parameter e is small. This fact is physically obvious. We give a mathematical justification below. Even though we shall refrain from checking all the necessary details, it is not difficult to see that all of the results which we state and prove in the homogeneous case remain valid in the case of this piecewise constant approximation of white noise.

II.2.2 Continuity of the Feynman-Kac Formula

Throughout this section we assume that {G(#); ^ 0 , x 6 22} is a mean zero Gaussian field with a covariance function given by a tensor product. As we explained above, £ should be interpreted as the antiderivative £ of the random potential £. As such it should not be expected to be homogeneous in time. We assume that its covariance r ( (s , x), (t, y)) is of the form:

(C.(*)C«(y)) = f((s ,x),( t ,y)) = f o ( M ) r i ( * - y)

where the time covariance To is determined by a measure /i on the first quadrant [0, oo) x [0,oo) of the (s,tf)-plane in the sense that:

fo(a)O = /i([0,«]x[0,<]).

Notice that the measure \x is necessarily symmetric with respect to the diagonal of the first quadrant. In the particular case (j,{x) = £t(x) where £ is the (time) antiderivative of a homogeneous field £t(x) with covariance T = To ® Ti then one has:

(C.(*)Ct(y)) = I* f\u*)tp(y))dcxdp Jo Jo

= r / r 0 ( a - /?) da dp Ti(x - y) Jo Jo

so that:

f o ( M ) = f f To{cc-(3) da dp. Jo Jo

Notice that, in this particular case, one has fi(ds, di) = To(s—t)ds dt and consequently, /i is invariant under the shifts parallel to the first diagonal in the sense that:

H(A) = AI((*,<) + A), t > 0, A e 3[0 ,oo)x[0,oo) (11.19)

Page 39: Parabolic Anderson Problem and Intermittency

30 RENE A. CARMONA AND S.A. MOLCHANOV

This shift invariance is a reasonable assumption which can be used to generalize the results beyond the case of the time antiderivative of a time homogeneous field. Note also that, still in this particular case, one has:

\fi\([0,t]x[0,t))<tJ\To{r)\dT.

Recall that the covaraince function T\ is nonnegative definite and consequently, it is bounded and:

Ti(0) = sup{ri0c); x G } . (11.20)

We first check that, for every t > 0 and every x £ 2Z the Feynman-Kac expectation:

E,{e/o «•<*•->} makes sense for almost every realization of the random field £. The following simple calculation will be of crucial importance.

(Er{e/o "«*"•>}) = E,{(e£ «•<*->)}

The expectation in the exponent can be computed explicitly because, for each fixed sample path of the continuous time random walk, the random variable fQ d£s(Xt-s) is Gaussian with mean zero. Moreover:

( f dC(Xt-.) f dUXt-s)) = £ ([C(t-Th)AXh) - C(t-Th+1)Va(Xh)] Ja Jc h,k>0

[C(t-Tk)*d{Xk) - C(t-Tk+l)Vc(Xk)])

= J^ r i ( ^ - **)/*([(* - Th+i) A 6, (i - Th) V o) h,k>0

x [ ( t - T f c + i ) A d , ( < - r i k V c ) )

= / / r^Xt.tx -xt.t3)ii(dtltdt3). (n.21) Ja Jc

which gives:

Page 40: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 31

(Qf* dC.(*t- .)) ) = f f r i ( ^ - t i - Xt-t*) dfi(dh,dt2). (11.22)

The final result is stated as a lemma for future reference.

Lemma II.2.1 Let us assume that the covariance V of the mean zero Gaussian field {Ct(x)'i t > 0, x G 27*} is a tensor product F = To ® Ti where To is given by a symmetric measure \i on the first quadrant [0,oo) x [0,oo). Then, for every t > 0, x G %? and p > 0 one has:

(Ex iepfo «.<*—>|) < eP2ri(o)|H(MxM)/2 t ( n 2 3 )

The first consequence of this apriori estimate is the existence, for each fixed (t,x) G [0, oo) x TL of a random variable:

u(*,*) = E*je/o *•(*«-•>}.

on the probability space (0 ,^*,^) on which the random field £ is defined. We now show that the family of random variables so defined has a continuous version. This will be a consequence of the crucial estimate which follows:

Lemma II.2.2 On the top of the assumptions of the previous lemma we assume that, there is a positive constant e > 0 such that, for every T > 0 there exists a constant CT > 0 such that:

/i([t,t + h] x [Q,T])| <c T / i ( 1 + e ) / 2 , t,h> 0, t + h<T. (11.24)

T/ien, /or each fixed K > 0 and T > 0 J/zere exists a constant c = C(K, T) w/uc/i depends on To on/y through the constant cj and for which:

{\u(t + h}z)~ ti(t,x)|4) < cri(0)2ft2 (11.25)

for all x e l&tt^Q and h>0 such that t + h<T.

Proof: We first write:

Page 41: Parabolic Anderson Problem and Intermittency

32 RENE A. CARMONA AND S.A. MOLCHANOV

u(t + h,x)- u{t,x) = Ex LlodUX,+h-,)[eft,+hdCs(XHh.s) _ j j j

+ E X | e / 0, ^(^- , ) [ e / 0 *<i<: . (^ + h - . ) - / 0 ' ^ (^ - . ) _ j A .

Consequently we have:

(|u(< + h,x)- u(t,x)\4) < 8(|lEx [e/o «•<*•+*-)[c/,*+*«•(**+*-) - 1 ] | | )

+ 8 ( | E X [e/o *«(*'-'>[e/o<K»(*«+*-')-/o *<•(**-•> _ l ] | | )

= 8 ( |E I {e^ (e B l - 1)}|4) + 8( |E x{e^(eB* - 1)}|4>,

where we set:

ft rt+h Ai=j d(s(Xt+h„s) and Bi = Jt

dUXt+h-s),

and:

A* = f dCs(Xt-s) and B2 = I* dC(Xt+h.,) - I* dC{Xt-.). Jo Jo Jo

Each term is controlled by the estimate:

{\Ex{eA(eB - 1)}|4) < Ex{(e1 2 A)}|1 /3E{((eB - l ) 6 )} 2 / 3 . (11.26)

which follows easily from Holder's inequality. Indeed, Lemma II.2.1 above can be used to control the first factors of the right hand side of (11.26). In both cases (i.e. for j = 1,2) one gets:

Ex{(e 1 2 A ' )} | 1 / 3 < e^x(o)Mao,T\x[o,T\) ( n 2 ? )

for all t > 0 and h > 0 as long as t + h < T. The second factor of the right hand side of of (11.26) is controlled by the estimate:

( ( e B - l ) 6 ) = ei8<£2>-6e25<B2>/2 + 15e8<B2>

- 20e9<B2>/2 + 15e2<B2> - 6e<B^2 + 1 = c0(fi2)3 (11.28)

Page 42: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 33

which holds for some CQ > 0 and any mean zero Gaussian random variable B as long as (B2) is bounded. In the first case, the argument proving formula (11.21) gives:

(52)<r 1 (o) | / i | ( [ f , t + / i ]x [ t ) t + ft])1

which gives:

Ex{((eBl - l ) 6 ) } 2 ' 3 < coT1(0)2\n\([t,t + h] x[t,i + h])2

< coCTTiiOfh1*'. (11.29)

The second case is treated as follows. Using (11.21) and (11.22) we get:

(B2) = (^daxt+h^) + ((J*d<;3(Xt-s))

-2((j*duxt+h_^ (jf'c*a*<-,)> — / / [Fl(Xt+h-si — Xt+h-s2) ~ Wi(Xt+h-si — Xt-si)

JO JO + Ti(Xt.Sl - Xt-S2)]n(dSl,ds2)

< 2rx(0) / / (1 A \Xt+h.Sl - Xt.Sl\+ 1 A \Xt+h.S2 - Xt.S2\) /x(dsi,ck2) Jo Jo

where we used the obvious bound:

\V1(x)-T1(y)\<2r1(0)(lA\x-y\).

Since the paths Xs change only by jumps of size 1 we have:

(Bl) < 4r!(0)c rAr(t + h)hSl+cV2.

This gives:

Ex{((e f l2 - l ) 6 )} 2 / 3 < 16ri(0)24E{AT(r)3}2 /3 / i1 + £ . (11.30)

Putting together (11.27), (11.29) and (11.30) gives the desired result. |

Page 43: Parabolic Anderson Problem and Intermittency

34 RENE A. CARMONA AND S.A. MOLCHANOV

The first important consequence of the estimate (11.25) is that the stochastic field {ti(t, x); t > 0, x G 22d} has a version with continuous sample paths. Henceforth we shall assume that we are working with this specific version.

We now come back to the lower bound (II.8) which was used to prove that the Feynman-Kac formula could give an unbounded solution in the deterministic case. We derive a similar lower bound with a slightly different argument. Jensen's inequal­ity gives:

where:

TH(z) = &*{[*<!&&-)} Jo

is a mean zero Gaussian field with covariance:

< i;.(*)»fc(y) > = J J E 0 ® Eo{r!(x - y + x\% - x\%) dfi(dtudt2)

where X^ and X^ are independent copies of the continuous time random walk on the lattice 7Ld. It is then easy to see that, under mild conditions on the measure H(dti)di2) giving the time covariance and the spatial covariance Ti, that the Gaussian field rjt(x) satisfies:

IPQiminf , Mx)l >Q} = 1. N-oo ^ l + log+ |* |

See for example LemmaII.2.4 below for a similar result. This implies that the solution w(t,x) given by the Feynman-Kac formula is unbounded if the initial condition is constant over the lattice.

The estimate (11.25) was used in conjunction with Kolmogorov's criterion to give the existence of a continuous version of the Feynman-Kac formula. It can also be used to give the following tightness result.

Page 44: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 35

Lemma II.2.3 Let us consider a family of mean zero Gaussian random fields \Q {x)\ * > 0, x £ 2P1} parametrized by e > 0 such that, for each e > 0 the covariance T^ of Q (x) is given by a tensor product T^ = IQ ® I j w/iere IQ 25 ^'uen by a symmetric measure ^ on the first quadrant [0,oo) x [0,oo). We assume that:

supr[e)(0) < oo o o

and that assumption (11.24) ^s satisfied for each e > 0 with a constant CT independent of e. Then the family of distributions of the corresponding family {u^(t,x)\ t > 0, x £ 2^} of solution random fields is tight.

We shall use this fact in several instances and in particular in the case when:

r f >(*) = rx(«) for a fixed (bounded) spatial covariance Ti and when the dependence upon e is obtained through a scaling of the time variable t.

IL2.3 The Case of a Homogeneous Potential Field

As before we assume that {£t(x)] t > 0, x £ % } is a mean zero homogeneous Gaussian field with a covariance function r = To ® Y\ given by a tensor product and such that To is integrable. We shall use the same notation as in the previous section.

We showed that the random function ii(t,x) defined by the Feynman-Kac formula (II.6) is a solution of the random parabolic equation (1.1) for IP-almost all u) £ fi. Indeed, we proved that u(t,x) so defined is almost surely a continuous function off and consequently, Proposition II.l. l gives the desired result. This solves the problem of the existence of a solution. The uniqueness problem is solved by proving that the random potential £*(#) satisfies almost surely the condition (11.15) under which we proved uniqueness of the solution. We proceed to the proof of such a property of the random potential. The proof of the following estimate is standard in the theory of homogeneous Gaussian fields and the result we prove is presumably known. We include it for the sake of completeness.

Lemma II.2.4 If the homogeneous (mean zero) Gaussian field {&(z); t > 0, x £ SB?1} satisfies the above condition and the time covariance To is Holder continuous at 0 in the sense that there exists a 8 > 0 such that for every T > 0 one has:

| r 0 ( t ) - r 0(o) | <c'Tt8, o<t<T,

Page 45: Parabolic Anderson Problem and Intermittency

36 RENE A. CARMONA AND S.A. MOLCHANOV

then for every T > 0 we have:

P{ sup -jJM^L=<oo} = l (11.31) 0<*<T, xe2Zd yjl + l o g + |x |

Note that the assumption on the covariance implies the existence of a version of £t(x) which has continuous sample paths. We shall not need this fact in this section though.

Proof: We fix T > 0 and for each integer n > 1 and x £ 7L we define the event AnyX by:

AnyX = { sup \Z(k+i)T2-»{*) ~ &T2-»(*)I > qnj2T1{0)pJl + \og+\x\} A:=0,..-,2n-l V V

where the constants 0 < q < 1 and p > 0 will be chosen later. If £ denotes any standard AT(0,1) random variable we have:

gwP>/l + log+ |x |

— /r* 7T\\ in* /nnn — n \ * JP{Anx) < 2 n P{K | - . , x 7 r

< 2nP{|£| > T-s2nSqnPyJl + log+ |«|}

< c' exp[n log 2 - (2sq)2np2(l + log+ |x|)/2]

and consequently, as long as 2s g > 1, one has:

^ I P { A n f r } < c / c - ^ 1 + l o « + W ) / 2

n>l

and:

n>l,x€Zd

provided p is large enough. Here (and in the following) d stands for a positive constant the value of which may change from line to line. The first Borel-Cantelli lemma implies that at most finitely many of the events An,x can occur with probability 1. Now, if 0 < t < T we set:

1 = £1 + 12 £3 T 2 22 23

for a sequence ei, 62, 63, • • • of numbers in {0,1} and we write:

Ux) = to(x) + KC lT/2(*) " & ( * ) ] + K«aT2-*(*) - taT/2{*)] + KeaT2-*(*) - •

Page 46: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 37

Consequently one has:

M*)\ < Ko(*)l + ^ i + iog+N £ qn

l<n<oo

< M*)\ + cy/l + \og+\x\

for some random constant c = c(w). One can also apply the first Borel-Cantelli lemma to the JV(0,1) random variables £o(x) and show the existence almost sure of another random constant c\ for which:

Ko(*)|+ < ciy/l + log+ 1*1

for all x £ 2L . Putting together these two estimates completes the proof. |

Putting everything together we obtain:

Proposi t ion II.2.5 Let us assume that the homogeneous (mean zero) Gaussian field {&(x); * > 0, x G 2^} has a covariance T = To®ri of the tensor product type and the time covariance To is integrable and Holder continuous at the origin (in the sense given in the above lemma). Then the random parabolic equation (LI) has P-almost surely a unique nonnegative solution u(t,x) and this solution is given by the Feynman-Kac formula (IL6).

IL2.4 T h e Case of a W h i t e Noise Potent ial

We now consider the case of a potential process which is a white noise in the time variable. The interpretation of the stochastic equation (1.1) relies on the interpretation of (,%{x)dt as d^t{x) where \C,t[x)\ t > 0} is a Wiener process for each x. Indeed, if one rewrites (1.1) as a system of integral equations one gets:

u(t)x) = l+ K[AU(S, -)](x)ds + / u(syx)£s(x)ds Jo Jo

or equivalently:

Ti(*,x) = l + / K[Au(sr)](x)ds + J u{six)d(jS£s,(x)ds')

which can be replaced by the equation:

Page 47: Parabolic Anderson Problem and Intermittency

38 RENE A. CARMONA AND S.A. MOLCHANOV

u(t}x) = 1 + / K[Au(sr)](x)ds+ I u(s,x) o dCs(x). (11.32) Jo Jo

We shall see in the next section that this stochastic integral equation has to be un­derstood in the sense of Stratonovich and this is the reason why we used the standard notation o in the right most stochastic integral. We shall solve this equation as an evolution equation in the weighted Hilbert space 7iw = ^{TL ) . Given a positive function w(x) on the lattice TLd, this space is defined as the set of complex functions u(x) satisfying:

IMU = j £ w(x)\u(x)\2 < oo. yxezd

This space is a separable Hilbert space for the inner product derived from the polar­ization of the formula giving its norm. We shall assume tha t the weight w is such that there exist positive constants c\ and c2 for which:

w( x) ci < -fir < c2 (11.33)

whenever \x — xf\ < 1. Because of this condition, the discrete Laplacian A has a bounded extension to the weighted space 7iw. It is possible to rewrite the equation (11.32) as an equation in the Ito's sense. See formula (11.36) below. The new equation has exactly the same form. The only change is in the drift term: a linear operator has to be added to the drift /-cA. Consequently, in order to treat both the Stratonovich and the Ito's cases simultaneously, we show existence and uniqueness of the solution of the following equation:

u ( t , x ) = 1 + / [Bu(sr)](x)ds+ f u{syx) o dCs{x). (11.34) Jo Jo

in the Ito sense by considering a drift term [Bu(s} • )](x) where B is a bounded operator on the Hilbert space Tiw. Notice that for any (possibly random) function f ( t , x ) one has:

< su 0<s

p || / v(a , -)d(a(')\\l> < £ > ( * ) < SUP ( 7 v(",x)dCa{x)) > <t JO % 0<s<t V 0 /

= T^Q) f <\\v(s,.)\\l> ds. Jo

Page 48: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 39

Consequently, if we begin with u(°> £ 7iw and if we set:

u(n+V(t) = u^(t)+ f Bu(n\s)ds+ J u^(s)dCs Jo Jo then we get a sequence of random functions with values in the space 7iw and the classical proof can be used in the present infinite dimensional setting to show existence and uniqueness of a solution of equation (11.32) whether it is understood in the Ito or Stratonovich sense.

The above argument is standard. This proof is so short that we reproduced it instead of referring to one of the numerous papers or texts on infinite dimensional stochastic differential equations. Indeed, giving the proof seemed shorter than any attempt we made at checking that the assumptions of a specific theorem were satisfied.

Finally we note that, in order for the constant function UQ(X) = 1 to be used as an initial condition for the stochastic equation (11.32) we shall assume that:

£ u ; ( x ) < o o . (IL35) X

We now consider the case of equation (11.32) understood in the Stratonovich sense. This equation can be rewritten in the Ito sense in the form:

ti(t, x) = 1 + / [Bu{s, •)](*) ds + f u{s} x) dO{x). (11.36) Jo Jo

with:

B - KA + Ti

where T\ denotes the convolution operator associated to the spatial part of the covari-ance. If we assume that this covariance is integrable, i.e.

^ r 1 ( a ; ) < o o (11.37) X

then the drift operator B is bounded and the above result applies. We summarize the results of this section in the following:

Theo rem II.2.6 Let us assume that the weight function w(x) satisfies the conditions (11.33) and (11.35). Then the Ito stochastic differential equation (11.36) where the

Page 49: Parabolic Anderson Problem and Intermittency

40 RENE A. CARMONA AND S.A. MOLCHANOV

stochastic integral is understood in the Ito's sense, has a unique solution, say u^\t} x), in the weighted Hilbert space %w. If moreover the spatial covariance Y\ is integrable (i.e. if condition (11.37) is satisfied), then the Stratonovich equation (11.32) also has a unique solution, say u^s\tyx), in %w. In this case we have:

uW{t,z) = e-ir*W2uW{t,z) (11.38)

II.2.5 A Diffusion Limit Approximat ion Resul t

The purpose of the present section is to justify the study of the white noise case (presented in the previous section) as a limiting object of more realistic models. We consider a family of Gaussian fields {Q (x); t > 0, x £ TL } parametrized by a parameter e > 0 and we consider the limit e —• 0 of the corresponding family of solutions of equation (1.1). We shall try to write the results in this general setting, but the typical example we have in mind corresponds to the case:

rfe)(*)= ftPWds (11.39) Jo

where £][ (x) is a Gaussian field satisfying:

&(0(*) = 7&A»(*), * > 0 , x£7Ld

for some fixed mean zero Gaussian field &(x). As usual we shall assume that its covariance is the tensor product of a homogeneous time covariance and a homogeneous spatial covariance. The asymptotic regime of the small parameter e tending to 0 is usually called "diffusion approximation".

We set the notations and we state the assumptions corresponding to the more general framework in which we want to work. The need for this greater generality comes from a couple of models which are almost of the type described above and which we shall need to investigate in the sequel. We shall consider various equations of the type (1.1) or (11.32). We shall always restrict ourselves to nonnegative solutions so that uniqueness of the solution holds. Consequently, we consider that a given bounded nonnegative function UQ(X) has been chosen once for all to be used as initial condition for these equations. The cases of interest to us correspond to UQ(X) = 1 and to UQ(X) = 62(x) for a fixed site z € TL .

Page 50: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 41

We assume that, for each e > 0, the covariance T^ of Ct \x) ls g i y e n by a tensor product r(e) = TQ ® I j where IQ is given by a symmetric measure /i'e) on the first quadrant [0,oo) x [0,oo) in the sense that:

t^(s,t) = ([0,s]x[0,t]), s,t>0.

We also assume that:

sup 1 ^ ( 0 ) < 00 e>0

and that assumption (11.24) is satisfied for each e > 0 with a constant cj» independent ofe. For each e > 0 we assume that Ct (x) ls differentiate in t, i.e. that it is of the form (11.39) and we consider the solution u(c\t,x) of the equation (LI) with the random potential Zf (x) defined by formula (11.39). We proved in Lemma II.2.3 that the family of distributions of the corresponding solution random fields {u(c)(*> ar); t > 0, x £ TLd) was tight. Since the random field Ct \x)ls differentiable in time, the solution iz(c)(t, x) is given by the Feynman-Kac formula:

u^it.x) = TE{uQ(Xt)efo^^x^)ds}. (11.40)

Let us fix an arbitrary sequence {en; n > 1} of positive numbers converging to 0. Extracting a subsequence if necessary, we can assume without any loss of generality that both Ct (x) a n ^ u(€\t}x) converge in distribution. Let us denote the limits by Ct(x) and u(t,x) respectively. Using Skorohod representation theorem, we can assume without any loss of generality that the convergences are in fact almost sure. One may have to redefine all the stochastic processes and fields on a new probability space but this will not affect the rest of the proof. The goal of this section is to identify u(t,x) as the solution of the Stratonovich equation (11.32). We prove:

Theo rem II.2.7 Let us assume (for the sake of simplicity) that I j = Ti, that UQ(X) is a nonnegative function on the lattice ZT*, that assumption (11.24) i>s satisfied for each e > 0 with a constant CT independent of e, that Ct (x) *s differentiable in t and that:

limflc)(M) = s M

Page 51: Parabolic Anderson Problem and Intermittency

42 RENE A. CARMONA AND S.A. MOLCHANOV

for all s,t > 0. For each e > 0 we denote by u^(t}x) the solution of the fundamental equation (LI) with potential £t(x) equal to the time derivative of Q (x) and initial condition uo(x). Then the field u^(t} x) converges in distribution as e \ 0 toward the solution of the Stratonovich equation (11.32).

Proof: The discussion above shows that u^(t}x) is tight. It also shows that it actually con­verges because Ct (x) converges in distribution toward an infinite dimensional Brow-nian motion Ct{x) w ^ h covariance (s At)Ti(x — y) and because the limit of any sub­sequence, say ix(t,x), is necessarily given by the Feynman-Kac formula. This fact is easily proven by using the almost sure convergence given by the Skorohod represen­tation discussed above and the uniform integrability given by the estimate (11.25). It remains only to identify this u(t.x) as the solution of the Stratonovich equation (11.32). The rest of the proof follows the lines of the finite dimensional case as treated in [22]. It is long and technical so we only give the main ideas. Let us introduce the Gaussian field Qn'(x) which is continuous and piecewise linear in t and which satisfies Qn'(x) = Ct{x) when t is of the form fc2"n. Notice that its time derivative $ (x) is given by the formula:

rfB)(«) = 2" (QC.)-&.(*)) if one uses the notations tn = [2nt]/2n and t+ = ([2nt] + l ) / 2 n . We then consider the solution v,(n)(t,x) of equation (1.1) for the potential Qn'(x). It is given by the classical Feynman-Kac formula, i.e.

ti<n>(t,x) = ^x{uo{Xt)efo^l\(Xs)d»y

For each fixed sample path X. of the random walk, the integral in the exponential converges to the desired quantity. More precisely,

Km ['&\(X.)ds = f dCt-3(Xs)ds. e \ JO JQ

Moreover, one can compute the moments of the exponential of this integral in the usual way and conclude that the family is uniformly integrable. This implies that, for each fixed t and x) the random variable u^ii^x) converges, in all Lp-spaces to the random variable u{t^x) given by the Feynman-Kac formula (II.6). A similar argument shows that the finite dimensional distributions also converge. It remains to prove that

Page 52: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 43

the finite dimensional marginal distributions of u(t,x) coincide with the corresponding marginal distributions of the solution of the Stratonovich equation (11.32). As above, we only argue the case of the single distribution of u(t, x) for t and x fixed. The desired conclusion follows from the fact that :

Mt = u(i , x) - u0(x) - / [A + r i ] u ( s , • )(x) ds Jo

is a martingale and that :

[ M , M ] t = 1^(0) f'lufaxtfds. Jo

We are working with any filtration T% which is admissible for the infinite dimensional Brownian motion (^ i.e. a filtration with respect to which all of the processes {Ct{x)] t > 0} are Brownian motions. The proof is as follows. First one notices that :

Au(s,x)ds | Tt)

rt+h = lim (u^(t + M ) - u (n)(*>x) - / At i ( n ) (s ,x)ds I Tt)

n — + 0 0 x v Jt ft+h

= l i m ( / Zin\x)u(n\s,x)ds\Ft)

and then one rewrites the integral between t and t + h as a sum of integrals over intervals of the form [k2~n, (k + l ) 2 - n ) . These integrals are then rewritten as:

v.(n\k2~n,x) / &\x)ds Jk2~"

f(k+l)2-» + / tln\x)[u(n\s, x) - u(n\k2~n,x)]ds

Jk2~n

= vSn\k2~n,x) tin)(x)ds Jk2~n

f(k+l)2~n fs

+ / £in )(*) / AuW(a,x)dads Jk2~n Jk2~n

f(k+l)2-n rs + / 6n)(x) / ^)(x)u^(a,x)dads.

Jk2~n Jk2~n

Page 53: Parabolic Anderson Problem and Intermittency

44 RENE A. CARMONA AND S.A. MOLCHANOV

The properties of the last integral imply the appearance, in the limit n —* oo, of the Stratonovich correction / / + Tii^s, -)(x)ds. The details are rather cumbersome. We omit them. I

II.3 Existence and Equations for the Moments This last section is devoted to the proof of the existence of the moments of all orders of the solutions u{t,x). The existence proof requires only a soft analysis argument and it applies consequently to both the white noise case and the finite time length correlation.

Let us assume that x\, . . . , xp are fixed points of the lattice TL and let us also fix t > 0. Under these conditions one has:

m(t, xi, • • •, xp) = (u(t, xi) • • • u(t, xp))

= (EX1 {efo dUXt-s)} lEXp{efo «•<*—>})

= JEX1 ® • • • ® E„{(e/o -C.(^I>J+"-+/0t-C.(xW ) ) }

where {**(1); * > 0}, • • •, {^(

(P); t > 0} denote p independent copies of the continuous time random walk on the lattice TL with diffusion coefficient K. We shall use the following notation, x — (x i , - - - ,x p) will denote a generic point of the lattice 7Lpd, while {Xt = (X\ , •••,-X'f ); t > 0} denotes the standard continuous time random walk on the lattice 7LV with diffusion coefficient K and JE(Xli...tX \{ • } denotes the corresponding expectation JEXl ® • • • ® lE>xp{ • }• Consequently we have:

m( t ,x 1 ) - - . ,x p ) = E :r{(e i4)}

with:

Jo Jo

Notice that, for each fixed path of the random walk, the random variable A is Gaussian with mean 0 and variance:

, T - i Jo Jo i,,7 = 1

Page 54: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 45

= t E([ct_T(oiAt(4,))-ct.I<.)At(4i))] «J=1 h,k>0 "+ 1 *

0">\ /• ... firUh

p

u^)-^m = £ EM^-^SjAM-^AtlxIt-^AM-^A*])

t , j= l /i,Jk>0

Ti^f-^) ft /•* p

Consequently: ( ^ ^ r x C O M M x [(),<])

which is finite for every t > 0. We summarize the result of the above computation in the following:

Proposition II.3.1 Let us assume that the mean zero Gaussian field {Qt{x)\ t > 0, x £ Z?1} has a covariance T = To ® Ti of the tensor product type with a temporal part given by a measure /i on the first quadrant [0, oc) x [0, oo). Then the random field u{t,x) given by the Feynman-Kac formula (II.6) has moments of all orders. They are given by the formula:

m(t,Xl>...,Xp) = ^ ^ { e ^ o V o ' E U r , ^ - ^ ) ^ , ^ ) ^ ( I M 1 )

whenever t > 0 and x\, • • •, xp £ 20?.

In the particular case 0(x) = !tQia{x)ds formula (11.41) becomes:

*.(*,,,,...,,„) = E{xi,...,xp){e* K/.' r.<*-*>EU W«-X%) <M*}. ( n 42)

We now concentrate on the limiting case of a white noise in time and we derive a set of closed equations for the moments of the solution. So we restrict ourselves, from now on, to the particular case:

r0(s -t) = 60(s - t). (11.43)

Page 55: Parabolic Anderson Problem and Intermittency

46 RENE A. CARMONA AND S.A. MOLCHANOV

and we prove:

Theorem II.3.2 For each integer p > 1, each t > 0 and each x = (xi, • • •, xp) £ Wpd

let us set:

mp(t ,x) = (u(t, an) • • • u(t, xp)) (11.44) w;/iere i/(f,x) = u^{ttx) is the solution of the Ito's equation (11.36). Then these moments satisfy the following parabolic equation:

imv = «(A a ? 1 +.- - + A r p ) m p + ] T ri(xj-xk)mp (11.45) 01 l<j<k<p

with the initial condition mp(0, • ) = 1.

It is convenient to write A x = AX1 + • • • AXp and:

Vp(x) = J2 I M ^ - a * ) l<j</:<p

for p > 2 and Vi(x) = 0. With these notations, equation (11.45) becomes:

dmP „ , . -gf = HP(K)rnp

where the operator HP(K) — /cAx + V(x) is a p-particle Schrodinger operator on the lattice 2Lpd.

Proof: The proof we give does not use the explicit form of the moments as given by the Feynman-Kac formula. See the remark below. The proof is slightly longer than the proof discussed in the remark below but it is extremely useful in situations where the Feynman-Kac formula does not hold in a nice form. See for example [1]. It uses Ito's stochastic calculus and the fact that u(t ,xi), ••• , uft^Xp) are semimartingales. Consequently, if (f is a smooth function of p real variables, Ito's formula gives:

(p(u(t, xi), • • •, u(t, xp)) = (p(u(Q, xi), • • •, u(0, xp))

A [* d<p + V / ~K— (u(s, XX), • • • , t i(s, Xp))dw(fi, Xj)

i p /•* e v + 2 ^ Jo ^ ^ ( w ( s ' : C l ) ' ' * , ' t i ( 5 ' : c p ) H l / ( 5 ' x i ) ' t i ( 5 ' ^ ) ]

Page 56: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 47

and if we choose VP(UI, • • •, up) = u\ • • • up we get:

u(t, xi) • • • u(t} xp) = u(0, a?i) • • • tx(0, xp)

Y^ / ACU(S, xi) • • • u(sy XJ-I)AU(S} Xj)u(s) Xj+i)u(s, xp)ds

v *t + J2 ti(s,xi)-..ii(s,Xj-1)ii(fi,xJ-)ti(s,Xj+i)...ti(s,xp)dCj(«)

+ Y^ ri(xj-xk) u(s,xi)--u(s)xj)>-u(s,xk)--u(s)xp)ds \<3<k<p J°

which gives the desired result. •

Remark: There is another way to derive the moment equation. It is appropriate for the solution u(tyx) = u(5)(f,x) when the equation is understood in the sense of Stratonovich. Indeed, the solution is then given by the Feynman-Kac formula. In this case, plugging (11.43) into (11.42) gives:

m p ( t J * l l - . . | *p ) = E ^ ^

and it is easy from there that the moments mp(t, x\) • • •, xp) of the Stratonovich equa­tion are still solution of the deterministic parabolic equation:

for the potential:

i<j,k<P

The difference between the Ito's moment equation and the Stratonovich's moment equation is of course consistent with formula (11.38) linking the two solutions.

We close this chapter with a remark concerning the moments of the fundamental equation (1.1) after the e-scaling. In other words, let us set:

Page 57: Parabolic Anderson Problem and Intermittency

48 RENE A. CARMONA AND S.A. MOLCHANOV

m^(t,x1,---,xp) = (u^(t,x1)--.u^(t,xp)) (11.46)

for the moments of the solution of the scaled equation. In this case we have:

mW(t,«l,..,,,) = E ( , r ^ K V ^ > l r 1 ( x M , * I * » ) ) ( n 4 7 )

for each fixed e. It is easy to see that:

l imm^(*,xi , - -*,Xp) = m(t,ari,- •• ,xp) (11.48)

and this is one more reason to study the easier case e = 0 as an approximation of the more realistic but more difficult case e > 0 and small!

Page 58: Parabolic Anderson Problem and Intermittency

Chapter III

MOMENT LYAPUNOV EXPONENTS AND INTERMITTENCY

This chapter is devoted to the discussion of the asymptotic properties of the mo­ments of the solutions of the fundamental equation (1.1). The asymptotic behavior of these moments is exponential. It is determined by a limiting quantity called the average Lyapunov exponent. In the particular case of a white noise in time, we characterize them as the supremum of the spectrum of some JV-body Schrodinger operators (on the lattice 2Z ). This characterization is of crucial importance be­cause it makes it possible to use perturbation theory and the techniques developed in the spectral theory of many body Schrodinger operators to study the depen­dence of these Lyapunov exponents upon the diffusion coefficient /c and/or the coupling constant a.

We explained in the introduction that the notion of intermittency of the solution field u(t,x) is related to the properties of these exponents. We show existence of the Lyapunov exponents in full generality and we study the intermittency of the fields first in the white noise case, and second in some more general models by perturbation techniques.

We assume throughout this chapter that {Ct(z)i * > 0, x £ TL } is a mean zero Gaussian field of the tensor product type in the sense tha t its covariance function is of the form:

r ( (« ,x ) > ( t , y ) ) = f 0 ( « > < ) r 1 ( z - y )

49

Page 59: Parabolic Anderson Problem and Intermittency

50 RENE A. CARMONA AND S.A. MOLCHANOV

where Ti(x — y) is the covariance of a stationary Gaussian field on 7L and where fo(s,f) the covariance of a Gaussian process with homogeneous increments. In other words, we assume that TQ is the distribution function of a symmetric fi measure on the first quadrant in the sense that:

fo(M) = /i([0,s]x[(M]).

The potential &(z) appearing in the fundamental equation (LI) is to be thought of as the time-derivative of this field Ct(x)- When Ct(x) is actually differentiate with respect to t, its time derivative &(#) is also a homogeneous Gaussian field with a covariance of the tensor product type. In this case, the measure /* is of the form:

/i(ds,dt) = To(s-t)dsdt

where To is the time component of the covariance of £t(x). In the particular case of a time white noise, the measure /i is a multiple of the Lebesgue measure on the first diagonal. In other words one has:

r 0(s-t) = 6o(*-*)-These are the two most important cases which we shall consider in this chapter.

III.l The White Noise Case

Throughout this section we assume that:

r 0 ( f i - t ) = 6o(s-t)>

and we denote by ti(t, x) the solution of the Ito's stochastic equation (11.36).

I I I . l . 1 E x i s t e n c e of the M o m e n t Lyapunov Exponent s

The purpose of this subsection is to prove the existence, for each integer p > 1, of the limit:

lim - logm p ( t , x ) (III.l)

for all p > 0. This limit is called the p-th moment Lyapunov exponent of the solution u(£,£). Later we study its dependence upon the diffusion parameter K. We shall see

Page 60: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 51

that it does not depend on the point x = (a?i, • • •, xp). We use the notation JP(K) for the limit (III.l).

We first identify this Lyapunov exponent with the spectral radius of the operator HP(K) occurring in the parabolic equation satisfied by the moment mp( t ,x) .

Lemma III.1.1 Let n > 1 be an integer, V(x) a real valued bounded function on the lattice Z11 and let us define the operator H(K) = KA + V as a bounded self-adjoint operator on the Hilbert space % = £2(Zn). We denote by r+(Ac) = r+(//"(/c)) the supremum of the spectrum of the operator H(K). AS a function of K, T + (K ) is nonincreasing continuous and convex and r+(0) = ||V||oo« Moreover, if <p(x) is a real valued bounded nonnegative function on the lattice ZT1 which is not identically zero, and if we let m(t, x) be the solution of the parabolic equation:

— = H{n)m

with the initial condition m(0,x) = (f(x), then the limit:

lim - logm(f,x) t-+oo t

exists for every x £ ZF1 and is equal to the number T + ( K ) .

This is a known result in the theory of Schrodinger operators. See for example [21]. We include a proof for the sake of completeness. We use the specifics of the lattice case to this end.

Proof: We are obviously in the conditions of existence and uniqueness for the parabolic prob­lem and the solution is given by the Feynman-Kac formula:

m(tyx) = JEx{^(Xt)efoV{Xa)d3}

As before, {Xt; t > 0} denotes the nearest neighbors continuous time random walk on 7Ln with generator KA. We shall also denote by N(t) the number of jumps before time t. If a > 0 is a constant (to be chosen later), we have:

m(t,x) = JEx{lp(Xt)efoV^x')ds-> N(i)<ai)

lEx{ip(Xt)efov(X'Vs; N(t) > at)

= (*) + (") (M.2)

Page 61: Parabolic Anderson Problem and Intermittency

52 RENE A. CARMONA AND S.A. MOLCHANOV

If the constant a is chosen so that a > 2UK then a simple large deviation estimation gives:

V{N(t) > at] < e-2™Ma/2n*)

where the function h is defined by:

h(t) = <logt + 1 - t , * > 1.

Notice that h(t) > 0 and that h(t) / oo as f / oo. This estimate implies that:

(«) < |b||oo^ l |V | |ooe-2nK/l(a/2nK) (III.3)

where we used the notation || • ||oo for the supremum of a function on the lattice 2Zn. On the other hand:

( i ) < E ^ ( I ( ) e / » ' W V < T }

where T denotes the first exit time of the ball Bt(x) = { y £ TLn\ \y — x\ < at}. The above right hand side can be rewritten as [e~* WlBt(*)^>](x) if we denote by H(n)\Qt^ the restriction of the operator H(K) to the ball Bt(x) (also called the operator H(K) in the ball Bt(x) with Dirichlet boundary condition outside the ball). In any case, this remark gives:

(0 < IMb(B t (*))er+(K,B, (* ) ) t

where r+(/-c, Bt{x)) denotes the spectral radius of the operator iJ(/c)|£t(x). Notice first that r+(«:,fl^(x)) < r+(K). Second that:

M\L>(Bt{x)) < \\<p\Uat)n'2

and consequently we have:

(0 < lbll=o(at)n/2e r+W (III.4)

and putting together the estimates (III.3) and (III.4) one gets:

limsup - logm(t,x) < r~*~(K) (III.5) t-KX> t

if one recalls (III.2). In order to prove the lower bound:

liminf -logm(t,ar) > r+(K) (III.6) t-+oo t

Page 62: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 53

one proves that:

liminf - logm(f ,x) > r+(Ac, D) t—*oo t

for any finite subset D C 2Zn. As before, we denote by r + ( / c ,D) the spectral radius of the restriction H\Q of the operator H to the domain D, The desired lower bound (III.6) will follow from the fact that ^(K^D) increases to r + ( « ) when D f TLn. Let D be a fixed finite subset of 2Zn. As before for any x £ D one gets:

m(t,ar) > "Ex{v{Xt)eSoV{Xa)ds't < rD}

= [etHD?](x)

where we used the notation TD = T^c for the first exit time of the domain D and where Aj^i > \D,2 > ^#,3 > ••• are the eigenvalues of H\Q and V>£>,i,0£>,2, $DtZi • • • the corresponding normalized eigenfunctions. Notice tha t they are in finite number since the domain D is finite and the restriction H\Q can be viewed as a matrix. Perron-Frobenius theory says that \D,\ > ^D,2 a n d that ipD}i{x) > 0 for all x € JD. Since y? is not identically zero, this implies that (tp,il>Dj) ^ 0 for D large enough and consequently:

lim inf - log m(t, x) > Xp i

which gives the desired result since \pti = T+ (AC,Z)) . This completes the proof of the lemma. The remainder of the proof is easier, indeed, the classical variational principle gives:

T+(AC) = sup (Hip,<p)

n SUp -K £ £>(X + ej) - <p{x)? + £ K(«)H*)|2

v>6f2(^"),|H|=i r € ^ « j = i xezn

where we used the notation ej = (0, • • •, 0 ,1 ,0 , • • •, 0) where the 1 is the j-th entry, for the canonical basis of TLn. It follows tha t T + ( K ) is a concave function of K which is continuous (since it is finite) and nonincreasing. The formula r + ( 0 ) = ||V||oo is plain.

Page 63: Parabolic Anderson Problem and Intermittency

54 RENE A. CARMONA AND S.A. MOLCHANOV

Remark If the potential function V is summable, then the multiplication operator by the function V is a compact perturbation of ACA and WeyPs theorem on the perturbation of the essential spectrum (see for example [6] Chapter II) says that the operators H(K) and KA have the same essential spectrum, i.e.

EM,(ff(*)) = S C 5 5 ( K A ) = [-8/cd,0].

In other words, either r+(/c) = 0 or T + (K) is an isolated eigenvalue of finite multiplicity. This implies in particular that r+(/«c) is an analytic function of K on the open domain where it is positive. See for example [18].

Let us come back to the moment equations. We reintroduce the parameter p. and we apply the above results with n = pd. The above lemma and TheoremII.3.2 of the previous chapter imply the existence of all the moment Lyapunov exponents 7 P (K) . Notice that:

ll(K) = r+(H1(K)) = Q (III.7)

because V\(x) = 0. Moreover, the case K = 0 can be solved completely by inspection. Indeed:

7 , (0) = r+(ffp(0)) = Halloo = ^ - ^ I M O ) ,

We used the fact that ||ri||oo = Ti(0).

P> 2. (III.8)

The following convexity properties of the Lyapunov exponents will be of crucial im­portance in the sequel.

7P(«) < 7P+I(«) p ~ p+ 1

p > 0 , K > 0 . (III.9)

This first fact follows immediately from the easy consequence of Holder's inequality:

< u{t,xf > x / p< < ti(f,z)p+1 > 1 / ( P + I ) . B

The second property is hardly more difficult to prove. It reads:

Page 64: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 55

7 p ( K ) < 7 p + f c ( l C ) t 7 p " * ( K ) . p,h,p-h>0,K>0. (111.10)

Indeed, because once more of Holder's inequality, the obvious formula:

u{i}x)p = u(f,x)(p+/l)/2ii(t,x)(p~ / l)/2

implies that, :

< u(t,xY > 2 < < u{t,xy+h > < w(t,x)p-* >

and the rest is plain. |

The following result is an easy consequence of the properties (III.9) and (III. 10) which we just proved. It was derived in [10] from a more involved argument.

Theorem IIL1.2 If JP(K) > 0 for some integer p > 1, then we have:

l M < l l ^ < l ^ < . . . . (111.11) p p+l p + 2 v '

Proof: Notice that 7o(«) = 0 because < ii(t,x)0 > = 1. Also, for every integer p > 1, 7 P ( K ) / P is the slope of the line through the origin and the point (p, JP(K,)/P). Let p be the smallest integer for which 7P (K) > 0. If JP(K)/P = JP+I(K)/(P + 1) then the point (p + 1 , 7 P + I ( K ) ) is on the line through the origin and the point (p, 7p(/c)/j?. But the convexity property (III.10) implies that all the points (n,7n(Ac)) must be, if n > p, above the line through the points (p — 1,0) and (p,7p(/c) and a fortiori above the line through the origin and the point (p,7p(«)/p). This gives a contradiction. The proof is now complete. I

The result above justifies the following definition.

Definition III.1.1 For each K > 0 we let p be the smallest integer for which 7 P (K) > 0. As usual we use the convention that the infimum of the empty set is oo. When p < oo we say that the solution field u(t,x) shows (asymptotic) intermittency of order p and full intermittency when p = 2 (remember that p > 2 because 71 (AC) = 0).

Page 65: Parabolic Anderson Problem and Intermittency

56 RENE A. CARMONA AND S.A. MOLCHANOV

The main concerns of this chapter is to determine in which regime (i.e. for which values of K) the solution u(t,x) exhibits an intermittent behavior in the sense of the above definition.

III.1.2 An Explicitly Solvable Model

Throughout this subsection we restrict ourselves to the particular case of a space-time white noise. In particular, we assume that the space component of the covariance is also given by a delta function, i.e.

r i (x) = «0(x), x£Xd (111.12)

A space-time white noise is usually difficult to handle in the continuous case of HV*, but, because we are considering the lattice 2Z , this assumption is a simplifying factor. It implies for example that the Brownian motion processes {Ct{x)] t > 0} are independent for different x £ TL . Moreover, as if the space-time assumption was not restrictive enough, we limit ourselves to the second moment corresponding to p = 2. In our defense we shall see later that, because of the inequalities which we proved above, it will be possible to deduce many results from the detailed knowledge gained in this subsection.

In fact, the contents of this subsection are of crucial importance for the understanding of the intermittency phenomenon in general.

We first say a few words about the probabilistic approach. Its starting point is the formula:

7 2 ( K ) = Urn ilogE (0 io){e/o W ^ - ^ V * } .

It is easy to see that the process Xt = X\\ ' — X^ ' is a continuous time random walk on the lattice TL . It starts from the origin and its generator is 2KA. Consequently, one shall use the formula:

72(«) = lim \ logE0{e/o r > ^ ) d s } . (III. 13)

for the investigation of the properties of 72(AC). In other words, 72(«) is the supremum of the spectrum of the (deterministic) Schrodinger operator:

H2 = 2«A + ri.

Page 66: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 57

It is possible to rewrite 72(«) as a limit of expectations over the paths of the standard random walk with generator A, say Xs. This is done via a simple scaling argument.

7 2 ( K ) = £ m j l o g E o { e ^ / o ri(x<)ds). (111.14)

Computations are possible in the particular case of a space white noise for which Ti(^ — y) = ^o(z — y)- Indeed J0 r i (X s ) ds is the time spent at the origin before time t and it can be written as the sum of a random number of independent random variables. It is then possible from there to derive some of the results which we prove below. They depend upon the dimension d. In the probabilistic approach the dependence upon the dimension comes from the transience-recurrence properties of the random walk. But the analytic approach (based on the spectral analysis of the operator #2) is simpler and we shall stick to it from now on. In this approach 72(«) is viewed as the supremum of the spectrum of H<i- Ti is a compact perturbation of the operator 2KA and consequently, H2 and 2/cA have the same essential spectrum (see for example [18] or [6]). Since E ( 2 K A ) = [—8d/c,0], either 72(«) = 0 or #2 has positive eigenvalues (of finite multiplicity). We make explicit computations in the particular case of a space white noise for which Ti(x — y) = 8Q(X — y). In this case Ti is a rank one perturbation of 2KA and there is at most one such positive eigenvalue. Such an eigenvalue A exists if there is an element / £ t{TL ) such that # 2 / = A/. Because we are interested in the supremum of the spectrum, one can assume without any loss of generality that / is positive. For convenience we normalize / in such a way that /(0) = 1. In this case Fif = Ti and # 2 / = A/ can be rewritten in the form:

/ = (-2/cA + A)- 1r i

which implies:

l = ((50j(-»2«A + A)-160).

Using Plancherel formula one gets:

(27c)d Js* 2K$(y>) + A

where $(<p) = 2 ^ f = 1 ( l — cosy?t) is the symbol (in Fourier domain) of the (nega­tive) discrete Laplacian —A and where <p = (y>i, • • •, (fd) is a generic element of the cf-dimensional torus S . Consequently, the supremum of the spectrum of H2 is a nonnegative solution of the equation:

Page 67: Parabolic Anderson Problem and Intermittency

58 RENE A. CARMONA AND S.A. MOLCHANOV

_ 1 r d (2w)d Js* ®(<p) +

d<p AO/(2K) '

(111.15)

Theorem III.1.3 The spectrum £(#2) of #2 satisfies the following alternative: a) Either £ ( # 2 ) = [-8<te,0]. b) or £(#2) = [—8dK,0] U {Ao(x)}> where \Q{X) > 0 is the unique positive solution of equation (III. 15). Equation (HI. 15) has a positive solution for any K > 0 if d = 1,2 and for K < K^^T if d > 3. The critical value «2,cr is defined by:

K2,cr " (2w)* L dip

(2w)d Js* 2${<p)

In particular, alternative a) occurs only when K > «2,cr o,nd d > 3.

Proof: Let us define the function a c-> /(a) by the formula:

I [ d<p / ( a ) = (2^p 7s* a + <%>)*

The variations of /(a) are sketched in the following figure.

Figure III. 1.1 Variations of t i e function a <-> 7(a).

Page 68: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 59

Notice that:

1(a) ~ — as a —• oo a

in all dimensions d > 1, that:

in dimension d = 1, that:

in dimension d = 2 while:

/(a) ~ -7= as a \ 0 yja

1(a) ~ log - as a \ 0 a

«cr = j/CO) < OO

in dimension d > 3. The proof is now complete. |

Notice also that:

1(a) = 1(0)-cy/a +0(a) as a\0

in dimension d = 3, that:

1(a) = 7(0) - ca log - + 0(a) as a \ 0 a

in dimension d = 4 and finally that:

/(a) = 7(0) -ca + 0(a2) as o \ 0

in dimension d > 5. This information can be translated into properties of the Lyapunov exponent 72(K)- For example one gets:

72(«)

72 («)

72 (K)

c 1 ~ —, for K —•00 and d = 1 1

K 1 ~ 2/ce~1/ (2K) , f o r K - 4 0 0 and d = 2 1

= 0, for K > KCr = / ( 0 ) / 2 and d > 3. 1

Page 69: Parabolic Anderson Problem and Intermittency

60 RENE A. CARMONA AND S.A. MOLCHANOV

It is not always possible to determine the behavior of 7P(/c) for K /* KPtCr. The particular case of of the second moment for the space-time white noise model is one of the race instances in which one can actually determine this behavior explicitly. For this reason we summarize the results we proved above on the approach to the critical value in a box:

7 2 ( K ) ~ (K — KCr) > for * /* Kcr and d = 3

In 72(K) ~ {^cr — «) In , for K f1 Kcr and d = 4

7 2 ( K ) ~ K2,cr — «, for K / * Kcr and d = 5.

These results are summarized in the following figure.

J rx(0)/2

J

ri(0)/2

* Ti(0)l2

k

\ d=1,2

k

V d=3,4

V d25

K«r

• K

K

*".

Figure III . 1.2 Variations of t i e second moment Lyapunov exponent 72(«) as a function of K.

Page 70: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 61

III .1.3 F i r s t General Propert ies

Notice that a simple scaling argument gives:

7p(/c) = r+(«A + Vp) = «r+(A + -Vp). (111.16)

We shall use this formula many times in the sequel. We first prove an elementary lower bound.

Proposi t ion 111,1.4 For any d > 1 and any p>2 one has:

^ > fc^CO) - 2dn) (111.17) P \ 2 / +

Proo/: The proof of this lower bound is very easy, and as a consequence one might suspect that a finer estimate may be proved with more work. If one uses Proposition (II.3.1) expressing the moment m(f,x) in terms of an expectation over the paths of the con­tinuous time random walk on %v , one gets a lower bound by considering only the paths which did not leave the point x before time t. In this way one gets:

m(t,z) = JEx{efov^Xa)ds}

> Bx{e/ov*<*')dj; AT(t) = 0} = etyp^JP0{N(t) = 0}

and this proves the lower bound. •

We now prove a very useful upper bound:

Proposi t ion III. 1.5 If p is of the form p = YA=I PiPi for some /?,- > 1 and pi > 2 satisfying pi ^ pj if i ^ j , then we have:

7P(«) < (P - 1) E ^ y 7 w ( 7 f r K ) (IIL18)

and m particular one has:

Page 71: Parabolic Anderson Problem and Intermittency

62 RENE A. CARMONA AND S.A. MOLCHANOV

7 P ( « ) ^ P ~ 1 s^fe)- <"I19> P _ 2 Vp whenever d > 1 and p is even.

Proo/: Let us consider the partitions J of the set {1, • • • ,p} into /?i groups containing p\ ele­ments, /?2 groups containing p2 elements, . . . and finally (3n elements with pn elements each. We shall use the notation:

{l,...,p} = U?=1U?i1/,-j for such a partition I . The # disjoint sets I{j have exactly pt- elements. Notice that there are exactly:

P = p\ overfall (p2\f2 • • • (pn\fnPiW •••/?«!

partitions of this type. The factors /?i!, #2!, ••• , f3n\ appear in the denominator because we do not consider the order in which the groups occur. Each integer pair i < j is contained in exactly:

p' = p{ +... + fn

such partitions where:

p , = (p-A (p-PiV-\pi-2)fa\)K(p2!)b.-.(pi\)*-*..-(pny^hW..-- (/?; -1)!- ••/?„!

_ pijpi - l)A p ( p - l )

Finally we set:

A, = j ^ (iiuo)

so that:

1=1

Page 72: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 63

With all the above notations we have:

n 0i E f TrixP - X(j))ds = E E A « £ E f WW - xP)ds.

Consequently:

mp(t ,x) = E x

l / P

n n «P[A.- E E /' ri (xp - xP)ds]) i i=i j=i«'<j',«',j'€/,,> ° J

< nEx|f[exp[PA,E E / V ^ - ^ ' V ] } r [,=1 j=ii'<j;i'j'eiijJ0 J

< n n E x j e x p ^ E E fr,(xW-*«'>)*]}

< nEx(eXp[PA,E E f ^(XP - X^)ds}\ «=1 I i=y-i'<3',i'j,ZU,j ° J

where we first used Holder's inequality and second the fact that the factors labeled by i are independent because of the choice of the partitions and finally the fact that the expectations do not depend on the particular partition in J . Let us denote by Rt(x) the quantity in the above right hand side. One can now easily see that:

Hm j log Rt(x) = XiPfajn(^) + • • • + A n P / ? n 7 p n ( ^ )

which completes the proof of the estimate (III.18) if one recalls the definition (HI.20) of the At-'s. Finally, we notice that the estimate (III.19) follows immediately from (III.18) by choosing n = 1, p\ = 2 and /?i = p/2. I

R e m a r k Choosing the constants A,-'s differently it is possible to prove the estimate:

Pi , K

7p{*)<(p-l)^lkj7pi(jrZj)

when as before p = £5=i PiPi> 2 < p\ < p2 < • • • < pn and /?i, • • •, /?n > 1- We shall not prove this estimate because we do not need it in the sequel.

Page 73: Parabolic Anderson Problem and Intermittency

64 RENE A. CARMONA AND S.A. MOLCHANOV

I I I . 1 .4 Smal l K B e h a v i o r of 7P(/c)

We first consider the small K behavior of the Lyapunov exponent. We use classical perturbation theory to get the small K behavior of the largest eigenvalue JP(K) = r+(Hp(K)) of the operator HP(K).

Proposi t ion I I I . 1.6 For every d > 1 and p>l one has:

^ ^ = £ ^ r i ( 0 ) - 2dn + 0{K2) (111.21) P 1

for K —» 0.

Proof: We already saw that 7P(0) = p(p—l)Ti(Q)/2. Moreover we noticed that the maximum ||V^|| of the function Vp(x) is attained only for the value x = 0. and that this maximum is isolated from the other values of V^(x). Consequently, since the operator Hp(0) is the operator of multiplication Vp by the function V^(x), its largest eigenvalue r+( i /p(0)) is equal to the largest value of the function Vp and it has multiplicity 1. In order to control the largest eigenvalue of HP(K) for small K we regard the operator HP(K) = KA + Vp

as a small perturbation of the self-adjoint operator Vp. The family HP(K) is analytic in the sense of Kato and its largest eigenvalue is given by an asymptotic series. See for example the Chapter XII of [19]). We have:

r+(Hp(K)) = r+(Hp{Q)) + K(A6(h60) + O{K2)

= P ( P2"" 1 )r 1(Q) + 2pd/c + Q(/c2)

which is the desired result. I

Notice that the above result gives a simple alternative proof of the lower bound (III. 17). Indeed, the latter follows immediately from the fact that the function K «-*- 7p(«)/p is a nonnegative convex function and that, by the above proposition, its derivative for K = 0 is equal to —2d.

I I I . 1.5 L a r ge K B e h a v i o r of 7P(/c)

We now consider the asymptotic properties of JP{K)/P for large values of K. These properties are dimension dependent. We first consider the case d > 3.

Page 74: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 65

Proposi t ion III.1.7 If p > 2 and d > 3 we have:

7 P (K) = 0, for K> Kd^cr

for some critical constant Kd)P)Cr.

Proof: We use the formula (III. 16) and we consider the properties of the spectral radius r + ( A + K~1VP). We use a simple lemma which is now standard in the path integral approach to the analysis of Schrodinger operators. It is often called Khasminkii's lemma. See for example [21]. In the present context it can be stated in the following way: if V is a nonnegative function on TLn and if X\ is any Markov process in TLn and if:

then:

f°° sup JEX{ / V(Xt)dt} < 1 (111.22)

sup E x { e i o ° ° v ( x ^ } < o o . x€Zn

We apply this result with n = pd and V = K~lVp. Indeed, if Xt denotes the continuous time random walk in 7Lpd with diffusion coefficient K = 1, then we have:

E A r wP(xt)\dt} < E n w . * P ) { r |ri(**o) - x*k))idt]

£ E(*,-**){/ iri(yt)i<ft} l<j<k<p

where Y* is a continuous time random walk in TLP with diffusion coefficient K = 2. Consequently:

E , { / ° ° Kp(Jf«)*> < ^ p ^ ^ p [GirxlK*)

where G(x,y) denotes the Green's kernel of the continuous time random walk in 2Zrf

with diffusion coefficient K = 2. Since Ti is assumed to be integrable,

Page 75: Parabolic Anderson Problem and Intermittency

66 RENE A. CARMONA AND S.A. MOLCHANOV

[G|riD(*)= £ G(x,y)Ti(y) y£Zd

is a bounded function of x € 2£d if d > 3. Consequently, if d > 3, the assumption (111.22) is satisfied for K large enough, say for K > K<f)P|Cr. Under these conditions, Khasminskii's lemma implies that:

7P (K) = Kr+(A + K-1Vp)

= « U m ± log ^ { e " _ 1 £ > ' < * > ! *>

= 0

if K > Kd}p,cr' We choose Kd,p,cr t° be the smallest value of K for which 7 P ( ^ ) — 0 and,

by convexity, we have automatically 7 P (K) > 0 whenever K < K(f,p,Cr- •

Hence, when d > 3, the moment Lyapunov exponents 7 P (K) of order p > 2 vanish for large K and the strict inequalities (III. 11) giving the full intermittency are not satisfied. In other words, intermittency holds only for small values of K.

We now consider the cases d = 1 and d = 2. The goal is to prove that the solutions of the parabolic Anderson problem are asymptotically intermittent for all the vales of the diffusion constant K. This will be a consequence of the following result.

Proposi t ion III.1.8 If p > 2 and K —• oo we have:

7 P ( * ) ~ — , for d=l AC

and:

l n 7 p ( K ) x ^ , for d = 2

for some constants cp and dp.

We use the notation ap x bp to mean that 0 < c\ < ap/bp < C2 < oo for some constants c\ and C2 when p is large enough.

Page 76: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 67

Because of the convexity of 71 (K) = r+(n) as a function of n, we need only to consider large values of K. In particular, full intermittency does follow from the proof that r+(n) > 0 for K > 1. Equivalently, we consider the eigenvalue problem:

AV> = eri(a;)V> = Exp (111.23)

and we try to prove that it has a solution E = E(e) > 0 for some strictly positive V> G £2CEd). We set e = 1/K and E = A/e where we use the notation A for the energy of the original eigenvalue problem for the operator H = H(K). The goal is thus to prove that E(e) > 0 for e <C 1. We first assume that the covariance Ti has finite support, i.e. r\(x) — 0 if \x\ > R for some positive number R. If we compute the Fourier transform of both sides of equation (HI.23) we get:

-H<p)Hip) + e J2 Y^e^^x) = EJ>(<p) \x\<R

or in other words:

\x\<R ^ W)

where, as before we used the notation:

d

*(¥>) = 2£( l - cos tx7 t - ) 1=1

for ip = (<pi, • • •, ipd) £ Sd and d = 1 or d = 2. Consequently, taking inverse Fourier transform we get:

*w = pip £/•<»> (/„ i ^ * ) •(»). w s «. and for the eigenvalue E = £"(e) we have the dispersion relation:

det ([6x%y - e /(£ , x - y)Y1(y)]Xiy) = 0

where:

1 / e**v

Page 77: Parabolic Anderson Problem and Intermittency

68 RENE A. CARMONA AND S.A. MOLCHANOV

Let us remark that:

and that we have in fact:

lim I{E,z) = 0 E-+00

and:

I(E, z)~-j= for d = 1

I(Eiz)~c\og-= for d = 2. E

These facts follow from the expansion:

{27r)d Js* E + $(<p) where:

to'J-^-TSF/*1^* - E ^ 1 /* 1 — cos(z^)

(27r)rf 7S<* E$(<p) Let us denote by xo, . . . , xn the points x of the lattice satisfying |x| < R. Then the determinant A can be rewritten in the form:

A = det ([6ij - (£/(£?, 0) - cI(E, x; - ^ ) ) r 1 (x t ) ] i j ) .

Let us add all the rows with numbers 0, • • • ,n — 1 to the n-th row. We get:

An = (i-6/(E,o)^r1(xfc) + 6^/(^x^xo)ri(xfc) + ...) V k=0 k=Q )

n = 1-c/(£?, 0)5^ri(xib) + 0 ( e ) .

Ar=0

and the determinant can be shown to satisfy:

n A = l-eI(E,0)J2ri(zk) + O(e).

k=o Consequently, if:

Page 78: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 69

2^(0) = J2 ri(xfc) > o, k=0

then the equation A = 0, (at least for d = 1 and d = 2) takes the asymptotic form:

1 _ c f l ( 0 ) - ^ + O(-^) + O(e) = 0

which gives:

2J1

for d = 1 and:

which gives:

and consequently:

E = E(e) ~ cTiiOye

1 - cet, (0) log | + 0(e log 1 ) = 0

l o g -^ erx(0)

£7 = E(e) - e " ^ 1 ^ )

for d = 2 provided this last equivalent sign is understood in terms of logarithmic equivalents. Notice that here and throughout this work we use the same letter c for a generic constant the value of which can change from line to line, but which does not depend upon the relevant parameters of the asymptotic in question.

We now consider the more complicated case:

n

27rf1(0) = ^ r i ( ^ ) = 0-

In this case, the last row of the determinant has the form:

&n= (l - eJ2 I{E,xk - xn^xk)] . \ k=0 J

Page 79: Parabolic Anderson Problem and Intermittency

70 RENE A. CARMONA AND S.A. MOLCHANOV

Now, for k = 0, • • • , n — 1 we multiply this last row A n by eI(E,0)Vi(xit) and we subtract the result from Ao, • • •, An_i respectively, then we get:

A = det ([6,j - e'l(E,Xi - Xj^Xi) + e ^ J E O J ^ O ) ^ * , ) ] , - , . , )

provided we set:

Hj{E) = J2i{E,zk-zj).

It follows easily that equation A = 0 can only be satisfied if I(E, 0) = 0 ( l / e 2 ) and in this case one has:

A = det {[Sij + e2#,(£)/(£,(OrxCz.Oki) + 0(e).

Notice that:

HAE) = -l-iyr1(xk)[ 1 -cos{xk-*i)ip d<p

1 / A ri(afc)e^-«i)» {2n)d Js*f^ E + $(<p 9

{2*)d Is* E + $(<p * and if we use the fact that:

n

det ([8ij + aibj]ij) = 1 + ^2 akbk

then we get:

(27r) foJsd E + Hf) 1 r r\(y)2

\2n)dJs*E + *(p) '-"''^J^LTTW)^0^ Notice that:

Page 80: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 71

l™ / S 1 ^ , x dip = / ^ ^ - r i f x j b ) cfy> = a ( r - 1) E\oJs*E + Q{<p) r Js* *(*>) ; V

and consequently the equation A = 0 has (asymptotically) the form;

c2

which gives:

for d = 1 and:

which gives:

1 = c a l r , ) ^

E = £(e) ~ ce4a(ri)2

1 = c a ^ )e2 l o g - |

£7 = E(e) ~ e - c / ^ r i ) £ 2 )

for d = 2 where (as before) this last equivalent has to be understood in the sense of the logarithmic equivalents.

Remarks: The above proof concerns the asymptotic e \ 0 when the size of the support, i.e. R is fixed. On the other hand, the fact that the function Ti is a correlation function was not used. We only used the fact that the function I \ has finite support and that fi(0) > 0.

We now consider the general case:

£ |rx(x)| < oo. (111.24) x£Zd

We shall still assume that:

£ 1*1 (*) > 0. (111.25)

Page 81: Parabolic Anderson Problem and Intermittency

72 RENE A. CARMONA AND S.A. MOLCHANOV

Lemma III.1.9 Under condition (IV. 18) one has:

E(e) < ce2

for e small enough as long as d = 1 or d = 2.

Proof: Using Holder's inequality we get:

m(t) = E0{ec/o iri(*OI*}

< n m0{eelSfo6^x^ds}\r^)\/s

xezd

provided we set S = Y^X£Zd l^i(x)l- If d = 1 we have:

and this implies that:

E(e) < ce2 ( £ | r x(«) | ] .

If d = 2 a similar computation gives:

E ( e ) < e - c / ( ( ^ 6 ^ l r i ( l ) l ) 2 £ ) .

We summarize the above results in the following:

The solution of the Anderson's parabolic problem is asymptotically intermittent if:

d = 1,2 or (d > 3 and K < K2,cr)-

Page 82: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 73

IIL1.6 A s y m p t o t i c Behavior of the Critical Diffusion Constant

It is possible to use formulas (III. 17) and (HI. 19) to get upper and lower bounds on the critical value

Kp,cr of the diffusion constant in dimension d > 3. One easily gets!

^ < «P,cr < (2[|] - l)K2lCr.

In other words:

Kp,cr - P , P - > OO.

In fact, it is possible to prove a sharper result, namely that KPiCr/p converges to a finite limit. First we notice that Proposition III. 1.5 can be used to get more precise estimations of the critical values of the diffusion constant. Indeed, as a straightforward consequence one gets:

Proposition III.l.lO If p is of the form p = Y%=i PiPi for some ft > 1 and pi > 2 satisfying pi ^ pj if i ^ j> then we have:

KP}Cr < m a x 7«Pi,cr- (III.26) » Pi - 1

It follows from (111.18) and (111.26) that (for d > 3)

But we also have:

Proposition III.1.11 Let us define the numbers by the formula:

Then one has:

K = l iminf TKVCT-P-KX> p _ l y'

l i m -KPyCr = * • p—•oo p F*

Proo/: Let € > 0 be arbitrarily small and let us choose po such that:

Page 83: Parabolic Anderson Problem and Intermittency

74 RENE A. CARMONA AND S.A. MOLCHANOV

K Po >cr ~ __ p o - 1

< K + 6.

Notice that:

implies that:

and

7PO-I(*) < 7 P O ( « )

KPQ-I(K) < 7po(*) Po - 1 ~ po = 1

7 P O , I ( K ) 7PO-I (* ) < 7PO(*)

Po - 2 (po - 2)(p0 - 1) ~ po = 1'

Consequently, if one chooses a > 0 so that a < po^, then one has:

Po - 2 po - 1

We now choose pi large enough, and since any integer p > pi can be written in the form:

P = PiPi + AKPo - 1)

for some /?i,/?2 > 1> then one has:

hmsup = hmsup — < K. p—KX> P "" 1 p—KX> p

which completes the proof. I

In other words, the critical values KPiCr increase linearly (at least in the asymptotic regime p —• oo) as a function of the order p of the moment. Notice that this p is also the number of particles in the deterministic Schrodinger operator whose supremum of the spectrum is the moment Lyapunov exponent. This leads to the following interpre­tation of the constant l/7c: it is the minimal coupling energy (for large clusters) which guarantees the stability of the cluster under heat (diffusive) perturbations.

Page 84: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 75

I I I . 1.7 S u m m a r y

The results derived above are summarized in the figures 1 and 2 below.

i

(p-i)r(0)/2

r(0)

r(0)/2

i

Vf>(K)/p

L Y3(K)/3 \ .

— •

Figure III .1.3 Plot ofjp(n)/p ELS a function of K in dimensions d = 1 and d = 2.

Page 85: Parabolic Anderson Problem and Intermittency

76 RENE A. CARMONA AND S.A. MOLCHANOV

J

(p-i)r(0)/2

r(0)"

r(0)/2

i

*>(K)/P

1 V3(K)/3 \ ^

k 7^0/2

1 X2/cr X3pr K3cr

^^

Figure III.1.4 Plot ofyp(n)/p as a function of K in dimensions d > 3.

Consequently we have:

o i f d = l o r d = 2, full intermittency o if d > 3

full intermittency for K in [0,K2,cr) intermittency for the moments of order p > 2 for K in [/C2,cr> K3,cr)>

moments of order greater than or equal to p for K in [KVICT, Kp+i,Cr)-

The dichotomy between the case d = 1,2 and d > 3 and small K on one side and the case d > 3 and large K on the other one, is typical of the predictions of Anderson's localization theory in disordered media. See [6] for example.

Notice that, in the case of Stratonovich equation, all the graphs appearing in Figure 1 and Figure 2 have to be shifted in the vertical directions by r i (0) /2. In particular 7i(K) = r!(0)/2.

Page 86: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 77

IIL2 The Case of Finite Correlation Length

We now consider the case where the field Ct(x) ls differentiable in time. Our interest concentrate on the time derivative &(x) which is a mean zero homogeneous Gaussian field the covariance of which is a tensor product:

r((«,*)>(*,y)) = r0(«-*)r1(ar-y), M > o , x,ye%d.

One usually assumes that the temporal part of the covariance is an integrable function. Throughout this section we shall need the following stronger condition:

/

+oo \T\TX{T) dr < oo. (111.27)

-oo

The existence of the moment Lyapunov exponents is more difficult than in the case of the potential of the white noise type. This existence can be proved in the case of the problem (1.5) as well as in the case of the problem (1.20). Of course, the functional form of the Lyapunov exponent will depend on the particular form of the equation, but the existence will not.

We choose to work with the equation (1.20) for the sake of definiteness. The Feynman-Kac representation of the moments of the solution takes the form:

»#>(*, .1 , - ,* , ) = E ^ ^ i e ^ o / ^ ^ - ^ E ^ r . ^ - ^ ^ ^ ^ ( n L 2 8 )

where the X\ are for i = 1,2, •••,£> independent standard continuous time random walks on 7L with generator A. Recall that, when the random potential is a white noise in time, a duality argument can be used to get:

Under these conditions, one can get the variations of 7p(o-) from the variations of 7 P ( K ) . The following picture gives these variations:

Page 87: Parabolic Anderson Problem and Intermittency

78 RENE A. CARMONA AND S.A. MOLCHANOV

Figure III.2.1 Variations of jp(cr)/p in dimensions d > 3 when the potential is a white noise in time.

The main result of this section is the following.

Theorem III.2.1 For any integer p > 1 and for any a > 0 the limit:

j (a) = lim - In mp(ty xx, • • •, xp) r t—+oo t

exists and defines a nonnegative increasing continuous convex function 7p(cr). More­over, 7p(0) = 0 and:

lim ^ ^ = 1^(0)1*1(0)?:-. (111.29)

Notice that Jp(cr)/p x p/2 and not (p— l ) /2 as one should expect from the asymptotic of 7 P (K) for AC —* 0 which we gave in the previous section. What could be considered as a contradiction is due to the fact that we worked with Ito's form of the stochastic integral equations and that in the Stratonovich case we have an additional shift 1/2 and £fi + ! = §!!!

Page 88: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 79

I I I . 2 . 1 E x i s t e n c e of 7p(<r)

Let us fix a large constant T > 0 and let us set t = kT for some large integer k > 1. It is easy to see that:

a * ft I I r0(ti -13) £ riC t? - *«?) d*id*2

7o Jo ,.~ „ * rhT rhT P ... ...

= 2 £ I iw 1 iff Fo(<1 " h) E r i ( ^ " ** > dhdh

+ RP(T,k)

where the remainder term i2p(T, k) can be easily controlled by:

* /»oo /»oo

|J2p(T,*)| < 2k- / r 0 (< i -<2 )p 2 r i ( o )d f i* 2 -6 JO JO

< rx(o)*V/ kl|r0(r)|dr < fee

if we set 0 = Ti(0)crp2 / |r | |ro('r)| dr. Consequently, one has:

e-ME { ef Et-a /<*-X,TCi)Tro(«t-*»)IXi-i W ? - * ^ ) <"><"* < m p ( t , i 1 , - - - ,x p )

< e + M E{e*£*-» C - ^ r C D r r o ^ - ' ^ E U r , (X«-J r«) A W ^ 3 Q )

In order to estimate the above expectation we introduce the following symmetric kernel. For each x = (21, • • •, xp) € TLvd and y = (yi, • • •, yp) £ 7Lpd we set:

«r(x,y) = E,{«,(XT)ef J i ' ^ r o f e - D E U ^ ^ - ^ ) * ^ } Notice that:

0<<zr(x,y) < e*IPx{tf(r) > |x - y|}

Page 89: Parabolic Anderson Problem and Intermittency

80 RENE A. CARMONA AND S.A. MOLCHANOV

provided |x — y| > T2. Indeed:

W{N(T)>n} < Me~an{eaN^}

= i n f e - ^ e ^ 0 - 1 ) a

_ e - n log n+n log T-T

provided n >T2. This implies that:

J2 <Zr(x,y)<oo

and consequently that the kernel qj defines an operator QT which is bounded on the space ^°°(2Zp(i). Using the Markov property, the bound (III.30) can now be rewritten in the form:

c" w [Qr l ] (x) < mP{t,x) < eke[Q$l](x). (111.31)

The following result is a straightforward generalization to infinite dimensional spaces of a classical result of the so-called Perron-Frobenius theory of matrices.

Lemma III.2.2 If a nonnegative self-adjoint operator, say Q, on ^°°(^m) is given by a positive kernel q(x)y), then for any non negative and nonidentically zero function rp(x) on -2/™ and any x G Z"1 one has:

lim j\og[Qk4>]{x) = r+(Q)

where we use as before the notation r + (Q) for the spectral radius of the operator Q.

With this in mind we get on one hand:

w ; / < lim inf - log mp(t, x) < lim sup 7 log mp(ty x) < W , l ' (111.32)

but on the other hand we also have:

e-^r1{o)fol^r^-t^dt^ < mp(t ,x) < e2p 2 r i ( 0 ) /o7o iro(ti-t2)l*i*a

Page 90: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 81

which in turn implies that:

- f p 2 r i ( 0 ) | | r | | r o ( r ) | d r

< lim inf - log mp(t, x)

< lim sup - log mp(t, x) t->oo i

< + ^ P 2 r i ( 0 ) / | r | | r o ( r ) | d r .

This implies that the limsup and liminf appearing in (HI.32) are finite. Since their difference is smaller than 9/T one gets that the limits:

lim - log mp(t, x) = lim T

exist and are equal to a nonnegative number which we shall denote by 7p(<r). Moreover the second estimate implies that JP((T) = 0(cr) a s c - * oo. Let us now prove the last claim. Notice that:

logmp(t>x) = logEx{e<ri4t}

if we set:

2 Jo Jo . j ^ This implies that log mp(t ,x) is a convex function of a. Indeed:

da & x ' E x { e ^ « }

and: - ^ l o e F f,-"H - ^Ae^}^{Ale^}-JEx{Ate"A^ da* g xi ' " E,{e^*}a

by Schwarz inequality. Since the limit of a locally uniformly bounded family of convex functions is a continuous convex function we conclude that the limit JP(<T) is also a continuous convex function of a. Moreover, the main estimate of the existence proof implies that:

7 (AT) x cpa

Page 91: Parabolic Anderson Problem and Intermittency

82 RENE A. CARMONA AND S.A. MOLCHANOV

for some positive constant cp because we saw that lp(cr) = 0(cr). Finally we remark that:

7P(0) = 0.

IIL2.2 Est imat ion of lp(cr)/p

The purpose of this subsection is to prove the following estimate:

Proposition III.2.3 There exist positive constants c* such that for each integer p > 1 and all a > 0 one has:

- z 7P(g) . + 2 a < — < <r ^ c' ap.

The most important consequence of this estimation (compare with Figure 3) is the positivity of 7p(cr) for all a > 0 and p > 1 in any dimension d > 1.

Proo/; The upper bound is trivial. Indeed, the obvious bound:

mp(t,x)<e^2r>WfF°Mdt

implies immediately that:

<\^(o)J\rQ(t)\dtp<T. p

The derivation of the lower bound is more involved. Jensen's inequality gives:

rt ri P

(111.33)

m. , ( < , x ) > e x p [ | j f J T0(h-t2) £ E « { r 1 ( x g ) - x g ' ) ) } c f t 1 A 2 ]

and the problem reduces to the estimation of the expectation:

Notice that for every element <p = (<£>i, • • • , >d) of the d-dimensional torus Sd = [—7r,+7r)d one has:

Page 92: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 83

provided we set: d

*(*>) = £ ( ! - « * p*)-

Consequently, for i ^ j we have:

J 5 d

which is independent of i and jf. i(cfy>) denotes the spectral measure of the spatial part of the potential field (i.e. vi(dip) is the Fourier transform of the nonnegative definite function Y\ on the lattice 7Ld). Now if we assume that this spectral measure is continuous and if we set:

H1(t)= I e~2i*M dui{(p) (111.34) Jsd

then H\{t) —* 0 as t —+ oo and:

rt rt [ I To{ti-t2)H1{tl+t2)dt1dt2

Jo Jo

which is bounded in t as t —• oo. We now consider the case i = j . A similar calculation gives:

JSd

= Hx[\tx-t2\)

which is independent of i. Moreover, as t —* oo one has:

rt rt n r+oo

/ / r 0 (* i -< 2 ) f f i ( |* i -<a | )* i t fa ~ U I °° YQ{r)Ex{\r\)dr JO Jo * J-oo

by Parseval identity. The definition (III.34) of H\(t) implies that:

Page 93: Parabolic Anderson Problem and Intermittency

84 RENE A. CARMONA AND S.A. MOLCHANOV

which gives:

J J Mh -i2)H1(\i1 -i2\)dt1dt2 ~ c ^ where the constant c is given by:

Is* W&y+l? J-oo Js* 4$(CP)2 + A2

Consequently, as t —• oo one has:

log mp(t, x) > -p(p - l)O(t) + pc—-Z Z7T*

which concludes the proof. I

The above proof, together with the existence proof, contains more information than the statements of the results given above. Indeed, if one sets:

then the above proofs actually show that:

Theorem III.2.4 For any integer p > 1 it holds that:

1VW) = Tlim T^V) T— oo

and moreover:

T for some positive constant C independent of p.

l7»-7fV)l<C^

Important Remark There is an essential difference between the white noise case and the finite correlation case, especially in dimensions d > 3. Indeed, 7p(cr) = 0 for 0 < a < l/KPyCr (recall

Page 94: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 85

Figure 3) in the white noise case. But if in the finite correlation time case, then 7p(<j) > 0 for all a > 0 and all p > 1.

It is possible to obtain upper and lower estimates for 7p(<r) in the case e > 0.

Proposi t ion III.2.5 Ifd= 1,2 then we have full intermittency for all a > 0 since:

0 < * ( * ) < ^ < - .

If d > 3 we have full intermittency for all a > a2yCr for some critical value (T2,cr > 0 of the coupling constant and:

T x O r ) - 2 ^ , 0<cr<(7 2 , c r .

The proof of this result is much more difficult than in the 6-correlated case.

We close this section with a natural conjecture:

there should exist a decreasing sequence 0"2,cr > &z,cr > • • • > 0>|Cr > • • • which converges to 0, i.e. such that aPiCr —* 0 when p —• oo, and such that we have the behavior given in Figure 4

which is a natural generalization of the phenomenon contained in Figure 3 for the 6-correlated case).

Page 95: Parabolic Anderson Problem and Intermittency

86 RENE A. CARMONA AND S.A. MOLCHANOV

i k

Oj/r

K>(oM>

G3/r

/ Y»(o)a/

1

Gift

V(a>2 /

/ Vi(a)

0

Figure III.2.2 Variations of ip(cr)/p in dimensions d > 3 in the case of finite time correlation.

III .2.3 Continuity Resul ts

We now come back to the dependence upon epsilon of the properties of the moment Lyapunov and we try to prove continuity results for them. So we assume that:

T^(s,t,x,y) = -T0(^—W*,l/) € €

for some small parameter e which we let go to 0. It is possible to give asymptotic formulas for 7p(cr) in the regime e < l . Generalizing a notation already introduced we define the operator Qp 'c) on £2(JL ) by its kernel:

g<r*>(x,y) = E0{e*/o /o r o f o - t r f l W , - * „ ) * , * , , x% = y}pr{x>y)

where we used the notation pt(x>y) for the transition density of the standard con­tinuous time random walk on TLd. We denote by Xp 'c' the largest eigenvalue of the operator Qp . The latter is positive (and hence isolated from the rest of the spec­trum) for all cr > 0 in dimensions d = 1 and d = 2 and for cr > l/KPiCr if d > 3. Moreover:

Page 96: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 87

Consequently, one can use classical perturbation theory of the discrete spectrum to get:

T h e o r e m III.2.6 One has:

lim7icV) = 7P(M) e\oip

for all a > 0 if d = 1,2 and for a > l/KP}Cr if d > 3. We use the notation e = 0 for the case of a random potential which is a white noise in time. Moreover, one can write an asymptotic expansion:

7{P

c)(<r) = ¥;K<r, 0) + eCl(p, a) + e2c2(p, a) + • (111.35)

We do not give the details of the proof. They are very similar to those of the proofs of Theorem III.2.1 and Theorem III.2.4. As we saw earlier, the coefficients ct(p, a) of the above expansion are given in terms of the eigenvalues and the eigenfunctions of the corresponding p-particle Schrodinger operator with pairwise potential Ti(xi — £2). It is interesting to notice that these eigenvalues are on degenerated for a > l/KPfCr\ The expansion (III.35) is easier in the case p = 1. Indeed, in this case, the coefficients are given in terms of the moments of spectral measure of To-

III.2.4 Lyapunov Exponents as Functions of /c

As we already explained, there is no simple formula linking the Lyapunov exponents 7P(AC) and 7P(<x) of the equations:

du du ,— — = KAu + £t(x)u and — = Au +y/a^t(x)u.

Separate proofs are necessary to derive the existence and the properties of 7 P (K) when the time covariance is not a delta function. Recall that in this case one has:

mp(t, 0) = Eo{e^ /o So ^^)i^>J=1 W ^ > ) ^ * ( m M )

are independent nearest nei generators KA. The existence of the limit: where the Xs are independent nearest neighbor continuous time random walks with

Page 97: Parabolic Anderson Problem and Intermittency

88 RENE A. CARMONA AND S.A. MOLCHANOV

7P(/c) = tlirn - logm p (0)

is proved basically in the same way as for 7p(c). We do not repeat the argument. The upper bound is also similar. Indeed:

, ( « > 0 ) < e ^ r i ( o , / - l r « W I *

which implies that:

mv\

^ < c + f (111.37) P 2

for some positive constant c+ depending only on the covariances To and IV Moreover, in the same way as before we prove that:

^ > \ [+0° | r 0 ( r ) |dr / e - ^ M * ! ^ ) p I J-oo JSd

= ^/rt r° ( A ) ^ +24X)^w

which is a positive constant. This proves the lower bound. This implies, among other things that:

> — when K —•oo. p K

Finally we argue the convexity of the function 7 P (K) /K . Using the properties of the continuous time random walk one gets:

(*,0) = y > - w ( H ^ E o { e U 7 o ' r o ( t ^ k=0

oo

= e-2dK*Y,<*k{t)« k=0

where the positive numbers a*(<) do not depend upon AC. Indeed, once the number of jumps is known to be equal to fc, the instants of jumps are independent and uniformly distributed over the interval [0,t], and since the distribution of the sites actually visited

Page 98: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 89

by the random walk are independent of n, the above conditional expectation does not depend upon K. Consequently:

- log mp{t, 0) = -Idpn + -Gt{9)

if one sets K = e and:

Gt(0) = log (J£ak(t)ee

\k=o / Notice that, for each fixed t > 0, Gt{9) is a monotone convex function of 9. Indeed:

and consequently:

l_ _ (Zr=o"fc(')egfe) {J2?=0k2«k(tyk) - (EVLokak{t)eeky dS2 {)~ (mo«k(iyk)2

which is nonnegative by Schwarz inequality. Moreover equality holds if and only if ajs(t) = 0 for all k > 1 which is not possible in the present situation. Consequently we proved that:

d2 d2 (\

and that: ^ G ^ = ^ G i o g m ^ ° 0 > o

(j log m„(t, 0)) = -2dpe' + j-QGt{9) _d_ (\ d6

where the second term in the right hand side is positive. Hence we proved that:

7 P (K) = lim logmp(t,0)

is a continuous function of K which can be written in the form:

^ ^ = -2cfc + fc(logK) P

where the first and the second derivatives of the function h are positive.

Page 99: Parabolic Anderson Problem and Intermittency

90 RENE A. CARMONA AND S.A. MOLCHANOV

The following is a summary of what we proved above.

Proposition III.2.7 The moment Lyapunov exponents 7 P (K) exist for all K > 0. They satisfy:

c — <

K 7P(*) <C+P

P ~ 2 for some positive constants c*. Moreover JP(K)/P is a continuous function of K which can be written in the form:

7P(K) = -2dp/c + hp(\n K), where the function hv satisfies:

d dn

HP(K) > 0, and d2

Finally, the function JP(K) is real analytical for small K and it is of the order 0(1/K) when K —• +oo.

We conjecture that the behavior of JP{K)/P is given by the following Figure 5.

i

pa/2

a

a/2

1 YiOO

K2/r

" - ^ ^ [ — i

K»/r — — — •

K

Figure III.2.3 Conjectures variations of 7P(/c)/p in di­mensions d > 3 in the case of finite time correlation. We used the notation a = fo(O)1/2

Page 100: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 91

We can only prove part of the corresponding mathematical statements. Notice also that it is possible to give for K —> 0 an asymptotic expansions for JP(K)

Remark: So far we assumed that a2 = ro(0) > 0. The case To(0) = 0 is not only possible, but it is in some sense typical in many physical applications. It is possible to prove that in this case (for e > 0), 7i(«) > 0 for all 0 < n < oo. But since 71 (0) = 0 the behavior of 71 (K, e) must be of the type given in the Figure 6.

Figure III.2.4 Conjectured variations of 7P(AC)/P in the case of finite time correlation when ro(0) = 0.

Indeed, it is possible to show that 7I(K, e) ~ e/(/c) for some positive function / satis­fying:

/ ( K ) ~ — , K —» OO K anc / ( K ) ~ C'K, K —> 0.

Page 101: Parabolic Anderson Problem and Intermittency

92 RENE A. CARMONA AND S.A. MOLCHANOV

The study of the highest Lyapunov exponent is of particular interest. See [16] for some applications to oceanography.

III.2.5 Another Explicitly Solvable Model

This subsection is devoted to the complete analysis of an explicitly solvable model of a random potential function which is not a white noise in time. The time dependence is simple: the potential is a piecewise constant function of time. More precisely we assume that there exist a sequence {£n; n > 0} of identically distributes mean zero ergodic Gaussian fields and a positive number r for which:

£t(x) = -^tn(x) if nr<t<(n + l)r . (111.38) V r

The factor 1/y/r is chosen for convenience. As usual we denote by T\ the spatial covariance of the potential, i.e.

<Zn{x)Zm{y) > = <§m,nri(x - y), n, m > 0, x,y e *%*.

Throughout this subsection we shall assume that the fields £n are delta correlated in space, namely that:

I*i(x - y) = <5(x - y).

Obviously, r can be interpreted as the time correlation length of the model. We shall prove below that, as announced in the introduction, the relevant parameter is the product ACT.

Let us denote by 5(0, r, x, y) the fundamental solution of the parabolic equation in the interval [0, r]. As usual, the Feynman-Kac formula can be used to express it in terms of the continuous time random walk Xs with generator KA:

9(0, r, x, y) = W.x{6(XT)eK 6><*'>*>.

We shall make use of the following fundamental remark: the fundamental solutions q{nr, (n-fl)r •, •) are independent (over the probability space of the random potential) and identically distributed. Notice that the random potential is not stationary in time since it is piecewise constant. It is nevertheless homogeneous with respect to the time-shifts by integer multiples of r. This last fact, together with the constancy of

Page 102: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 93

the potential over the intervals [nr) [n + l ) r ] implies tha t the first moment m\(t, x) is independent of x, i.e.:

mi( t , x) =< u(t, x) > = mi ( t ) , t > 0

and the limit:

exists and is equal to:

Moreover:

mi

7i («) = Hm T k>g mi (i) I—-KX) t

7i (K) = lim — l o g m i ( n r ) n—KX> n r

= lim — l o g m i ( r ) n

n-+oo n r

= - l o g m i ( r ) . 7"

(r) = <E 0 { e ^J> ( * s ) d *}>

= < E0{exp[J= £ &(*Kr(*)]> >

where:

lt{x)= f8x(Xs)ds Jo

denote the total time spent by the random walk at the site x before time t. Using Fubini's theorem to interchange the two expectations and using the formula for the Laplace transform of a Gaussian random variable we get:

mi (r) = Eo{exp[i : < ( £ &>(*)M*)J >]} \xezd }

= E0{exp[i: £ £T(xf]} xezd

Page 103: Parabolic Anderson Problem and Intermittency

94 RENE A. CARMONA AND S.A. MOLCHANOV

because we assume that the fields £n are delta-correlated in space as well as in time. Obviously one has:

E *r(«) = ' xezd

and consequently:

and:

E w 2 = s

J- V £T(x)a = I. x£2Zd

We shall use the following elementary result, the proof of which we omit.

Lemma III.2.8 If £ is a random variable satisfying 0 < £ < e, then:

E{t} = l + (l + 0(e))E{t}.

This implies that, for r \ 0 one has:

7[T\K) = i l o g E o { e x p [ ^ E ^r(x)2]} T IT *—'

x£2Zd

But:

Eo{^T(x)2} = E 0 { / T f 6x(Xs)6x(Xt)dsdt] Jo Jo

= 2 / / JEo{8x(Xs)6x(Xt)}dsdt J J0<S<t<T

= 2 / / ^(CaOpt-aOcjXjdsdt y J0<S<t<T

= 2 / / ps(0,x)pt-5(0,0)dsdt ^ Jo<s<t<r

Page 104: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 95

where we used the notation p\ (x,y) for the transition probability of the continuous time random walk with generator KA. After summation we get:

EE0{ £ ZT{*)2} = 2 f\r - s)ps(0,0)ds.

Consequently, we proved that, uniformly in the diffusion parameter AC, one has:

7iT)(«) = \ j \ r - s)pa(0,0) ds (1 + 0{r)). T* Jo

Recall that:

where: »<0-0> = < s ? / . . « " , w *

$(varphi) = 2 ^ ( 1 — cos^,) = sumf^ sin t = i

d „;„2 W 2

It follows from (111.39) that:

p,(0,0) = 1 - 2dKs + 0(K2S2)

for AC <C 1, and consequently, in any dimension d > 1:

7iT)(«) = \ - ^ [ s{r - s) ds + 0(T*S>)

and hence:

T OO = £-£"• +O(rV)

(111.39)

(111.40)

So, for r \ 0, not only the continuity property:

7iT)(«) - 7i°V) for AC restricted to any fixed interval [0, ACO], but we also have the same convergence for AC restricted to intervals of the form [0, Ofr""1]. In any case, one should have a different

Page 105: Parabolic Anderson Problem and Intermittency

96 RENE A. CARMONA AND S.A. MOLCHANOV

limiting regime when KT —• oo and K ^> 1/r. In this case, the asymptotic behavior of the first moment Lyapunov exponent depends on the dimension d as we are about to show. Using the Laplace method one to estimate integrals of exponentials (recall equation (III.39)) one sees that:

p'(°'°>~(i^F7I <""»>

for KS >• 1. In this case one has:

7 I( T ) (K) ~ l r 2 [T TPs{0)0)ds

Jo

= — / P?\0,0)d8. TK JO

Consequently:

In the regime KT 1 one has:

7 J T ) ( K ) ~ - 4 = , for d=\, y/KT

(T), , c2log(nT) 7i ' 0 0 j±p , for d = 2,

T i ( T ) ( « ) ~ ( K rC ^ - 2 . for ^ = 1 ,

In either case, the graph of the first moment Lyapunov exponent has the form given in Figure III.2.5 below.

Page 106: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 97

Figure III.2.5 Sketch of the variations of 7} («).

We close with the following important remark. When the random potential £t(x) is delta-correlated in time one has 71 (K) = 1/2 when the fundamental equation is understood in the Stratonovich sense (and the space correlation is also given by the delta function). This corresponds to the case r = 0. But we just proved that, when r > 0, then:

lim J[T)(K) = 1/2

lim J[T\K) = 0 KT/*OO

in all dimension d > 1 !!!

Page 107: Parabolic Anderson Problem and Intermittency
Page 108: Parabolic Anderson Problem and Intermittency

Chapter IV

ALMOST SURE LYAPUNOV EXPONENTS

This chapter is devoted to the analysis of the almost sure asymptotic behavior of the solutions of the fundamental equation (1.1). We first prove the existence of the almost sure Lyapunov exponent when the initial condition UQ(X) is localized. This Lyapunov exponent is shown to be independent of the initial condition and to be bounded from above by the moment Lyapunov exponents. Most of our efforts are devoted to the proof of a sharp asymptotic estimate when the diffusion coefficient K tends to 0.

We assume throughout this chapter tha t the random potential {&(#)} is delta corre­lated in time, or in other words, that the antiderivatives {Ct(x); t > 0} are Brownian motions.

IV. 1 Existence

We now consider the more difficult problem of the existence and the analysis of the almost sure Lyapunov exponents. As before u ( t ,x ) denotes the unique solution of the problem (1.5). Unfortunately, some of the proofs of this chapter require tha t the initial condition uo(t) has compact (i.e. finite) support. We believe tha t all the results still hold for the initial condition uo(t) = 1 but a complete proof is still escaping us. For each 0 < s < t and x, y £ TL we consider the quantity:

99

Page 109: Parabolic Anderson Problem and Intermittency

100 RENE A. CARMONA AND S.A. MOLCHANOV

q(s,x,t,y) = E{ei>"(* ' + ' - Q > d * |X s = * , * , = y}pt-a(x,y). (IV.l)

The above expectation is computed for each (fixed) sample realization of the potential random field Ct(x)- We shall sometime use the special notation:

q(s,x,u,z) = gK>(s,x,ti,z)

to emphasize the dependence of q upon the sample realization of the random field £. The definition of q involves an expectation over the sample paths of the continuous time random walk on TLd when the paths are conditioned to be at x at time s and at y at time t. Recall that the notation p*(x,y) is used for the transition probability of this random walk. Notice that, by definition of the conditional expectation, we have:

J2 us(y) q{8,x}t}y) = ^{ug{Xt)eH^^^'^)da\Xs = x} yezd

which the solution u(tyx) at time i when this solution is equal to us{ • ) at time s. Consequently, g(s,x,tf,y) so-defined can be regarded as the fundamental solution of our initial value parabolic problem. Indeed, when the potential field is actually the derivative of a differentiable function of ty i.e. when £t(x)di = d£t(x), then it is easy to check that the function q(s} x,f, y) is the solution, for each fixed s > 0 and x £ TL of the equation:

M ^ = «A,,(, lilt,»)+6(y)«(«.*l*,»)

with initial condition 9(5, x, 5, y) = 8x(y). Notice that this fundamental solution is random. We are concerned with its almost sure properties. The following "semigroup" property follows easily from the Markov property of the random walk:

Sf<c>(s,z,u,;0= ] T « ( c - + ( « - 0 ) ( s l x > t , y ) g ^ - - ( * - ) > ( * J y J t i > z )

yezd

where we used the notation £. +^ for the function (£. +p)t(x) defined by:

(C •+/?)<(*) = Ct+/K*)-In any case, if x is fixed, one has:

g ( c )(s ,x,u,z) > g(c +("-*))($, x,t,x)g ( c- -<'-•))(*, a:, ti, a:).

Page 110: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 101

and:

« (C)(s,*,ti,*) > qlC){8Jx,t9z)q^(tiZtuty). (IV.2)

whenever s <t < u. Consequently, it is possible to use the stationarity in time of the random field £ and Kingman's subadditive ergodic theorem to conclude that almost surely over the sample path of £, for all x £ TL and s > 0 the limit:

]nq(,,z,t,x) =

t—oo t IK ;

exists. We call this limit the almost sure Lyapunov exponent of the problem. Using the super additivity (IV.2) in several forms, it is easy to check that the limit:

lim 7 lng(s ,x , f ,y )

exists for all s > 0 and all x, y £ TLd and that this limit is still equal to 7 (K) . In other words, this limit does not depend upon the choices of s, x and y. The Lyapunov expo­nent 7(K) as defined corresponds to an initial condition with a support concentrated on a point. It is easy to check that, if the support of the initial condition uo(t) s finite, the limit:

lim - log u(t.x)

still exists and is equal to J(K). We suspect that this is also true for the initial condition u0(x) = 1.

Unfortunately, it is extremely difficult to obtain any analytic expression for J(K). Obviously, 7(0) = 0 and:

7 0 0 < 7 i O O < 7 2 ( * ) / 2 < . - . • In our analysis of the moment Lyapunov exponents for finitely time correlation Gaus­sian fields, we rescaled the time variable and we considered the Lyapunov exponents as functions of both the coupling constant a and of the diffusion constant K. TWO separate analyses were needed because there is no simple relationship between the corresponding exponents. We are now restricting ourselves to the delta correlated case. We show now that it is possible to take advantage of the duality K <-+ a available in this case to analyze the Lyapunov exponents.

The differential equation:

Page 111: Parabolic Anderson Problem and Intermittency

102 RENE A. CARMONA AND S.A. MOLCHANOV

can be rewritten in the form:

du d(iti) = Au+7^^Kt,K{x))u

or equivalently:

J i = Au + - 7 = | # ( x ) u (IV.3) as v «

provided we set s = Kt and we define the new potential process £ by:

£.(x) = ^ . / K ( * ) . (IV.4)

This potential field has the same distribution as as the original one. As we already pointed out, this is one of the crucial differences between the finite time correlation and the delta correlated case considered in this chapter. The above derivation is only formal but it can be made completely rigorous by using the antiderivative C*(x) a n ^ integral equations instead of differential equations. We study the asymptotic behavior (as t —•> oo) of the solution u(t,x) which we write by means of the Feynman-Kac formula:

t i ( t ,x) = JEx{u0{Xt)eafod^x^} (IV.5)

where we used the notation a = 1/A/K a n d where the expectation is now over the paths of the standard (i.e. K = 1) continuous time random walk on the lattice TLd. Notice that , according to the results of Chapter II, the fact that we are working with this Feynman-Kac formula implies that we are considering the solution u(f, x) = v[s'(t}x) of the Stratonovich equation. The proof of the existence of the almost sure Lyapunov exponent which we gave at the beginning of this section can be easily adapted to the present situation. It gives the existence of the limit:

7(cr) = lim - l o g u ( t , x )

and its independence from the initial condition UQ and the point x. The Feynman-Kac formula (IV.5) shows that the solution ti(t, x) is a nice function of a. Indeed, it is easy to compute the derivatives of t i ( t ,z) with respect to a. One sees tha t the second

Page 112: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 103

derivative is nonnegative. This shows that tx(t, x) and hence its limit j(cr) are convex functions of the coupling constant a.

Remark It is important to emphasize that the convexity argument which we gave above applies only to the solutions and the Lyapunov exponents in the Stratonovich sense. Indeed, the solutions in the Ito and Stratonovich sense are related by the formula:

and consequently one has:

and we cannot conclude that 7 (cr) is a convex function of a.

It is in fact possible to show that:

da "~ and this implies that 7(0") is a monotone increasing convex function of a.

The results of the analysis of the almost sure Lyapunov exponent 7(c) as a function of the coupling constant a can be immediately translated into properties of the almost sure Lyapunov exponent J(K) as a function of K because of the duality relation:

7(*) = *7(-/£)

which follows immediately from (IV.3) and which we already encountered in the dis­cussion of the moment Lyapunov exponents. The following theorem gives the small K behavior of the (Stratonovich) almost sure Lyapunov exponent. It is the main technical result of this chapter.

Theorem IV. 1.1 With probability 1 there exist positive constants c\, C2 and k such that:

* < ftK) < C2l0?ljW«\ 0 < K < K.

l o g ( l / K ) - ' W - log(l/K)

Page 113: Parabolic Anderson Problem and Intermittency

104 RENE A. CARMONA AND S.A. MOLCHANOV

The calculations of the proof use the general idea of [26] but it is important to remark that the result of [26] is only true for discrete time. Indeed, the transition from discrete time to continuous time is not as easy as it could be thought of at first. In particular the small K estimates have a different form as we shall see below. The estimate:

l{K) > C \ /K a s K —* 0

from [25] takes the form:

Q

logl/AC

in the present situation of continuous time fields. The proof is rather lengthy and technical. The lower estimate is proved in the next section and the upper estimate is proved in the following one.

In particular, J(K) is a continuous function of K at K = 0. Obviously;

7 ( * ) < r i ( 0 ) / 2 K > 0

because < u(t, x) >= Ti(0)/2. In fact, it is easy to prove that:

J(K) = 1^(0)72 d > 3, K > K2lCr-

Indeed, if K > K2,cr is fixed, then we saw that the second moment of the solution i r ^ t j x ) is bounded while < ti^)(t,x) > = m\ (t,x) = 1. This implies that one can find, for each e > 0 a 6 > 0 for which JP{ti(7)(f,x) > 6} > e for all t > 0. This implies that 7 ( / )(*) > 0. But since

7 ( / )(«) < 0 we can conclude that J^(K) = 0 or equivalently that J(K) = Ti(0)/2. Rather than stating a lengthy proposition, we summarize the results which we proved above (in the case of random potentials &(z) which are white noise in time) about the variations of 7p(tf)/p and of J(K) as functions of K in a couple of figures. The first figure deals with dimensions d = 1 and d = 2. All the properties of 7(/c) given in this figure have been proved except for the fact that 7(/c) < 1/2. Indeed this strict inequality has only been proved for small K. We conjecture that, if d = 1,2, then J(K) < 1/2 for all K > 0 P-almost surely.

Page 114: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 105

Figure IV.1.1 Plot of the almost sure Lyapunov exponent 7P(«)/p as a function of K in dimensions d = 1 and d = 2.

We also conjecture that, in dimensions d > 3, the variations of J(K) are of the form given in the figure below. In contrast with the cases d = 1 and d = 2, all the statements illustrated by this figure are proved in this chapter.

Page 115: Parabolic Anderson Problem and Intermittency

106 RENE A. CARMONA AND S.A. MOLCHANOV

Figure IV.1,2 Plot of the almost sure Lyapunov exponent 7p(«)/p as a function of K in dimensions d > 3.

More precisely, we conjecture that J(K) < 1/2 for K < kc

correspond to what we can actually prove. As above, the thicker lines

IV.2 Proof of the Lower Bound In order to get a lower bound on the almost sure Lyapunov exponent 7 ( K ) , we first prove a lower bound on the solution t/(f,0) where we assume that this solution is computed with the initial condition UQ = 8Q. Our approach is very much in the spirit of the usual proofs of the lower bounds in the theory of large deviations. Let us fix t > 0 momentarily and let us pick an even integer n = 2k to be chosen later as a function oft and K. We get a lower bound on u(t}0) by averaging only over the paths of the continuous time random walk which have exactly n jumps before time t.

Page 116: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 107

u(t,0) = JEo{60{Xt)efod^x*-')}

> E o { 6 o ( ^ ) e / o ^ ( x ' - ) ; N(t) = n}

= e-2dKt^^JEo{60(Xt)efod^x'-^ | N(t) = n}.

Under the condition that the number of jumps before time t is equal to n, the values Ti, . . . , Tn of the instants of jumps are the increasing rearrangements (order statistics) of n independent random variables uniformly distributed over the interval [0,i]. This gives:

t*(*,0) > e-2dKt(2dK)n f f E 0 { 6 0 ( X t ) e £ " = o I C t - ' ^ ^

where D = {to = 0 < t\ < • • • < tn < t n +i = t} and where the expectation Eo is now over the discrete time symmetric nearest neighbor random walk {Xj\ j > 0} on the lattice TLd. We now restrict the possible instants of jumps . In fact, we consider only the case where the actual jumps take place in n small intervals separated by (n — 1) larger intervals without jumps. More precisely, we choose positive numbers p > 0 and r > 0 in such a way that t = (n — l)(p + r) + p. One should think of p as being much smaller than r. Then we set:

Ij = [(j - l)(p + r ) , (j - l)(p +r) + p), j = 1, • • •, n

and we demand tha t the j-th jump occurs in the interval Ij. We now have:

u(i)Q)>e-2dKt(2dK)n j • / E o { « o ( * t ) e ^ > - 0 ^ Jh Jin

We rewrite the above exponent in the following form:

X > - t ; ( * i ) - C«-«i+1 (*j) = Kt(*>) - Ct-«i(*0)]

+ [C«-«,(*l) - Ct-,(*l)] + Kt-p(Xl) - Ct-p-r(^l)] + [C«-,-r(*l) " Ct-t2(*l)] + [G-t2(*2) " 6-2p-r(*2)] + [Ct-2p-r(-^2) — (t-2p-2r(X2)] + [Ct-2p-2r(-^2) — Ct-t3(^2)]

Page 117: Parabolic Anderson Problem and Intermittency

108 RENE A. CARMONA AND S.A. MOLCHANOV

+ • • • + [Ct-tj(Xj) - Ct-(j-l)(p+r)-p(Xj)]

+ [Ct-(j-l)(p+r)-p{Xj) ~ Q-j(p+r){Xj)] + Kt-j(p+r)(Xj) ~ C*-*J+i (Xj)]

+ --- + [Ct-tn(Xn)-Co(X0)]

so that:

where:

Cj(Xj) = Ct-(j-l)(p+r)-p(^j) ~ Ct-j(p+r)P0)> J = 1, • • • , n

and:

T)j(ij,Xj) = [Ct-(j-l)(p+r)(*j) - Ct - i ; (* j ) ]

+ [Ct - t j (* j - i ) ~ Ct-0- i ) (p+r) -p(^ i )] J = li • • • , n .

Consequently

«(*,0) > e-2dKt(2dKP)n f • f JEo{So(Xt) f[ e&<*>> f[ (- f e^'^Ut-)} Jli Jl<* j= i i= i \ ^ Jli J

> e-2dK\2dKp)nJE0{So(Xt) J ] <&{Xi) I I e">(*'>}

if we use Jensen's inequality and if we set:

1i(xj) = ~ / m^vXj^dtj j = 1, • • •, n.

This gives:

tx(t,0) > e - 2 d * ' ( 2 c t e p ) n I E o { 6 o ( * t ) e ^

Notice that , for each fixed path XoX\ • • • Xn • • • of the random walk, the £j(Xj) 's are i.i.d. normal random variable with mean zero and variance pFi(O), tha t the rjj(XjYs are also i.i.d. normal mean zero random variables and tha t the Q(Xj)ys and the

Page 118: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 109

f]ji(Xji)'s are independent since they involve increments of £ over disjoint intervals. The variance of r)j(Xj)'s is given by:

U) ~ "J? < [Jj Kt-ti-l)(p+r){Xj) ~ Ct-tj(Xj)]

+ Kt-tj(Xj-l) - tt-(j-\)(p+r)-p(Xj)]dtj) >

= i p r ^ O ) -T^Xj-Xj-x)].

Finally, we choose a specific path for the (discrete time) random walk instead of integrating over all the possible paths which start from 0 and return to 0 at time t. Here is the path which we choose. Notice that this path depends upon the sample realization of the random environment £. We first assume that N(t) = n, i.e. the path jumps exactly n times before time t, and Xj = XJ for j = 1, • • •, n where the points xi, . . . , xn are constructed inductively in the following way. We set XQ = 0 and for j = 1, • • •, k we choose Xj among the 2d neighbors of £j_i as the site x which maximizes the increment:

Ct-(j-l)(p+r)-p(*) ~ Ct-j(p+r)(*)-

We then set:

* * + l = Z | f e - i , Xjfc+2 = Xfc_2, > * 2 * - l = * l ) *2fc+l = ZO = 0

to force the path to be back at the origin at time t. Consequently we have:

«(«, 0) > e-2dKt(2dKP)n (±Y e^ i - i c > +£;- i ^

where the (j's and the T -'S are the above Q(Xj)ys and ^-(XjJ's for the specific choices Xj = Xj which we made. Note that the probability space of the continuous time random walk has been disposed with: the remaining random variables Q and rjj are defined on the probability space (Q, T) IP) of the random field £. Even though the Xj's were chosen as random variables on fi, Cj and the rjjt are still independent because of the independence of the increments of (>t(x) and of the particular definition of the Xj's. Each £j has the distribution of y/r max{0i, • • •, 02d} where (0\, • • •, 02d) is a mean zero Gaussian in IR with covariance

Page 119: Parabolic Anderson Problem and Intermittency

110 RENE A. CARMONA AND S.A. MOLCHANOV

determined in an obvious way by the values of Ti(z) for \z\ = 1. The Q are independent and identically distributed. On the other hand, the random variables ry's are independent but not identically distributed. Each r)j has the distribution N(0, [2ri(0) — TI(XJ — X J _ I ) ] / 3 ) for the appropriate choice of the XJ'S which we described above.

Now, for any 7 > 0 and z > 0 one has:

J P M * , 0 ) < c 7 t } < P { e n = i ^ + S ; = i ^ <(Kp)-ne(7+2dK)*j

< p l e ^ E ^ i ^ - * ^ ! ^ > (Kp)^c-*(7+2rfic)tj

n

< («p)-*nc*^+2^)*E{c-*c> }n J ] E{e"27?>} (IV.6) i=i

The conclusion of the proof of the lower bound depends upon the following Cramer type estimate. This result is elementary and presumably well known. We include a proof for the sake of completeness.

Lemma IV.2.1 Let us assume that the random variable X satisfies:

log P{X > i} x -t2 i-^ 00, (IV.7)

and:

log P{X < f } x - f 2 t -> -00 . (IV.8)

Then there exist positive constants po and p\ such that the estimate:

pz + piz2 < \ogE{ezX} < pz + p2z2

holds for all the values of z £ M where p = IE{X}.

Proof: The Laplace method and the assumptions (IV.7) and (IV.8) on the tails of the distri­bution of X imply that:

logE{e**} x z 2 | z | - > o o .

Page 120: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 111

Moreover, the convex function tp(z) = logE{e2*} satisfies ¥>'(0) = fi and <p"(Q) = E{X 2 } - E { X } 2 > 0 because of our assumption on the distribution of X. Conse­quently, the quantity:

\oglE{ezX} - fiz 72

is bounded near the infinities and near the origin. This completes the proof. |

We can now return to the proof of the lower bound. Indeed, it is a simple exercise to prove that the distribution of the maximum of 2d Gaussian random variables satisfies the assumptions (IV.7) and (IV.8) of the above lemma. Since the common expectation of the £j's is a positive number, say a > 0, as a consequence of the above lemma one concludes that:

for some positive constant /?. Consequently the inequality (IV.6) gives:

n P { u ( i , 0) < e7*} < (K p)-*>V(7+2dK)t-nav^+n/?rs2 J J e

p2^ i = l

< e - * * ( ( l / 0 log(«)-7-2<fo+a/v^-/fc-c*)

if one sets p = 1 and n ~ t/r for some r = r(/c) —•• oo. Now, if one chooses z = Z(K) \ 0 faster than r(K)"1 '2 , then the right hand side will be summable if 7 = J(K) is chosen to be smaller than a/r^Ac)1/2. One can now choose:

r = r(«) = (log(l/K))2

and apply the first Borel Cantelli lemma to finish the proof of the lower bound. |

IV.3 Proof of the Upper Bound

The proof of the upper bound relies on a specific procedure of discretization of the time of the random walk. We divide the interval [0,t] into k disjoint subintervals of equal lengths, say h) fy, • • •, h, and we denote the common length by 6 so that t = k8. For each sample path of the continuous time random walk, these intervals are

Page 121: Parabolic Anderson Problem and Intermittency

112 RENE A. CARMONA AND S.A. MOLCHANOV

classified into two disjoint groups. An interval is said to be white if it does not contain an instant of jump of the continuous time random walk Xt and it is said to be black otherwise. We say that T is a nontrivial increment sequence if T = {a?o,xi, • • • ,x n } is a finite sequence with n > 1 of elements X{ € TLd such that |x,| = 1. T = {0} will be called the trivial increment sequence. Notice that the same x can appear several times in such a sequence. If Ti, 1*2, • • *, Yk are increment sequences, i.e.

I\ = {*ifo 13t\ir-•!*!>*}

with \xij\ = 1 then we denote by {1*1,1*2, • • • , I**} the set of sample paths X. of the continuous time random walk which have exactly nt- jumps, say T}j, TJ2, • • •, 7},nt> in the interval I, and for which:

Zt\o = Xrix - ^(t-i)$, «t,i = XT>2 - ^T t l , , £t,n< = Xis - XTin.•

{Ti, T2, • • *, r^} will be called the common skeleton of this set of sample paths. Notice that, on {Ti, 1*2, • • •, !*&}, only the values of the instants of the jumps (and not their number) are free to vary. With this new notation one has:

u(t,x) = £ E « { e x p E / dCt-,(^)]l{r i ,...,r fc}} {ri,-,rk} i=iJIi

= £ E,{l{rii..,rfc} f [ E.{exp[/ dCt-,{X,)]\ r,} {r,,-,rk} i=i JIi

because the sequence of the instants of the jumps is independent of the sequence of the sites visited by the random walk, and because of the independence of the time increments. Now, once the interval /,• and the skeleton T,- in J; are fixed, one has:

/ d{t-s(Xs) = Ct-«(^(t-l)« + xi,0 + *«",1 + 1" Z»',nj-2 + Zi.ni-l)

~ 0-Ti,ni ((^(t-l)5 + xifi + *«",1 H 1" Z;,„i_2 + *i,nt-l) + Ct-T<,n. (-^Ti,,,.-,) - Ct-Ti.n.-, (^Ti,ni_i ) + + Ct-Tii2(XTiil ) - Ct-rtll (*Tj,,)

Page 122: Parabolic Anderson Problem and Intermittency

PARABOLIC ANDERSON PROBLEM 113

+ C*-TM(*(t-l)$) - Ct-(t-l)*(-*(i-l)$)

= Ct-i6{XTitni) ~ Ct-Ti.n. ( ^ , n . )

+ C*-Ti|fl. (*( i - l )* + *t\0 + *t\l + 1" art,n1-2)

~ C«-T<in..i (^(t-l)5 + *t\0 + *t\l + • • • + «tvm-2)

+ + 0-T t | 2(^(,-l)5 + *t,o) ~ Ct-TiA (*(t-l)$ + *t\o) + G-Tu(*(t-1)$) ~ Ct-(t-l)*(*(t-l)$)

Moreover, the number n,- of jumps in I{ is a Poisson random variable with parameter 2dn6 and, conditioned on this number n;, the instants of jumps in I{ have the joint distribution of n, points chosen independently and uniformly over the interval /,-. Consequently one can write:

k

u(t,z) = £ Ex{l{r l f . . .A}«PE ,»( r«)]} {rif...,rfc} f=i

where, for example:

ni^i) = Ct-i6(X(i-l)s) - Ct-(i-l)6(X(i-l)6) when the interval /, = [(i — 1)6, i6) is white. The expression of i/p* is more complicated when the interval /, is black. It depends on the number of jumps nt- and of the skeleton in the interval. It is the logarithm of the integral of the exponential of a multiparameter Gaussian process over the tetrahedron {0 < s\ < «2 < • * • < sni < t} . For example, when n,- = 2 and I \ = {xt\o>£»,i> £*,2}> one has:

r,(Ti) = log (%l I exp[[Ct-« (X(f--i)* + *t-f0 + *,\i) (IV.9)

- Ct-ri,2(*(t-i)£ + *t\o + *i,i)] + [(t-Tit2{X(i-i)6 + Xt|0) - Ct-Titl(X(i_i)s + Xifl)]

+[Ct-Titl&(i-i)6) - Ct-(i-i)6(X{i-i)s)]dslds2) . (IV.10)

Notice that X{$ = ^(,-1)5 + £j,o + ^i.i + *t',2- Notice also that one has:

< e°"(r ') > = ea25/2 (IV.ll)

Page 123: Parabolic Anderson Problem and Intermittency

114 RENE A. CARMONA AND S.A. MOLCHANOV

for all a € R whenever the interval is white. Indeed, if the skeleton { r i , r 2 , • • • ,1^} is known, then -X^t-ijs is also known and the random variable f](Ti) is Af(0,6). Un­fortunately, formula (IV. 11) holds only for a = 1 when the interval is black. This last fact is obvious if one recall the definition of rj(Ti). See for example (IV.9). Let us use the obvious notation {I\} for the set of sample paths of the continuous time random walk which have exactly n,* jumps in the interval /,-, the sizes of the jumps being given by the elements Xij of I\\ Obviously, the {r,}'s are independent. Moreover:

JEAl{ri}}=(-^e-^s. (IV.12)

One can use Schwarz inequality to separate the contribution of the white intervals from the (smaller) contribution of the black intervals. Indeed:

ti(i,0) = E0{exp[ J2 I <Kt-.(X.) + £ / dCt-,(X,)]}

< (« (« ) (« ) i i« ( t ) ) 1 / 8

provided we set:

>>(<) = E0{exp[2 £ / dCt-.(X.)]} (IV.13)

and:

u«>\t) = E0{exp[2 £ / dCt-.(X.))}. (IV.14)

Obviously one has:

7(«) < \ f l i m s u p - l o g t i ^ ( t ) + l imsup-loguW(*)) . (IV.15) *• \ t—»oo * t—»oo * /

But using (IV.9) with a = 1 one has:

\[
\langle u^{(b)}(t) \rangle
= \Big\langle E_0\Big\{ \exp\Big[ 2 \sum_{i:\, I_i\ \mathrm{black}} \int_{I_i} d\zeta_{t-s}(X_s) \Big] \Big\} \Big\rangle
= E_0\Big\{ \Big\langle \exp\Big[ 2 \sum_{i:\, I_i\ \mathrm{black}} \int_{I_i} d\zeta_{t-s}(X_s) \Big] \Big\rangle \Big\}
= E_0\big\{ e^{2\delta N_b} \big\}
\]

where we used the independence of the increments of $\zeta$ and where $N_b$ denotes the number of black intervals. The latter has a binomial distribution with parameters $k$ and $p = 1 - e^{-2d\kappa\delta}$. As a consequence one gets:

\[
\langle u^{(b)}(t) \rangle = \big[ (1 - e^{-2d\kappa\delta})\, e^{2\delta} + e^{-2d\kappa\delta} \big]^k.
\]
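This is just the moment generating function of the binomial distribution evaluated at $e^{2\delta}$; written out,
\[
E_0\big\{ e^{2\delta N_b} \big\} = \sum_{j=0}^{k} \binom{k}{j}\, p^j (1-p)^{k-j}\, e^{2\delta j} = \big( p\, e^{2\delta} + 1 - p \big)^{k}, \qquad p = 1 - e^{-2d\kappa\delta}.
\]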

Chebychev's inequality implies:

\[
P\big\{ u^{(b)}(k\delta) > k^2 \langle u^{(b)}(k\delta) \rangle \big\} \le \frac{1}{k^2}
\]

and the first Borel-Cantelli lemma gives:

\[
\limsup_{k\to\infty} \frac{1}{k\delta} \log u^{(b)}(k\delta) \le \frac{1}{\delta} \log\big[ (1 - e^{-2d\kappa\delta})\, e^{2\delta} + e^{-2d\kappa\delta} \big] \qquad \mathrm{(IV.16)}
\]

almost surely. For $\delta \to \infty$ one has:
\[
\frac{1}{\delta} \log\big[ (1 - e^{-2d\kappa\delta})\, e^{2\delta} + e^{-2d\kappa\delta} \big]
\le \frac{1}{\delta}\, (1 - e^{-2d\kappa\delta})\, e^{2\delta}
\le 2c\, d\kappa\, e^{2\delta}
\]

for some constant $c > 1$. We shall choose (later) the free parameter $\delta$ giving the length of the intervals to be of the form $\delta = \delta' \log(1/\kappa)$ for some small parameter $\delta' > 0$ independent of $\kappa$. In this case the left hand side of (IV.16) is smaller than the upper bound which we shall derive for the contribution of the white intervals. Consequently the proof of the upper bound reduces to the estimation of the quantity $u^{(w)}(t)$ for large $t$ and small $\kappa$. Notice that we considered $t \to \infty$ restricted to the subsequence $k\delta$. This is enough because we already know that the almost sure Lyapunov exponent exists and hence its value can be computed from any deterministic subsequence. Let us introduce the following notation. For a given skeleton sequence $\bar\Gamma = \{\Gamma_1, \ldots, \Gamma_k\}$ we denote by $k_0 = k_0(\bar\Gamma)$ the number of white intervals, i.e. the number of $i$'s for which $n_i = 0$. Here we use the notation $n_i = |\Gamma_i| - 1$. One should think of $n_i$ as the number of jumps in the interval $I_i$. For each nontrivial increment sequence $\Gamma = \{x_0, x_1, \ldots, x_n\}$


we define the integer $k(\bar\Gamma, \Gamma)$ as the number of (black) intervals $I_i$ for which $\Gamma_i = \Gamma$. Obviously one has:

\[
k = k_0(\bar\Gamma) + \sum_{|\Gamma| \ge 2} k(\bar\Gamma, \Gamma).
\]

Notice also that, for each fixed sequence of integers $k_0$, $k(\Gamma)$ for $|\Gamma| \ge 2$, the number of skeleton sequences $\bar\Gamma = (\Gamma_1, \ldots, \Gamma_k)$ of length $k$ for which $k(\bar\Gamma, \Gamma) = k(\Gamma)$ for all $\Gamma$ is given by:

\[
\frac{k!}{k_0!\, \prod_{|\Gamma| \ge 2} k(\Gamma)!}.
\]
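This is the usual multinomial count. It is what allows one, further down, to resum over the integers $k_0$ and $k(\Gamma)$ instead of over the skeleton sequences themselves, via the multinomial theorem (a standard identity which we recall for convenience):
\[
\sum_{k_0 + \sum_\Gamma k(\Gamma) = k} \frac{k!}{k_0!\, \prod_{|\Gamma|\ge 2} k(\Gamma)!}\; y_0^{\,k_0} \prod_{|\Gamma|\ge 2} y_\Gamma^{\,k(\Gamma)} = \Big( y_0 + \sum_{|\Gamma|\ge 2} y_\Gamma \Big)^{k}.
\]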

Moreover, the statistical weight of such a skeleton sequence is given by the probability that the continuous time random walk has, for $i = 1, \ldots, k$, exactly $n_i$ jumps in the interval $I_i$, namely:

\[
\prod_{i=1}^{k} \frac{(\kappa\delta)^{n_i}}{n_i!}\; e^{-2d\kappa k\delta}.
\]

Let us choose a sequence $a_0, \ldots, a_n, \ldots$ of positive constants and let us consider the event (defined on the probability space of the random potential $d\zeta_t(x)$):

\[
A^{(k)}(\Gamma_1, \ldots, \Gamma_k) = \Big\{ \sum_{1\le i\le k;\; n_i = 0} \int_{I_i} d\zeta_{t-s}(x^{(i)}) > a_0 k_0 + \sum_{|\Gamma| \ge 2} a_{|\Gamma|}\, k(\Gamma) \Big\}
\]

where $x^{(i)}$ denotes the position at the beginning of the time interval $I_i$ given by the skeleton, i.e.

\[
x^{(i)} = X_0 + \sum_{i'=1}^{i-1} \sum_{j=0}^{n_{i'}} x_{i',j}.
\]

But because of the very definition of a white interval one has:

\[
\sum_{1\le i\le k;\; n_i = 0} \int_{I_i} d\zeta_{t-s}(x^{(i)}) \sim \mathcal{N}(0, k_0\delta).
\]

Using the standard upper bound:

\[
P\{\zeta > \lambda\} \le e^{-\lambda^2/2}
\]

which holds for $\zeta \sim \mathcal{N}(0,1)$ and $\lambda > 0$, we get:


\[
P\{ A^{(k)}(\Gamma_1, \ldots, \Gamma_k) \}
\le \exp\Big[ -\frac{1}{2k_0\delta} \Big( a_0 k_0 + \sum_{|\Gamma|\ge 2} a_{|\Gamma|}\, k(\Gamma) \Big)^2 \Big]
\le \exp\Big[ -\frac{a_0^2 k_0}{2\delta} - \frac{a_0}{\delta} \sum_{|\Gamma|\ge 2} a_{|\Gamma|}\, k(\Gamma) \Big]. \qquad \mathrm{(IV.17)}
\]
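Both inequalities in (IV.17) are elementary (we spell them out for convenience): the first is the Gaussian tail bound applied to a variable of variance $k_0\delta$, and the second drops the nonnegative term $S^2$ from the expanded square:
\[
P\{ \mathcal{N}(0, k_0\delta) > \lambda \} \le e^{-\lambda^2/(2k_0\delta)},
\qquad
(a_0 k_0 + S)^2 \ge a_0^2 k_0^2 + 2 a_0 k_0 S \quad \text{with } S = \sum_{|\Gamma|\ge 2} a_{|\Gamma|}\, k(\Gamma) \ge 0,
\]
so that dividing the exponent by $2k_0\delta$ gives the second bound.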

The strategy for the end of the proof is the following. We choose the sequence $a_0, a_1, \ldots$ in such a way that:

\[
\sum_{\{\Gamma_1, \ldots, \Gamma_k\}} P\{ A^{(k)}(\Gamma_1, \ldots, \Gamma_k) \} \le q^k \qquad \mathrm{(IV.18)}
\]

for some positive real number $q < 1$. Then, the first Borel-Cantelli lemma gives the almost sure existence of an integer $k' = k'(\omega)$ such that, whenever $\{\Gamma_1, \ldots, \Gamma_k\}$ is a skeleton of length $k > k'$, then:

\[
\sum_{1\le i\le k;\; n_i = 0} \int_{I_i} d\zeta_{t-s}(x^{(i)}) \le a_0 k_0 + \sum_{|\Gamma| \ge 2} a_{|\Gamma|}\, k(\Gamma).
\]
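Indeed, (IV.18) makes these probabilities summable in $k$ (a one-line check):
\[
\sum_{k\ge 1} P\Big\{ \bigcup_{\{\Gamma_1,\ldots,\Gamma_k\}} A^{(k)}(\Gamma_1,\ldots,\Gamma_k) \Big\} \le \sum_{k\ge 1} q^k = \frac{q}{1-q} < \infty ,
\]
which is exactly what the first Borel-Cantelli lemma requires.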

Consequently, for such a $k > k'(\omega)$ one has:

\[
\begin{aligned}
u^{(w)}(t, 0)
&\le \sum_{\{\Gamma_1,\ldots,\Gamma_k\}} \prod_{i=1}^{k} E_0\{\mathbf{1}_{\{\Gamma_i\}}\}\; \exp\Big[ 2 \sum_{i;\, |\Gamma_i| = 1} \int_{I_i} d\zeta_{t-s}(x^{(i)}) \Big] \\
&\le \sum_{\{\Gamma_1,\ldots,\Gamma_k\}} e^{-2d\kappa k\delta} \prod_{i=1}^{k} \frac{(\kappa\delta)^{n_i}}{n_i!}\; \exp\Big[ 2 a_0 k_0 + 2 \sum_{|\Gamma|\ge 2} a_{|\Gamma|}\, k(\Gamma) \Big] \\
&\le e^{-2d\kappa k\delta} \sum_{k_0 + \sum_\Gamma k(\Gamma) = k} \frac{k!}{k_0!\, \prod_{|\Gamma|\ge 2} k(\Gamma)!}\; \prod_{|\Gamma|\ge 2} \Big[ \frac{(\kappa\delta)^{|\Gamma|-1}}{(|\Gamma|-1)!} \Big]^{k(\Gamma)} \exp\Big[ 2 a_0 k_0 + 2 \sum_{|\Gamma|\ge 2} a_{|\Gamma|}\, k(\Gamma) \Big] \qquad \mathrm{(IV.19)}
\end{aligned}
\]

if one remembers that the definition of $u^{(w)}(t)$ involves only white intervals and if one computes the expectation $E_0\{\cdot\}$ appearing in the definition (IV.13) of $u^{(w)}(t)$ by first summing over all the possible skeletons $\{\Gamma_1, \ldots, \Gamma_k\}$ of length $k$. The upper bound appearing in the right hand side depends on the skeleton only via the numbers $|\Gamma_i|$ and the integers $k(\Gamma_i)$. Consequently one can sum over the various choices of these integers. One can for example sum first over all the possible choices of the integers $k_0$ and $k(\Gamma)$, $|\Gamma| \ge 2$, summing up to $k$. One gets:

\[
u^{(w)}(t, 0) \le e^{-2d\kappa k\delta} \Big( e^{2a_0} + \frac{2d\kappa\delta}{1!}\, e^{2a_2} + \cdots + \frac{(2d\kappa\delta)^n}{n!}\, e^{2a_{n+1}} + \cdots \Big)^{k} \qquad \mathrm{(IV.20)}
\]

because there exist exactly $2d$ different $\Gamma$'s with $|\Gamma| = 2$, $(2d)^2$ $\Gamma$'s with $|\Gamma| = 3$, and so on. The same sort of calculations (using again (IV.17)) show that, for each fixed $k$, one also has:

\[
\sum_{\{\Gamma_1,\ldots,\Gamma_k\}} P\{ A^{(k)}(\Gamma_1, \ldots, \Gamma_k) \}
\le \Big( e^{-a_0^2/(2\delta)} + \sum_{|\Gamma|\ge 2} e^{-a_0 a_{|\Gamma|}/\delta} \Big)^{k}
\le \Big( e^{-a_0^2/(2\delta)} + \sum_{\ell=2}^{\infty} (2d)^{\ell}\, e^{-a_0 a_{\ell}/\delta} \Big)^{k}. \qquad \mathrm{(IV.21)}
\]
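The passage from the sum over $\Gamma$ to the sum over $\ell = |\Gamma|$ only uses the number of possible skeletons of a given size (a counting remark we add for the reader): a $\Gamma$ with $|\Gamma| = \ell$ is determined by its $\ell - 1$ jumps, each taken among the $2d$ nearest neighbor steps, so that
\[
\#\{ \Gamma : |\Gamma| = \ell \} = (2d)^{\ell-1} \le (2d)^{\ell},
\qquad
\sum_{|\Gamma|\ge 2} e^{-a_0 a_{|\Gamma|}/\delta} = \sum_{\ell=2}^{\infty} (2d)^{\ell-1}\, e^{-a_0 a_\ell/\delta} \le \sum_{\ell=2}^{\infty} (2d)^{\ell}\, e^{-a_0 a_\ell/\delta} .
\]
The same count is behind the coefficients $(2d\kappa\delta)^n/n!$ in (IV.20).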

Consequently we have:

\[
\gamma^{(w)}(\kappa) = \limsup_{t\to\infty} \frac{1}{t} \log u^{(w)}(t)
\le -2d\kappa + \frac{1}{\delta} \log\Big( e^{2a_0} + \sum_{\ell=2}^{\infty} \frac{(2d\kappa\delta)^{\ell-1}}{(\ell-1)!}\, e^{2a_\ell} \Big)
\]
provided the condition:

\[
e^{-a_0^2/(2\delta)} + \sum_{\ell=2}^{\infty} (2d)^{\ell}\, e^{-a_0 a_\ell/\delta} < 1
\]

is satisfied. Indeed, because of (IV.21) this last condition implies that (IV.18) is satisfied so that one can use the first Borel-Cantelli lemma. At this stage of the proof, the right thing to do would be to solve the optimization problem:

\[
\inf_{(a_0, a_1, \ldots)} \frac{1}{\delta} \log\Big( e^{2a_0} + \sum_{\ell=2}^{\infty} \frac{(2d\kappa\delta)^{\ell-1}}{(\ell-1)!}\, e^{2a_\ell} \Big)
\]


under the condition (constraint):

\[
e^{-a_0^2/(2\delta)} + \sum_{\ell=2}^{\infty} (2d)^{\ell}\, e^{-a_0 a_\ell/\delta} = 1.
\]

Indeed, the solution of this minimization problem would give the best upper bound on $\gamma^{(w)}(\kappa)$ in the present approach. We shall limit ourselves to the construction of an approximate solution. We choose:

\[
a_0 = (1 + \delta_1) \log\log(1/\kappa) \qquad \text{and} \qquad a_\ell = \ell\delta, \quad \ell = 2, 3, \ldots
\]

and $\delta = \delta' \log(1/\kappa)$ for some small $\delta' > 0$ as in the estimation of $\gamma^{(b)}(\kappa)$. Under these conditions we have:

\[
e^{-a_0^2/(2\delta)} + \sum_{\ell=2}^{\infty} (2d)^{\ell}\, e^{-a_0 a_\ell/\delta}
\le e^{-a_0^2/(2\delta)} + 2 (2d)^2\, e^{-2a_0}
\]

provided $\kappa$ is small enough so that $2(2d)^2 e^{-2a_0} < 1$. It is then easy to see that the above right hand side is strictly smaller than $1$ with the choices of $a_0$ and $\delta$ which we made. Moreover, in the asymptotic regime we chose we have:

\[
e^{2a_0} + \sum_{\ell=2}^{\infty} \frac{(2d\kappa\delta)^{\ell-1}}{(\ell-1)!}\, e^{2a_\ell} \sim e^{2a_0}
\]

as $\kappa \to 0$ provided $\delta' < 1/2$. Consequently we proved:

\[
\gamma^{(w)}(\kappa) \le \frac{2a_0}{\delta} - 2d\kappa \le \frac{2(1+\delta_1)}{\delta'}\, \frac{\log\log(1/\kappa)}{\log(1/\kappa)}
\]

and this concludes the proof of the upper bound. $\blacksquare$
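As a purely numerical illustration of the orders of magnitude obtained above (it is not part of the proof; the parameter values $d$, $\delta_1$, $\delta'$ and the function names below are ours), one can evaluate the white-interval bound $2a_0/\delta - 2d\kappa$ and the black-interval bound given by the right hand side of (IV.16) for a few small values of $\kappa$, and compare them with $\log\log(1/\kappa)/\log(1/\kappa)$:

import math

# Numerical illustration (ours) of the bounds at the end of Chapter IV, with
# a_0 = (1 + delta1) * loglog(1/kappa) and delta = deltap * log(1/kappa).

def white_bound(kappa, d=1, delta1=0.1, deltap=0.1):
    # white-interval contribution:  2 a_0 / delta - 2 d kappa
    delta = deltap * math.log(1.0 / kappa)
    a0 = (1.0 + delta1) * math.log(math.log(1.0 / kappa))
    return 2.0 * a0 / delta - 2.0 * d * kappa

def black_bound(kappa, d=1, deltap=0.1):
    # right hand side of (IV.16):
    # (1/delta) * log[(1 - e^{-2 d kappa delta}) e^{2 delta} + e^{-2 d kappa delta}]
    delta = deltap * math.log(1.0 / kappa)
    p = 1.0 - math.exp(-2.0 * d * kappa * delta)
    return math.log(p * math.exp(2.0 * delta) + (1.0 - p)) / delta

if __name__ == "__main__":
    for kappa in (1e-3, 1e-6, 1e-9, 1e-12):
        ref = math.log(math.log(1.0 / kappa)) / math.log(1.0 / kappa)
        print(f"kappa={kappa:.0e}  white={white_bound(kappa):.4f}  "
              f"black={black_bound(kappa):.4e}  loglog/log={ref:.4f}")

The white-interval term is a constant multiple of $\log\log(1/\kappa)/\log(1/\kappa)$, while the black-interval term is of smaller order, in agreement with the choice of $\delta$ made in the proof.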


Chapter V

CONCLUDING REMARKS

We conclude the present memoir with a discussion of the above results and conjectures from the point of view of the qualitative behavior of the solution $u(t, x)$ of the random parabolic equation (I.1), which we understand as a stochastic partial differential equation (in the sense of Stratonovich) when the time correlation length is zero.

When $\kappa \ll 1$, the almost sure Lyapunov exponent $\gamma(\kappa) < \gamma_1(0)/2$ and Ito's solution $u^{(I)}(t,x) = \exp[-\gamma_1(0) t/2]\, u(t,x)$ tends to $0$ exponentially fast. Moreover, all the statistical moments of order $p \ge 2$ tend to infinity. This means that all the particles (remember that $u(t,x)$ has the interpretation of a mean density of particles!) are concentrated, for large times $t$, in a system of rare (with respect to $t$) exponential peaks, the space between these peaks being essentially empty. This is the analog of the localization phenomenon so typical of the classical Anderson model in quantum disordered systems. This model is concerned with the random self-adjoint Schrödinger operator $H = \kappa\Delta + \xi(x)$ with a time independent (stationary in our terminology) potential. In fact, in the classical Anderson model, the $\xi(x)$ are assumed to be i.i.d. random variables. Exponential localization is known to hold (see for example [6]) for all $\kappa$ in dimension $d = 1$. It is expected to hold for all $\kappa$ in dimension $d = 2$, and it is also proven in all dimensions $d \ge 1$ for sufficiently small $\kappa$. The conjectures we formulated in Chapter IV above concerning the almost sure Lyapunov exponent are mere translations of these facts to the nonstationary case.

If $\kappa \gg 1$ and $d \ge 3$ we saw that:

\[
\gamma(\kappa) = \gamma_1(0)/2 \ne 0,
\]


that the finite dimensional marginal distributions of the solution $u^{(I)}(t,x)$ of the Ito problem are tight and that the second moment converges to a finite limit. In fact, we suspect (but we cannot prove) that this Ito solution has a nontrivial limit in distribution when $t \to \infty$. In other words we conjecture that:

\[
u^{(I)}(t, \cdot) \Longrightarrow u^{(I)}(\infty, \cdot)
\]

for some nontrivial limit. We have intermittency of the highest moments for all $\kappa$, i.e. we still have a system of high peaks, but between these peaks we now have a nondegenerate (bounded in probability) density of particles. This phenomenon can be considered as the analog, in the nonstationary case, of the existence of extended states and absolutely continuous spectrum in dimensions $d \ge 3$ in the case of the classical Anderson model with $\kappa \gg 1$. Let us emphasize that this last result has not been proved rigorously even though it is believed to be true. The critical value $\kappa_{cr}$ plays, in the nonstationary case, the role of the (hypothetical) Anderson mobility edge.


Bibliography

[1] H. S. Ahn, R. Carmona and S.A. Molchanov (1992): Parabolic Equations with a Lévy Random Potential. (to appear in the Proc. Charlotte Conf. on Stochast. Part. Diff. Equations, Lect. Notes in Control Theory & Info. Science).

[2] A. Antoniadis and R. Carmona (1987): Eigenfunction Expansions for Infinite Dimensional Ornstein-Uhlenbeck Processes. Probab. Th. Rel. Fields 74, 31-54.

[3] M. Avellaneda and A.J. Majda (1990): Mathematical Models with Exact Renormalization for Turbulent Transport. Commun. Math. Phys. 131, 381-429.

[4] R. Carmona (1987): Tensor Products of Gaussian Measures. Lect. Notes in Math. 644, 96-124. Springer Verlag, New York, N.Y.

[5] R. Carmona, J. Gartner and S.A. Molchanov (1990): Large Time Asymptotics for the Stationary Anderson Parabolic Model, (preprint)

[6] R. Carmona and J. Lacroix (1990): Spectral Theory of Random Schrödinger Operators. Birkhäuser, Boston.

[7] R. Carmona and S.A. Molchanov (1991): Intermittency and Phase Transitions for some Particle Systems in Random Media, (to appear in the Proc. Katata Symp. June 1990)

[8] R. Carmona, S.A. Molchanov and J. Noble (1992): Intermittency for the Continuous Parabolic Anderson Model, (preprint)

[9] D.A. Dawson and G. Ivanoff (1978): Branching Diffusions and Random Measures. Adv. Probab. Relat. Top. 5, 61-104.

[10] J. Gartner and S.A. Molchanov (1990): Parabolic Problems for the Anderson Model. Comm. Math. Phys. 132, 613-655.


[11] L. Gross (1967): Abstract Wiener Spaces, in Sixth Berkeley Symp. on Math. Statist. and Probab. II.

[12] J.F.C. Kingman (1977): Subadditive Processes, in École d'Été de Probabilités de Saint-Flour, Lect. Notes in Math. 539, 168-223. Springer Verlag, New York, N.Y.

[13] H.H. Kuo (1975): Gaussian Measures on Banach Spaces. Lect. Notes in Math. 463. Springer Verlag, New York, N.Y.

[14] N. Ikeda and S. Watanabe (1980): Stochastic Differential Equations and Diffusion Processes. North Holland, New York, N.Y.

[15] S.A. Molchanov (1991): Ideas in the Theory of Random Media. Acta Applicandae Math. 22, 139-282.

[16] S.A. Molchanov and L.I. Pitterbarg (1987): Averaging in Turbulent Diffusion Problems. Prob. Theor. and Rand. Processes. 310, 35-47.

[17] S.A. Molchanov, S.A. Ruzmajkin and D.D. Sokolov (1984): Kinematic Dynamos in Random Flows. Sov. Phys. Uspehi 145, #4 .

[18] M. Reed and B. Simon (1983): Methods of Modern Mathematical Physics. II: Fourier Analysis, Self-Adjointness. Academic Press, New York, N.Y.

[19] M. Reed and B. Simon (1978): Methods of Modern Mathematical Physics. IV: Analysis of Operators. Academic Press, New York, N.Y.

[20] Rozovskii (1991): Stochastic Evolution Equations. Kluwer.

[21] B. Simon (1982): Schrödinger Semigroups. Bull. Amer. Math. Soc. 7, 447-526.

[22] D.W. Stroock and S.R.S. Varadhan (1970): On the Support of Diffusion Processes with Applications to the Strong Maximum Principle. Proc. Sixth Berkeley Symp. Probab. Math. Statist. III, 333-360.

[23] D.W. Stroock and S.R.S. Varadhan (1980): Diffusion Processes in $\mathbb{R}^n$. Springer Verlag, New York, N.Y.

[24] J.B. Walsh (1981): A stochastic model of neural response. Adv. Appl. Probab. 13, 231-281.


[25] Ya. B. Zeldovich, S.A. Molchanov, S.A. Ruzmajkin and D.D. Sokolov (1985): Intermittency of Passive Fields in Random Media. J. of Experim. and Theor. Phys. 89.

[26] Ya. B. Zeldovich, S.A. Molchanov, S.A. Ruzmajkin and D.D. Sokolov (1988): Intermittency, Diffusion and Generation in a Nonstationary Random Medium. Sov. Sci. Rev. Sect. C: Math. Phys. Rev. 7, 1-110.

Rene A. Carmona and Stanislav Molchanov

Department of Mathematics, University of California at Irvine

Irvine, CA 92717, USA
