

Numerical methods for stochastic systems subject

to generalized Lévy noise

by

Mengdi Zheng

Sc.B. in Physics, Zhejiang University; Hangzhou, Zhejiang, China, 2008

Sc.M. in Physics, Brown University; Providence, RI, USA, 2010

Sc.M. in Applied Math, Brown University; Providence, RI, USA, 2011

A dissertation submitted in partial fulfillment of the

requirements for the degree of Doctor of Philosophy

in The Division of Applied Mathematics at Brown University

PROVIDENCE, RHODE ISLAND

May 2015


© Copyright 2015 by Mengdi Zheng

This dissertation by Mengdi Zheng is accepted in its present form

by The Division of Applied Mathematics as satisfying the

dissertation requirement for the degree of Doctor of Philosophy.

Date

George Em Karniadakis, Ph.D., Advisor

Recommended to the Graduate Council

Date

Hui Wang, Ph.D., Reader

Date

Xiaoliang Wan, Ph.D., Reader

Approved by the Graduate Council

Date

Peter Weber, Dean of the Graduate School


Vitae

Born on September 4, 1986, in Hangzhou, Zhejiang, China.

Education

• Sc.M. in Applied Math, Brown University; Providence, RI, USA, 2011

• Sc.M. in Physics, Brown University; Providence, RI, USA, 2010

• Sc.B. in Physics, Zhejiang University; Hangzhou, Zhejiang, China, 2008

Publications

• M. Zheng, G.E. Karniadakis, 'Numerical Methods for SPDEs Driven by Multi-dimensional Lévy Jump Processes', in preparation.

• M. Zheng, B. Rozovsky, G.E. Karniadakis, 'Adaptive Wick-Malliavin Approximation to Nonlinear SPDEs with Discrete Random Variables', SIAM J. Sci. Comput., accepted.

• M. Zheng, G.E. Karniadakis, 'Numerical Methods for SPDEs with Tempered Stable Processes', SIAM J. Sci. Comput., accepted.

• M. Zheng, X. Wan, G.E. Karniadakis, 'Adaptive Multi-element Polynomial Chaos with Discrete Measure: Algorithms and Application to SPDEs', Applied Numerical Mathematics (2015), pp. 91-110. doi:10.1016/j.apnum.2014.11.006


Acknowledgements

I would like to thank my advisor, Professor George Karniadakis, for his great support and guidance throughout all my years of graduate school. I would also like to thank my committee, Professor Hui Wang and Professor Xiaoliang Wan, for taking the time to read my thesis.

In addition, I would like to thank the many collaborators I have had the opportunity to work with on various projects. In particular, I thank Professor Xiaoliang Wan for his patience in answering all of my questions and for his advice and help during our work on adaptive multi-element stochastic collocation methods. I thank Professor Boris Rozovsky for offering his innovative ideas and educational discussions during our work on the Wick-Malliavin approximation for nonlinear stochastic partial differential equations driven by discrete random variables.

I would like to gratefully acknowledge the support of the NSF/DMS (grant DMS-0915077) and the Air Force MURI (grant FA9550-09-1-0613).

Lastly, I thank all my friends, and all current and former members of the CRUNCH group, for their company and encouragement. I would like to thank all of the wonderful professors and staff at the Division of Applied Mathematics for making graduate school a rewarding experience.


Abstract of "Numerical methods for stochastic systems subject to generalized Lévy noise" by Mengdi Zheng, Ph.D., Brown University, May 2015

In this thesis, we aim to improve the accuracy and efficiency of uncertainty quantification (UQ) for stochastic partial differential equations (SPDEs) driven by Lévy jump processes, which are non-Gaussian and discontinuous. In the past literature, this problem has been treated mostly by Monte Carlo (MC) methods. Here we apply probabilistic methods, such as generalized polynomial chaos (gPC), as well as deterministic methods, such as the generalized Fokker-Planck (FP) equation.
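The MC baseline referenced here starts from sample paths of the driving jump process. For the simplest case, a Poisson process, a path can be sampled by the random-walk construction from i.i.d. exponential inter-arrival times. The sketch below is illustrative Python/NumPy, not the thesis's code; `poisson_arrivals` is a hypothetical name:

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_arrivals(lam, T):
    """Arrival times of a rate-lam Poisson process on [0, T],
    built from i.i.d. Exp(lam) inter-arrival times."""
    times = []
    t = rng.exponential(1.0 / lam)
    while t <= T:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    return np.array(times)

# Monte Carlo estimate of E[N_T] (exact value: lam * T)
lam, T, n_paths = 10.0, 1.0, 2000
mean_count = np.mean([len(poisson_arrivals(lam, T)) for _ in range(n_paths)])
```

With 2,000 paths the estimate of E[N_T] fluctuates around λT = 10 with a standard error of roughly 0.07, which illustrates the slow O(s^{-1/2}) MC convergence that motivates the gPC and FP alternatives studied in this thesis.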

We first apply gPC to a nonlinear stochastic Korteweg-de Vries (KdV) equation with multiple discrete random variables (RVs) of arbitrary distributions with finite moments, using an adaptive multi-element probabilistic collocation method (ME-PCM). We prove and verify the h-p convergence.
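For a discrete measure, collocation needs Gauss quadrature nodes and weights generated numerically. The sketch below is a hedged illustration, not the thesis's implementation (`stieltjes` and `gauss_rule` are invented names), of one route discussed in Chapter 3: the Stieltjes procedure for the three-term recurrence coefficients, followed by the Golub-Welsch eigenvalue step, here for ξ ∼ Bino(10, 1/2):

```python
import numpy as np
from math import comb

def stieltjes(x, w, n):
    """Recurrence coefficients (a_j, b_j) of the monic polynomials
    orthogonal w.r.t. the discrete measure sum_k w_k * delta_{x_k}."""
    a, b = np.zeros(n), np.zeros(n)
    b[0] = w.sum()                       # total mass, used by Golub-Welsch
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    nrm_prev = 1.0
    for j in range(n):
        nrm = np.dot(w, p * p)
        a[j] = np.dot(w, x * p * p) / nrm
        if j > 0:
            b[j] = nrm / nrm_prev
        # monic recurrence: p_{j+1} = (x - a_j) p_j - b_j p_{j-1}
        p, p_prev = (x - a[j]) * p - (b[j] * p_prev if j > 0 else 0.0), p
        nrm_prev = nrm
    return a, b

def gauss_rule(a, b):
    """Golub-Welsch: nodes/weights from the symmetric Jacobi matrix."""
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, b[0] * V[0, :] ** 2

# Two-node Gauss rule for xi ~ Bino(10, 1/2)
x = np.arange(11, dtype=float)
w = np.array([comb(10, k) for k in range(11)], dtype=float) / 2**10
nodes, weights = gauss_rule(*stieltjes(x, w, 2))
```

A collocation method then evaluates the deterministic solver once per node and assembles moments as weighted sums; an n-node rule integrates polynomials of degree up to 2n − 1 exactly against the discrete measure.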

Second, we improve the efficiency of gPC for a nonlinear stochastic Burgers equation with multiple discrete RVs. We propose an adaptive Wick-Malliavin (WM) expansion in terms of the Malliavin derivative of order Q to simplify the highly coupled gPC propagator of order P and to control the error growth over time through P-Q adaptivity. We observe exponential convergence with respect to Q when Q ≥ P − 1, and we compare the computational complexity of gPC and WM in high dimensions.

Third, we develop probabilistic and deterministic approaches for the moment statistics of SPDEs driven by one-dimensional pure-jump tempered α-stable (TαS) Lévy processes. We show that the probabilistic collocation method (PCM) is more efficient than MC in low dimensions. The generalized FP equation in this setting is a tempered fractional PDE (TFPDE); we demonstrate the agreement between histograms from MC and densities from the TFPDE, and we observe that moment statistics from the TFPDE achieve higher accuracy than PCM at a lower cost.
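The compound Poisson (CP) approximation used for such processes discards jumps smaller than a cutoff δ, so the remaining large jumps form a compound Poisson process. Below is a hedged sketch for a TαS subordinator with Lévy measure ν(dx) = c x^(−1−α) e^(−λx) dx, x > 0; it is illustrative only (`cp_jumps_tas` is an invented name, and the drift compensation for the discarded small jumps is omitted):

```python
import numpy as np

rng = np.random.default_rng(7)

def cp_jumps_tas(c, alpha, lam, delta, T):
    """Jumps larger than delta of a TaS subordinator on [0, T]:
    Levy measure nu(dx) = c * x**(-1-alpha) * exp(-lam*x) dx, x > 0."""
    # arrival rate of jumps above the cutoff: integral of nu over (delta, inf),
    # approximated by the trapezoid rule on a truncated grid
    xs = np.linspace(delta, delta + 50.0 / lam, 200000)
    nu = c * xs ** (-1.0 - alpha) * np.exp(-lam * xs)
    rate = float(np.sum(0.5 * (nu[1:] + nu[:-1]) * np.diff(xs)))
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n))
    sizes = np.empty(n)
    for i in range(n):
        while True:  # rejection: Pareto(alpha) proposal, tempered by exp(-lam*.)
            y = delta * rng.uniform() ** (-1.0 / alpha)
            if rng.uniform() < np.exp(-lam * (y - delta)):
                sizes[i] = y
                break
    return times, sizes

times, sizes = cp_jumps_tas(c=1.0, alpha=0.5, lam=3.0, delta=0.01, T=0.5)
path_at_T = sizes.sum()   # subordinator value at T (large-jump part only)
```

A smaller δ reduces the truncation bias but raises the jump arrival rate, and hence the cost; this is the accuracy/cost trade-off examined in Chapter 5.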

Fourth, we extend the probabilistic (MC, PCM) and deterministic (FP) approaches to SPDEs driven by multi-dimensional Lévy jump processes. We combine the analysis-of-variance (ANOVA) decomposition with the FP equation to obtain moment statistics. We show the agreement in densities between MC and FP, and we observe that PCM is more efficient than MC for moment statistics. We hope this work will inspire researchers to consider methods beyond MC for simulating stochastic systems driven by Lévy jump processes.
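The ANOVA truncation combined with the FP equation can be written schematically; the following is the standard ANOVA notation, a hedged sketch rather than the thesis's exact formulation. Truncating at second-order interactions (the 2D-ANOVA-FP of Chapter 6) leaves only one- and two-dimensional densities to solve for:

```latex
% Second-order ANOVA truncation of the joint density P(u_1,\dots,u_d,t):
P(u_1,\dots,u_d,t) \;\approx\; P_0(t)
  \;+\; \sum_{i=1}^{d} P_i(u_i,t)
  \;+\; \sum_{1 \le i < j \le d} P_{ij}(u_i,u_j,t)
```

Each term P_i, P_ij is then obtained from a low-dimensional generalized FP equation instead of solving a single d-dimensional equation.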


Contents

Vitae iv

Acknowledgments vi

1 Introduction 1
  1.1 Motivation 2
    1.1.1 Computational limitations for UQ of nonlinear SPDEs 3
    1.1.2 Computational limitations for UQ of SPDEs driven by Lévy jump processes 4
  1.2 Introduction of TαS Lévy jump processes 5
  1.3 Organization of the thesis 7

2 Simulation of Lévy jump processes 9
  2.1 Random walk approximation to Poisson processes 10
  2.2 KL expansion for Poisson processes 11
  2.3 Compound Poisson approximation to Lévy jump processes 13
  2.4 Series representation of Lévy jump processes 18

3 Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs 20
  3.1 Notation 21
  3.2 Generation of orthogonal polynomials for discrete measures 22
    3.2.1 Nowak method 23
    3.2.2 Stieltjes method 24
    3.2.3 Fischer method 25
    3.2.4 Modified Chebyshev method 26
    3.2.5 Lanczos method 28
    3.2.6 Gaussian quadrature rule associated with a discrete measure 30
    3.2.7 Orthogonality tests of numerically generated polynomials 31
  3.3 Discussion of the error of numerical integration 34
    3.3.1 Theorem of numerical integration on a discrete measure 34
    3.3.2 Testing numerical integration with one RV 41
    3.3.3 Testing numerical integration with multiple RVs on sparse grids 42
  3.4 Application to stochastic reaction equation and KdV equation 46
    3.4.1 Reaction equation with discrete random coefficients 46
    3.4.2 KdV equation with random forcing 48
  3.5 Conclusion 56

4 Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs 58
  4.1 Notation 59
  4.2 WM approximation 59
    4.2.1 WM series expansion 60
    4.2.2 WM propagators 64
  4.3 Moment statistics by WM approximation of stochastic reaction equations 67
    4.3.1 Reaction equation with one RV 67
    4.3.2 Reaction equation with multiple RVs 70
  4.4 Moment statistics by WM approximation of stochastic Burgers equations 72
    4.4.1 Burgers equation with one RV 72
    4.4.2 Burgers equation with multiple RVs 75
  4.5 Adaptive WM method 77
  4.6 Computational complexity 78
    4.6.1 Burgers equation with one RV 79
    4.6.2 Burgers equation with d RVs 82
  4.7 Conclusions 84

5 Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes 86
  5.1 Literature review of Lévy flights 87
  5.2 Notation 89
  5.3 Stochastic models driven by tempered stable white noises 89
  5.4 Background of TαS processes 91
  5.5 Numerical simulation of 1D TαS processes 94
    5.5.1 Simulation of 1D TαS processes by CP approximation 94
    5.5.2 Simulation of 1D TαS processes by series representation 97
    5.5.3 Example: simulation of inverse Gaussian subordinators by CP approximation and series representation 97
  5.6 Simulation of a stochastic reaction-diffusion model driven by TαS white noises 100
    5.6.1 Comparing CP approximation and series representation in MC 101
    5.6.2 Comparing CP approximation and series representation in PCM 102
    5.6.3 Comparing MC and PCM in CP approximation or series representation 108
  5.7 Simulation of a 1D stochastic overdamped Langevin equation driven by TαS white noises 109
    5.7.1 Generalized FP equations for overdamped Langevin equations with TαS white noises 110
    5.7.2 Simulating density by CP approximation 115
    5.7.3 Simulating density by TFPDEs 116
  5.8 Conclusions 118

6 Numerical methods for SPDEs with additive multi-dimensional Lévy jump processes 121
  6.1 Literature review of parameterized dependence structures in multi-dimensional Gaussian processes 123
  6.2 Literature review of generalized FP equations 124
  6.3 Notation 125
  6.4 Diffusion model driven by a multi-dimensional Lévy jump process 126
  6.5 Simulating multi-dimensional Lévy pure jump processes 128
    6.5.1 LePage's series representation with radial decomposition of the Lévy measure 129
    6.5.2 Series representation with Lévy copula 131
  6.6 Generalized FP equation for SODEs with correlated Lévy jump processes and ANOVA decomposition of the joint PDF 142
  6.7 Heat equation driven by a bivariate Lévy jump process in LePage's representation 149
    6.7.1 Exact moments 149
    6.7.2 Simulating the moment statistics by PCM/S 151
    6.7.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation 155
    6.7.4 Simulating moment statistics by TFPDE and PCM/S 157
  6.8 Heat equation driven by a bivariate TS Clayton Lévy jump process 158
    6.8.1 Exact moments 158
    6.8.2 Simulating the moment statistics by PCM/S 162
    6.8.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation 164
    6.8.4 Simulating moment statistics by TFPDE and PCM/S 165
  6.9 Heat equation driven by 10-dimensional Lévy jump processes in LePage's representation 167
    6.9.1 Heat equation driven by 10-dimensional Lévy jump processes from MC/S 167
    6.9.2 Heat equation driven by 10-dimensional Lévy jump processes from PCM/S 169
    6.9.3 Simulating the joint PDF P(u1, u2, ..., u10) by the ANOVA decomposition of the generalized FP equation 171
    6.9.4 Simulating the moment statistics by 2D-ANOVA-FP with dimension d = 4, 6, 10, 14 183
  6.10 Conclusions 185

7 Summary and future work 189
  7.1 Summary 190
  7.2 Future work 192


List of Tables

4.1 For gPC with different orders P and WM with a fixed order of P = 3, Q = 2 in the reaction equation (4.23) with one Poisson RV (λ = 0.5, y0 = 1, k(ξ) = c0(ξ;λ)/2! + c1(ξ;λ)/3! + c2(ξ;λ)/4!, σ = 0.1, RK4 scheme with time step dt = 1e−4), we compare: (1) the computational complexity ratio to evaluate k(t, ξ)y(t;ω) between gPC and WM (upper); (2) the CPU time ratio to compute k(t, ξ)y(t;ω) between gPC and WM (lower). We simulated in Matlab on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz. 69

4.2 Computational complexity ratio to evaluate the u ∂u/∂x term in the Burgers equation with d RVs between WM and gPC, as C(P,Q)^d/(P+1)^{3d}: here we take the WM order as Q = P − 1 and gPC with order P, in different dimensions d = 2, 3, and 50. 83

5.1 MC/CP vs. MC/S: error l2u2(T) of the solution of Equation (5.1) versus the number of samples s with λ = 10 (upper) and λ = 1 (lower). T = 1, c = 0.1, α = 0.5, ε = 0.1, µ = 2 (upper and lower). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme in (5.22) with time step Δt = 1 × 10−5. In the CP approximation: RelTol = 1 × 10−8 for integration in U(δ). 102


List of Figures

2.1 Empirical CDF of the KL expansion RVs Y1, ..., YM with M = 10 KL expansion terms, for a centered Poisson process (Nt − λt) with λ = 10, Tmax = 1, s = 10000 samples, and N = 200 points on the time domain [0, 1]. 13

2.2 Exact sample path vs. sample path approximated by the KL expansion: when λ is smaller, the sample path is better approximated. (Brownian motion is the limiting case of a centered Poisson process with very large birth rate.) 14

2.3 Exact mean vs. mean by KL expansion: when λ is larger, the KL representation seems to be better. 14

2.4 Exact 2nd moment vs. 2nd moment by KL expansion with sampled coefficients. The 2nd moments are not as well approximated as the mean. 14

3.1 Orthogonality defined in (3.27) with respect to the polynomial order i up to 20 for Binomial distributions. 32

3.2 CPU time to evaluate orthogonality for Binomial distributions. 33

3.3 Minimum polynomial order i (vertical axis) such that orth(i) is greater than a threshold value. 34

3.4 Left: GENZ1 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ1. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed to compute the orthogonal polynomials. 42

3.5 Left: GENZ4 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ4. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed for numerical orthogonality. 43

3.6 Non-nested sparse grid points with respect to sparseness parameter k = 3, 4, 5, 6 for random variables ξ1, ξ2 ∼ Bino(10, 1/2), where the one-dimensional quadrature formula is based on the Gauss quadrature rule. 44

3.7 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2)], where ξ1 and ξ2 are two i.i.d. random variables associated with a distribution Bino(10, 1/2). Left: f1 is GENZ1; Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. 45


3.8 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2, ..., ξ8)], where ξ1, ..., ξ8 are eight i.i.d. random variables associated with a distribution Bino(10, 1/2). Left: f1 is GENZ1; Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. 45

3.9 p-convergence of PCM with respect to errors defined in equations (3.54) and (3.55) for the reaction equation with t = 1, y0 = 1. ξ is associated with a negative binomial distribution with c = 1/2 and β = 1. Orthogonal polynomials are generated by the Stieltjes method. 47

3.10 Left: exact solution of the KdV equation (3.65) at time t = 0, 1. Right: the pointwise error for the soliton at time t = 1. 49

3.11 p-convergence of PCM with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5 and σ = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: ξ ∼ Pois(10); Right: ξ ∼ Bino(n = 5, p = 1/2). aPC stands for arbitrary Polynomial Chaos, which is Polynomial Chaos with respect to an arbitrary measure. Orthogonal polynomials are generated by Fischer's method. 50

3.12 h-convergence of ME-PCM with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1.05, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Bino(n = 120, p = 1/2), with 200 Fourier collocation points on the spatial domain [−30, 30], where two collocation points are employed in each element. Orthogonal polynomials are generated by the Fischer method (left) and the Stieltjes method (right). 51

3.13 Adapted mesh with five elements with respect to the Pois(40) distribution. 52

3.14 p-convergence of ME-PCM on a uniform mesh and an adapted mesh with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Pois(40), with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: errors of the mean. Right: errors of the second moment. Orthogonal polynomials are generated by the Nowak method. 53

3.15 ξ1, ξ2 ∼ Bino(10, 1/2): convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. 54

3.16 ξ1 ∼ Bino(10, 1/2) and ξ2 ∼ N(0, 1): convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. 55

3.17 Convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.70), where t = 0.5, a = 0.5, x0 = −5, σi = 0.1 and ξi ∼ Bino(5, 1/2), i = 1, 2, ..., 8, with 300 Fourier collocation points on the spatial domain [−50, 50]. Orthogonal polynomials are generated by the Lanczos method. 56


4.1 Reaction equation with one Poisson RV ξ ∼ Pois(λ) (d = 1): errors versus final time T defined in (4.34) for different WM order Q in equation (4.27), with polynomial order P = 10, y0 = 1, λ = 0.5. We used the RK4 scheme with time step dt = 1e−4; k(ξ) = c0(ξ;λ)/2! + c1(ξ;λ)/3! + c2(ξ;λ)/4!, σ = 0.1 (left); k(ξ) = c0(ξ;λ)/0! + c1(ξ;λ)/3! + c2(ξ;λ)/6!, σ = 1 (right). 68

4.2 Reaction equation with five Poisson RVs ξ1,...,5 ∼ Pois(λ) (d = 5): error defined in (4.34) with respect to time, for different WM order Q, with parameters: λ = 1, σ = 0.5, y0 = 1, polynomial order P = 4, RK2 scheme with time step dt = 1e−3, and k(ξ1, ξ2, ..., ξ5, t) = ∑_{i=1}^{5} cos(it) c1(ξi) in equation (4.23). 70

4.3 Reaction equation with one Poisson RV ξ1 ∼ Pois(λ) and one Binomial RV ξ2 ∼ Bino(N, p) (d = 2): error defined in (4.34) with respect to time, for different WM order Q, with parameters: λ = 1, σ = 0.1, N = 10, p = 1/2, y0 = 1, polynomial order P = 10, RK4 scheme with time step dt = 1e−4, and k(ξ1, ξ2, t) = c1(ξ1)k1(ξ2) in equation (4.23). 71

4.4 Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): l2u2(T) error defined in (6.62) versus time, with respect to different WM order Q. Here we take in equation (4.32): polynomial expansion order P = 6, λ = 1, ν = 1/2, σ = 0.1, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 2e−4, and 100 Fourier collocation points on [−π, π]. 73

4.5 P-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus polynomial expansion order P, for different WM order Q, and by the probabilistic collocation method (PCM) with P + 1 points, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 5e−4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). 73

4.6 Q-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus WM order Q, for different polynomial order P, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 5e−4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). The dashed lines serve as a reference for the convergence rate. 74

4.7 Burgers equation with three Poisson RVs ξ1,2,3 ∼ Pois(λ) (d = 3): error defined in equation (6.62) with respect to time, for different WM order Q, with parameters: λ = 0.1, σ = 0.1, y0 = 1, ν = 1/100, polynomial order P = 2, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 2.5e−4. 76

4.8 Reaction equation with P-adaptivity and two Poisson RVs ξ1,2 ∼ Pois(λ) (d = 2): error defined in (4.34) with two Poisson RVs by computing the WM propagator in equation (4.27) with respect to time by the RK2 method with: fixed WM order Q = 1, y0 = 1, ξ1,2 ∼ Pois(1), a(ξ1, ξ2, t) = c1(ξ1;λ)c1(ξ2;λ), for fixed polynomial order P (dashed lines), for varied polynomial order P (solid lines), for σ = 0.1 (left), and σ = 1 (right). Adaptive criterion values are: l2err(t) ≤ 1e−8 (left), and l2err(t) ≤ 1e−6 (right). 77


4.9 Burgers equation with P-Q-adaptivity and one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): error defined in equation (6.62) by computing the WM propagator in equation (4.32) with the IMEX (RK2/Crank-Nicolson) method (λ = 1, ν = 1/2, time step dt = 2e−4). Fixed polynomial order P = 6, σ = 1, and Q is varied (left); fixed WM order Q = 3, σ = 0.1, and P is varied (right). Adaptive criterion value is: l2u2(T) ≤ 1e−10 (left and right). 78

4.10 Terms in ∑_{p=0}^{Q} ∑_{i=0}^{P} u_i (∂u_{k+2p−i}/∂x) K_{i,k+2p−i,p} for each PDE in the WM propagator for the Burgers equation with one RV in equation (4.38) are denoted by dots on the grids: here P = 4, Q = 1, 2, k = 0, 1, 2, 3, 4. Each grid represents a PDE in the WM propagator, labeled by k. Each dot represents a term in the sum ∑_{p=0}^{Q} ∑_{i=0}^{P} u_i (∂u_{k+2p−i}/∂x) K_{i,k+2p−i,p}. The small index next to the dot is for p, the x direction is the index i for u_i, and the y direction is the index k + 2p − i in ∂u_{k+2p−i}/∂x. The dots on the same diagonal line have the same index p. 81

4.11 The total number of terms of the form u_{m1,...,md} (∂/∂x)u_{k1+2p1−m1,...,kd+2pd−md} K_{m1,k1+2p1−m1,p1} ... K_{md,kd+2pd−md,pd} in the WM propagator for the Burgers equation with d RVs, as C(P,Q)^d: for dimensions d = 2 (left) and d = 3 (right). Here we assume P1 = ... = Pd = P and Q1 = ... = Qd = Q. 83

5.1 Empirical histograms of an IG subordinator (α = 1/2) simulated via the CP approximation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the jump truncation sizes are δ = 0.1 (left, dotted, CPU time 1450 s), δ = 0.02 (middle, dotted, CPU time 5710 s), and δ = 0.005 (right, dotted, CPU time 38531 s). The reference PDFs are plotted as red solid lines; the one-sample K-S test values are calculated for each plot; the RelTol of integration in U(δ) and bδ is 1 × 10−8. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab. 99

5.2 Empirical histograms of an IG subordinator (α = 1/2) simulated via the series representation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation is done on the time domain [0, 0.5] and contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the numbers of truncation terms in the series are Qs = 10 (left, dotted, CPU time 129 s), Qs = 100 (middle, dotted, CPU time 338 s), and Qs = 1000 (right, dotted, CPU time 2574 s). The reference PDFs are plotted as red solid lines; the one-sample K-S test values are calculated for each plot. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab. 99

5.3 PCM/CP vs. PCM/S: error l2u2(T) of the solution of Equation (5.1) versus the number of jumps Qcp (in PCM/CP) or Qs (in PCM/S) with λ = 10 (left) and λ = 1 (right). T = 1, c = 0.1, α = 0.5, ε = 0.1, µ = 2, Nx = 500 Fourier collocation points on [0, 2] (left and right). In PCM/CP: RelTol = 1 × 10−10 for integration in U(δ). In PCM/S: RelTol = 1 × 10−8 for the integration of E[((αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α})^2]. 107


5.4 PCM vs. MC: error l2u2(T) of the solution of Equation (5.1) versus the number of samples s obtained by MC/CP and PCM/CP with δ = 0.01 (left), and by MC/S with Qs = 10 and PCM/S (right). T = 1, c = 0.1, α = 0.5, λ = 1, ε = 0.1, µ = 2 (left and right). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2] (left and right); temporal discretization: first-order Euler scheme in (5.22) with time step Δt = 1 × 10−5 (left and right). In both MC/CP and PCM/CP: RelTol = 1 × 10−8 for integration in U(δ). 109

5.5 Zoomed-in density Pts(t, x) plots for the solution of Equation (5.2) at different times obtained from solving Equation (5.37) for α = 0.5 (left) and Equation (5.42) for α = 1.5 (right): σ = 0.4, x0 = 1, c = 1, λ = 10 (left); σ = 0.1, x0 = 1, c = 0.01, λ = 0.01 (right). We have Nx = 2000 equidistant spatial points on [−12, 12] (left); Nx = 2000 points on [−20, 20] (right). The time step is Δt = 1 × 10−4 (left) and Δt = 1 × 10−5 (right). The initial conditions are approximated by δD20 (left and right). 114

5.6 Density/CP vs. PCM/CP with the same δ: errors err1st and err2nd of the solution of Equation (5.2) versus time obtained by the density Equation (5.36) with CP approximation and PCM/CP in Equation (5.55). c = 0.5, α = 0.95, λ = 10, σ = 0.01, x0 = 1 (left); c = 0.01, α = 1.6, λ = 0.1, σ = 0.02, x0 = 1 (right). In the density/CP: RK2 with time step Δt = 2 × 10−3, 1000 Fourier collocation points on [−12, 12] in space, δ = 0.012, RelTol = 1 × 10−8 for U(δ), and initial condition as δD20 (left and right). In the PCM/CP: the same δ = 0.012 as in the density/CP. 116

5.7 TFPDE vs. PCM/CP: error err2nd of the solution of Equation (5.2) versus time with λ = 10 (left) and λ = 1 (right). Problems we are solving: α = 0.5, c = 2, σ = 0.1, x0 = 1 (left and right). For PCM/CP: RelTol = 1 × 10−8 for U(δ) (left and right). For the TFPDE: finite difference scheme in (5.47) with Δt = 2.5 × 10−5, Nx equidistant points on [−12, 12], initial condition given by δD40 (left and right). 118

5.8 Zoomed-in plots of the density Pts(x, T) obtained by solving the TFPDE (5.37) and the empirical histogram by MC/CP at T = 0.5 (left) and T = 1 (right): α = 0.5, c = 1, λ = 1, x0 = 1 and σ = 0.01 (left and right). In the MC/CP: sample size s = 10^5, 316 bins, δ = 0.01, RelTol = 1 × 10−8 for U(δ), time step Δt = 1 × 10−3 (left and right). In the TFPDE: finite difference scheme given in (5.47) with Δt = 1 × 10−5 in time, Nx = 2000 equidistant points on [−12, 12] in space, and the initial conditions are approximated by δD40 (left and right). We perform the one-sample K-S tests here to test how well the two methods match. 119

6.1 An illustration of the applications of multi-dimensional Levy jumpmodels in mathematical finance. . . . . . . . . . . . . . . . . . . . . 128

6.2 Three ways to correlate Lévy pure jump processes.
6.3 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1 and L2 components.


6.4 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1^{++} and L2^{++} components (only in the ++ corner), showing how the dependence structure changes with respect to the parameter τ in the Clayton family of copulas.

6.5 Trajectories of the components L1^{++}(t) (blue) and L2^{++}(t) (green), whose dependence is described by a Clayton copula with dependence parameter τ. Observe how the trajectories become more similar as τ increases.

6.6 Sample path of (L1, L2) with marginal Lévy measures given by Equation (6.14) and Lévy copula given by (6.13), with each component such as F^{++} given by a Clayton copula with parameter τ. Observe that when τ is larger, the 'flipping' motion is more symmetric, because jumps of the same size are equally likely to have the same sign or opposite signs.

6.7 Sample paths of bivariate tempered stable Clayton Lévy jump processes (L1, L2) simulated by the series representation in Equation (6.30). We simulate two sample paths for each value of τ.

6.8 An illustration of the three methods used in this work to solve the moment statistics of Equation (6.1).
6.9 An illustration of the three methods used in this work to solve the moment statistics of Equation (6.1).
6.10 An illustration of the three methods used in this work to solve the moment statistics of Equation (6.1).

6.11 PCM/S (probabilistic) vs. MC/S (probabilistic): error l2u2(t) of the solution of Equation (6.1) with a bivariate pure jump Lévy process whose Lévy measure in radial decomposition is given by Equation (6.9), versus the number of samples s obtained by MC/S and PCM/S (left) and versus the number of collocation points per RV obtained by PCM/S with a fixed number of truncations Q in Equation (6.10) (right). t = 1, c = 1, α = 0.5, λ = 5, µ = 0.01, NSR = 16.0% (left and right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10^{-3} (right).

6.12 PCM/series representation vs. exact: T = 1. The noise-to-signal (variance/mean) ratio is 4% at T = 1.

6.13 PCM/series d-convergence and Q-convergence at T = 1. The noise-to-signal (variance/mean) ratio is 4% at t = 1. The l2u2 error is defined as l2u2(t) = ||E_ex[u²(x, t; ω)] − E_num[u²(x, t; ω)]||_{L²([0,2])} / ||E_ex[u²(x, t; ω)]||_{L²([0,2])}.

6.14 MC vs. exact: T = 1. The moment statistics are evaluated numerically with an integration relative tolerance of 10^{-8}. With this set of parameters, the noise-to-signal (variance/mean) ratio is 4% at T = 1.

6.15 MC vs. exact: T = 2. The moment statistics are evaluated numerically with an integration relative tolerance of 10^{-8}. With this set of parameters, the noise-to-signal (variance/mean) ratio is 10% at T = 2.


6.16 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.41) (3D contour plot), joint histogram by MC/S (2D contour plot on the x–y plane), and horizontal and vertical slices (subfigures) at the peaks of the density surfaces from the FP equation and MC/S. Final time is t = 1 (left, NSR = 16.0%) and t = 1.5 (right). c = 1, α = 0.5, λ = 5, µ = 0.01. In MC/S: first-order Euler scheme with time step Δt = 1 × 10^{-3}, 200 bins in both the u1 and u2 directions, Q = 40, sample size s = 10^6. In FP: initial condition given by MC data at t0 = 0.5, RK2 scheme with time step Δt = 4 × 10^{-3}.

6.17 TFPDE (deterministic) vs. PCM/S (probabilistic): error l2u2(t) of the solution of Equation (6.1) with a bivariate pure jump Lévy process whose Lévy measure in radial decomposition is given by Equation (6.9), obtained by PCM/S in Equation (6.64) (stochastic approach) and by the TFPDE in Equation (6.41) (deterministic approach), versus time. α = 0.5, λ = 5, µ = 0.001 (left and right); c = 0.1 (left); c = 1 (right). In TFPDE: initial condition given by δ_G^{2000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.

6.18 Exact mean, variance, and NSR versus time. The noise-to-signal ratio is 10% at T = 0.5.

6.19 PCM/S (probabilistic) vs. MC/S (stochastic): error l2u2(t) of the solution of Equation (6.1) driven by a bivariate TS Clayton Lévy process with Lévy measure given in Section 1.2.2, versus the number of truncations Q in the series representation (6.32) by PCM/S (left) and versus the number of samples s in MC/S with the series representation (6.30) by computing Equation (6.59) (right). t = 1, α = 0.5, λ = 5, µ = 0.01, τ = 1 (left and right); c = 0.1, NSR = 10.1% (right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10^{-2} (right).

6.20 Q-convergence (with various λ) of PCM/S in Equation (6.64): α = 0.5, µ = 0.01; the RelTol of the integration of the moments of the jump sizes is 10^{-8}.

6.21 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.40) (3D contour plot), joint histogram by MC/S (2D contour plot on the x–y plane), and horizontal (left subfigure) and vertical (right subfigure) slices at the peaks of the density surfaces from the FP equation and MC/S. Final time t = 1 (left) and t = 1.5 (right). c = 0.5, α = 0.5, λ = 5, µ = 0.005, τ = 1 (left and right). In MC/S: first-order Euler scheme with time step Δt = 0.02, Q = 2 in the series representation (6.30), sample size s = 10^4; 40 bins in both the u1 and u2 directions (left); 20 bins in both directions (right). In FP: initial condition given by δ_G^{1000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.

6.22 TFPDE (deterministic) vs. PCM/S (stochastic): error l2u2(t) of the solution of Equation (6.1) driven by a bivariate TS Clayton Lévy process with Lévy measure given in Section 1.2.2, versus time, obtained by PCM/S in Equation (6.81) (stochastic approach) and the TFPDE (6.40) (deterministic approach). α = 0.5, λ = 5 (left and right); c = 0.05, µ = 0.001 (left); c = 1, µ = 0.005 (right). In TFPDE: initial condition given by δ_G^{1000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.


6.23 S-convergence in MC/S with 10-dimensional Lévy jump processes: difference in E[u²] (left) between different sample sizes s and s = 10^6 (as a reference). The heat equation (6.1) is driven by a 10-dimensional jump process with Lévy measure (6.9), solved by MC/S with the series representation (6.10). We show the L² norm of these differences versus s (right). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01, time step Δt = 4 × 10^{-3}, and Q = 10. The NSR at T = 1 is 6.62%.

6.24 Samples of (u1, u2) (left) and the joint PDF of (u1, u2, ..., u10) on the (u1, u2) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, Δt = 4 × 10^{-3} (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6.

6.25 Samples of (u9, u10) (left) and the joint PDF of (u1, u2, ..., u10) on the (u9, u10) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, Δt = 4 × 10^{-3} (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6.

6.26 First two moments of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with Lévy measure (6.9), obtained by MC/S with the series representation (6.10), at final time T = 0.5 (left) and T = 1 (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, Δt = 4 × 10^{-3} (first-order Euler scheme), Q = 10, and sample size s = 10^6.

6.27 Q-convergence in PCM/S with 10-dimensional Lévy jump processes: difference in E[u²] (left) between different series truncation orders Q and Q = 16 (as a reference). The heat equation (6.1) is driven by a 10-dimensional jump process with Lévy measure (6.9) and the series representation (6.10). We show the L² norm of these differences versus Q (right). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01. The NSR at T = 1 is 6.62%.

6.28 MC/S vs. PCM/S with 10-dimensional Lévy jump processes: difference between E[u²] computed from MC/S and that computed from PCM/S at final time T = 0.5 (left) and T = 1 (right). The heat equation (6.1) is driven by a 10-dimensional jump process with Lévy measure (6.9) and the series representation (6.10). c = 0.1, α = 0.5, λ = 10, µ = 0.01. In MC/S: time step Δt = 4 × 10^{-3}, Q = 10. In PCM/S: Q = 16.

6.29 The function in Equation (6.82) with d = 2 (top left and bottom left) and its ANOVA approximation with effective dimension two (top right and bottom right). A = 0.5, d = 2.
6.30 The function in Equation (6.82) with d = 2 (top left and bottom left) and its ANOVA approximation with effective dimension two (top right and bottom right). A = 0.1, d = 2.
6.31 The function in Equation (6.82) with d = 2 (top left and bottom left) and its ANOVA approximation with effective dimension two (top right and bottom right). A = 0.01, d = 2.


6.32 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump processes: the mean (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with Lévy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The L² norms of the differences in E[u] between the three methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10^{-4}. In 1D-ANOVA-FP: Δt = 4 × 10^{-3} in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: Δt = 4 × 10^{-3} in RK2, M = 5 elements in each direction, q² = 16 GLL points on each element. In PCM/S: Q = 10 in the series representation (6.10). Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^{-3}. NSR ≈ 18.24% at T = 1.

6.33 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump processes: the second moment (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with Lévy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The L² norms of the differences in E[u²] between the three methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10^{-4}. In 1D-ANOVA-FP: Δt = 4 × 10^{-3} in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: Δt = 4 × 10^{-3} in RK2, M = 5 elements in each direction, q² = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^{-3}. In PCM/S: Q = 10 in the series representation (6.10). NSR ≈ 18.24% at T = 1.

6.34 Evolution of the marginal distributions p_i(x_i, t) at final times t = 0.6, ..., 1. c = 1, α = 0.5, λ = 10, µ = 10^{-4}. Initial condition from MC: t0 = 0.5, s = 10^4, Δt = 4 × 10^{-3}, Q = 10. 1D-ANOVA-FP: RK2 with time step Δt = 4 × 10^{-3}, 30 elements with 4 GLL points on each element.

6.35 The mean E[u] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10^{-4}. Initial condition from MC: s = 10^4, Δt = 4 × 10^{-3}, Q = 10. 1D-ANOVA-FP: RK2 with Δt = 4 × 10^{-3}, 30 elements with 4 GLL points on each element.

6.36 The second moment E[u²] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10^{-4}. Initial condition from MC: s = 10^4, Δt = 4 × 10^{-3}, Q = 10. 1D-ANOVA-FP: RK2 with Δt = 4 × 10^{-3}, 30 elements with 4 GLL points on each element.

6.37 The second moment E[u²] at different final times by PCM (Q = 10) and by solving the 2D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10^{-4}. Initial condition from MC: s = 10^4, Δt = 4 × 10^{-3}, Q = 10. 2D-ANOVA-FP: RK2 with Δt = 4 × 10^{-3}, 30 elements with 4 GLL points on each element.

6.38 Left: sensitivity index defined in Equation (6.87) for each pair (i, j), j ≥ i. Right: sensitivity index defined in Equation (6.88) for each pair (i, j), j ≥ i. Both are computed from the MC data at t0 = 0.5 with s = 10^4 samples.


6.39 Error growth of 2D-ANOVA-FP in different dimensions d: the error growth l2u1rel(T; t0) in E[u] defined in Equation (6.91) versus final time T (left); the error growth l2u2rel(T; t0) in E[u²] defined in Equation (6.92) versus T (middle); l2u1rel(T = 1; t0) and l2u2rel(T = 1; t0) versus dimension d (right). We consider the diffusion equation (6.1) driven by a d-dimensional jump process with Lévy measure (6.9), computed by 2D-ANOVA-FP and PCM/S. c = 1, α = 0.5, µ = 10^{-4} (left, middle, right). In Equation (6.49): Δt = 4 × 10^{-3} in RK2, M = 30 elements, q = 4 GLL points on each element. In Equation (6.50): Δt = 4 × 10^{-3} in RK2, M = 5 elements in each direction, q² = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^{-3}, and Q = 16. In PCM/S: Q = 16 in the series representation (6.10). NSR ≈ 20.5% at T = 1 for all dimensions d = 2, 4, 6, 10, 14, 18. These runs were performed on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab.

7.1 Summary of the thesis.


Chapter One

Introduction


1.1 Motivation

Stochastic partial differential equations (SPDEs) are widely used for stochastic modeling in diverse applications in physics, engineering, biology, and many other fields, where the sources of uncertainty include random coefficients and stochastic forcing. Our work is motivated by two considerations: applications, and the shortcomings of past work.

The source of uncertainty can, in practice, be any non-Gaussian process. In many cases, the random parameters are only observed at discrete values, which implies that a discrete probability measure is more appropriate from the modeling point of view. More generally, random processes with jumps are of fundamental importance in stochastic modeling: stochastic-volatility jump-diffusion models in mathematical finance [14, 15, 26, 29, 30, 186], stochastic simulation algorithms for modeling diffusion, reaction, and taxis in biology [45], truncated Lévy flight models in turbulence [97, 118, 135, 173], quantum-jump models in physics [37], etc. This motivates our work on simulating SPDEs driven by discrete random variables (RVs). Nonlinear SPDEs with discrete RVs and jump processes are of practical use, since sources of stochastic excitation, including uncertain parameters and boundary/initial conditions, are typically observed at discrete values, and many complex systems of fundamental and industrial importance are significantly affected by the underlying fluctuations and variations in such random excitations.

An interesting class of uncertainty models is Lévy jump processes, such as tempered α-stable (TαS) processes. TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [97, 118, 135], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [29, 30]. The empirical distribution of asset prices is not always a stable or a normal distribution: its tail is heavier than that of a normal distribution and thinner than that of a stable distribution [22]. The TαS process was therefore introduced, as the CGMY model, to modify the Black–Scholes model. More details of white noise theory for Lévy jump processes, with applications to SPDEs and finance, can be found in [20, 134, 108, 109, 138]. Although one-dimensional (1D) jump models are constructed in finance with Lévy processes [16, 98, 112], many financial models require multi-dimensional Lévy jump processes with dependent components [35], such as basket option pricing [106], portfolio optimization [43], and risk scenarios for portfolios [35]. Multi-dimensional Gaussian models are widely applied in finance because of the simplicity of their dependence structures [148]; in some applications, however, we must take jumps in price processes into account [29, 30].

This work builds on previous work in the field of uncertainty quantification (UQ), which includes the generalized polynomial chaos (gPC) method, the multi-element generalized polynomial chaos (ME-gPC) method, the probabilistic collocation method (PCM), sparse collocation methods, analysis of variance (ANOVA), and many other variants (see, e.g., [9, 10, 55, 57, 63, 171] and references therein).

1.1.1 Computational limitations for UQ of nonlinear SPDEs

Numerically, nonlinear SPDEs with discrete processes are often solved by gPC, which involves a system of coupled deterministic nonlinear equations [184], or by the probabilistic collocation method (PCM) [55, 185, 192], which involves the corresponding nonlinear PDEs obtained at the collocation points. For stochastic processes with short correlation length, the number of RVs required to represent the process can be extremely large. Therefore, the gPC propagator for a nonlinear SPDE driven by such a process can involve a very large number of highly coupled equations.

1.1.2 Computational limitations for UQ of SPDEs driven by Lévy jump processes

For simulations of Lévy jump processes such as TαS processes, we do not know the distribution of the increments explicitly [35], but we may still simulate the trajectories of TαS processes by the random walk approximation [11]. However, the random walk approximation does not identify the times and sizes of the large jumps precisely [153, 154, 155, 156]. In the heavy-tailed case, large jumps contribute more than small jumps to functionals of a Lévy process. Therefore, in this case, we have mainly used two other ways to simulate the trajectories of a TαS process numerically: the compound Poisson (CP) approximation [35] and the series representation [154]. In the CP approximation, we replace the jumps smaller than a certain size δ by their expectation, and treat the remaining process with larger jumps as a CP process [35]. There are six different series representations of Lévy jump processes: the inverse Lévy measure method [49, 94], LePage's method [104], Bondesson's method [25], the thinning method [154], the rejection method [153], and the shot noise method [154, 155]. In each representation, however, the number of RVs involved is very large (on the order of 100). In this work, for TαS processes, we use the shot noise representation of Lt as the series representation method, because the tail of the Lévy measure of a TαS process does not have an explicit inverse [156]. Both the CP approximation and the series representation converge slowly when the jumps of the Lévy process are highly concentrated around zero; both, however, can be improved by replacing the small jumps with Brownian motions [7]. The α-stable distribution was introduced to model the empirical distribution of asset prices [116], replacing the normal distribution. In the past literature, the simulation of SDEs or functionals of TαS processes was mainly done via MC [142]; MC for functionals of TαS processes is possible after a change of measure that transforms TαS processes into stable processes [144].

1.2 Introduction to TαS Lévy jump processes

TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [97, 118, 135], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [29, 30]. Here, we consider a symmetric TαS process (Lt) as a pure jump Lévy martingale with characteristic triplet (0, ν, 0) [21, 157] (no drift and no Gaussian part). The Lévy measure is given by [35]¹:

ν(x) = c e^{−λ|x|} / |x|^{α+1},   0 < α < 2.   (1.1)

This Lévy measure can be interpreted as an Esscher transform [62] of the Lévy measure of a stable process, i.e., an exponential tilting. The parameter c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the process. Also, λ > 0 fixes the decay rate of the big jumps, while α determines the relative importance of the small jumps in the path of the process². The probability density of Lt at a given time is not available in closed form (except when α = 1/2³).

¹In a more general form, the Lévy measure is ν(x) = c₋ e^{−λ₋|x|}/|x|^{α+1} I_{x<0} + c₊ e^{−λ₊|x|}/|x|^{α+1} I_{x>0}; we may have different coefficients c₊, c₋, λ₊, λ₋ for the positive and the negative jump parts.
²In the case α = 0, Lt is the gamma process.
³See inverse Gaussian processes.


The characteristic exponent of Lt is [35]:

Φ(s) = t^{−1} log E[e^{isLt}] = 2Γ(−α)λ^α c[(1 − is/λ)^α − 1 + isα/λ],   α ≠ 1,   (1.2)

where Γ(·) is the Gamma function and E is the expectation. By taking derivatives of the characteristic exponent we obtain the mean and variance:

E[Lt] = 0,   Var[Lt] = 2tΓ(2 − α)cλ^{α−2}.   (1.3)
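The variance in Equation (1.3) also follows from Var[Lt] = t ∫_{R0} x² ν(dx) with the Lévy measure (1.1). As a sanity check, the sketch below (a minimal illustration, not code from the thesis; the values c = 1, λ = 5, α = 0.5 are arbitrary) compares a trapezoid-rule quadrature of 2c ∫_0^∞ x^{1−α} e^{−λx} dx with the closed form 2Γ(2 − α)cλ^{α−2}:

```python
import math

def tas_variance_quadrature(c, lam, alpha, t=1.0, x_max=50.0, n=200000):
    """Approximate Var[L_t] = t * int x^2 nu(dx) for the symmetric TaS Levy
    measure nu(x) = c exp(-lam|x|)/|x|^{alpha+1} by the trapezoid rule; by
    symmetry this is 2*c*t * int_0^inf x^{1-alpha} exp(-lam*x) dx.
    For 0 < alpha < 1 the integrand vanishes at x = 0."""
    h = x_max / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        f = x ** (1.0 - alpha) * math.exp(-lam * x) if x > 0 else 0.0
        total += f if 0 < i < n else 0.5 * f
    return 2.0 * c * t * total * h

def tas_variance_exact(c, lam, alpha, t=1.0):
    """Closed form Var[L_t] = 2 t Gamma(2 - alpha) c lam^{alpha - 2}, Eq. (1.3)."""
    return 2.0 * t * math.gamma(2.0 - alpha) * c * lam ** (alpha - 2.0)

if __name__ == "__main__":
    num = tas_variance_quadrature(c=1.0, lam=5.0, alpha=0.5)
    ref = tas_variance_exact(c=1.0, lam=5.0, alpha=0.5)
    print(num, ref)  # the two values should agree to several digits
```

The closed form comes from ∫_0^∞ x^{1−α} e^{−λx} dx = Γ(2 − α)λ^{α−2}, so the agreement is a consistency check on Equation (1.3) rather than new information.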

In order to derive the second moments of the exact solutions of Equations (5.1) and (5.2), we introduce the Ito isometry. The jump of Lt is defined by ΔLt = Lt − Lt−. We define the Poisson random measure N(t, U) as [78, 133, 137]:

N(t, U) = Σ_{0 ≤ s ≤ t} I_{ΔLs ∈ U},   U ∈ B(R0), U ⊂ R0.   (1.4)

Here R0 = R \ {0}, and B(R0) is the σ-algebra generated by the family of all Borel subsets U ⊂ R such that U ⊂ R0; I_A is an indicator function. The Poisson random measure N(t, U) counts the number of jumps of size ΔLs ∈ U up to time t. In order to introduce the Ito isometry, we define the compensated Poisson random measure Ñ [78] as:

Ñ(dt, dz) = N(dt, dz) − ν(dz)dt = N(dt, dz) − E[N(dt, dz)].   (1.5)

The TαS process Lt (as a martingale) can also be written as:

Lt = ∫_0^t ∫_{R0} z Ñ(dτ, dz).   (1.6)

For any t, let Ft be the σ-algebra generated by {Ls, N(ds, dz) : z ∈ R0, s ≤ t}. We define the filtration F = {Ft, t ≥ 0}. If a stochastic process θt(z), t ≥ 0, z ∈ R0, is Ft-adapted, we have the following Ito isometry [133]:

E[( ∫_0^T ∫_{R0} θt(z) Ñ(dt, dz) )²] = E[ ∫_0^T ∫_{R0} θt²(z) ν(dz) dt ].   (1.7)
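For a finite-activity integrand the isometry (1.7) can be checked directly by Monte Carlo. The sketch below is illustrative, not thesis code: it uses an arbitrary finite Lévy measure (jump rate Λ = 3 with standard normal jump sizes, so that exact simulation is possible, unlike the TαS case) and θt(z) = z, for which the left-hand side is E[L_T²] with L_T the compensated compound Poisson value, and the right-hand side is T Λ E[Z²]:

```python
import random
import statistics

def compensated_cp_sample(T, intensity, rng):
    """One sample of int_0^T int z Ntilde(dt, dz) for a compound Poisson process
    with rate `intensity` and N(0,1) jump sizes. The compensator intensity*T*E[Z]
    is zero here because the jump-size mean E[Z] is zero."""
    n_jumps = 0
    t = rng.expovariate(intensity)  # exponential inter-arrival times
    while t < T:
        n_jumps += 1
        t += rng.expovariate(intensity)
    return sum(rng.gauss(0.0, 1.0) for _ in range(n_jumps))

def second_moment_mc(T=1.0, intensity=3.0, n_samples=200000, seed=7):
    """Monte Carlo estimate of E[L_T^2], to be compared with T*intensity*E[Z^2]."""
    rng = random.Random(seed)
    return statistics.fmean(
        compensated_cp_sample(T, intensity, rng) ** 2 for _ in range(n_samples)
    )

if __name__ == "__main__":
    mc = second_moment_mc()
    exact = 1.0 * 3.0 * 1.0  # T * intensity * E[Z^2], per the Ito isometry (1.7)
    print(mc, exact)
```

This is exactly the mechanism used later to obtain second moments of solutions driven by TαS noise, with the finite measure replaced by ν in Equation (1.1).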

1.3 Organization of the thesis

In Chapter 2, we discuss four methods to simulate Lévy jump processes, providing preliminaries and background information to the reader: 1. the random walk approximation; 2. the Karhunen–Loève expansion; 3. the compound Poisson approximation; 4. the series representation.

In Chapter 3, five methods of generating orthogonal polynomial bases with respect to discrete measures are presented, followed by a discussion of the error of numerical integration. Numerical solutions of the stochastic reaction equation and the Korteweg–de Vries (KdV) equation, including adaptive procedures, are explained. Then we summarize the work. In the appendices, we provide more details about the deterministic KdV equation solver and the adaptive procedure.

In Chapter 4, we define the Wick–Malliavin (WM) expansion and derive the WM propagators for a stochastic reaction equation and a stochastic Burgers equation. We present several numerical results for SPDEs with one RV and with multiple RVs, including an adaptive procedure to control the error in time. We also compare the computational complexity of gPC and WM for the stochastic Burgers equation at the same level of accuracy, and we provide an iterative algorithm to generate the coefficients in the WM approximation.

In Chapter 5, we compare the CP approximation and the series representation of a TαS process. We solve a stochastic reaction-diffusion equation with TαS white noise via MC and PCM, each with either the CP approximation or the series representation of the TαS process. We simulate the density evolution for an overdamped Langevin equation with TαS white noise via the corresponding generalized FP equations, and we compare the statistics obtained from the FP equations with those from the MC and PCM methods. We also provide algorithms for the rejection method and for the simulation of CP processes, as well as the probability distributions used to simplify the series representation.

In Chapter 6, by MC, PCM, and FP, we solve the moment statistics of the solution of a heat equation driven by: (i) a 2D Lévy noise in LePage's series representation; (ii) a 2D Lévy noise described by Lévy copulas; and (iii) a 10D Lévy noise in LePage's series representation, where the FP equation is decomposed by the unanchored ANOVA decomposition. We also examine the error growth versus the dimension of the Lévy process, and we show how we simplify the multi-dimensional integrals in the FP equations into 1D and 2D integrals.

In Chapter 7, lastly, we summarize the scope of the SPDEs, the scope of the stochastic processes, and the methods we have investigated so far. We summarize the computational cost and accuracy of our numerical experiments, and we suggest feasible future work on methodology and applications.


Chapter Two

Simulation of Lévy jump processes


In general there are three ways to generate a Lévy process [154]: the random walk approximation, the series representation, and the compound Poisson (CP) approximation. The random walk approximation replaces the continuous-time process by a discrete random walk on a discrete time grid, provided the marginal distribution of the process is known. It is often used to simulate Lévy jump processes with large jumps, but it does not identify the times and sizes of the large jumps precisely [153, 154, 155, 156]. We also attempt to simulate a non-Gaussian process by the Karhunen–Loève (KL) expansion, by computing the covariance kernel and its eigenfunctions. In the CP approximation, we treat the jumps smaller than a certain size by their expectation, as a drift term, and the remaining process with large jumps as a CP process [35]. There are six different series representations of Lévy jump processes: the inverse Lévy measure method [49, 94], LePage's method [104], Bondesson's method [25], the thinning method [154], the rejection method [153], and the shot noise method [154, 155].
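To make the CP approximation concrete, the sketch below is a minimal, illustrative implementation (not the thesis code): it truncates the symmetric TαS Lévy measure ν(x) = c e^{−λ|x|}/|x|^{α+1} at |x| = δ, computes the jump intensity Λ(δ) = ∫_{|x|≥δ} ν(dx) by quadrature, and samples jump magnitudes by rejection from a shifted exponential proposal (propose X = δ + Exp(λ), accept with probability (δ/X)^{α+1}). Since the measure is symmetric, the small-jump compensator drift vanishes. The parameter values are arbitrary choices for illustration.

```python
import math
import random

def cp_intensity(c, lam, alpha, delta, x_max=50.0, n=200000):
    """Lambda(delta) = 2c * int_delta^inf exp(-lam*x)/x^{alpha+1} dx (trapezoid rule)."""
    h = (x_max - delta) / n
    total = 0.0
    for i in range(n + 1):
        x = delta + i * h
        f = math.exp(-lam * x) / x ** (alpha + 1.0)
        total += 0.5 * f if i in (0, n) else f
    return 2.0 * c * total * h

def sample_jump_size(lam, alpha, delta, rng):
    """Jump magnitude with density proportional to exp(-lam*x)/x^{alpha+1} on
    [delta, inf): propose X = delta + Exp(lam); the proposal-to-target ratio is
    maximized at x = delta, giving acceptance probability (delta/X)^{alpha+1}."""
    while True:
        x = delta + rng.expovariate(lam)
        if rng.random() < (delta / x) ** (alpha + 1.0):
            return x

def cp_path_value(c, lam, alpha, delta, T, rng):
    """Value at time T of the CP approximation of the symmetric TaS process:
    Poisson(Lambda(delta)*T) jumps with symmetric random signs; the symmetric
    small-jump compensator is zero, so no drift correction is needed."""
    rate = cp_intensity(c, lam, alpha, delta)
    total = 0.0
    t = rng.expovariate(rate)  # exponential inter-arrival times
    while t < T:
        size = sample_jump_size(lam, alpha, delta, rng)
        total += size if rng.random() < 0.5 else -size
        t += rng.expovariate(rate)
    return total

if __name__ == "__main__":
    rng = random.Random(0)
    print(cp_intensity(1.0, 5.0, 0.5, 0.1))
    print(cp_path_value(1.0, 5.0, 0.5, 0.1, 1.0, rng))
```

Decreasing the cutoff δ makes the approximation more faithful but increases the jump intensity Λ(δ), which is the slow-convergence trade-off discussed in Section 1.1.2.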

2.1 Random walk approximation to Poisson processes

For a Lévy jump process Lt on a fixed time grid [t0, t1, t2, ..., tN], we may approximate Lt by L̂t = Σ_{i=1}^{N} Xi I_{ti ≤ t}. When the marginal distribution of Lt is known, each Xi is distributed as L_{ti − ti−1} (by stationarity of the increments). Therefore, on the fixed time grid, we may generate the RVs Xi by sampling from the known distribution. When Lt is composed of large jumps with low intensity (rate of jumps), this can be a good approximation. However, we are mostly interested in Lévy jump processes with infinite activity (high rates of jumps), so this is not a good approximation for the kind of processes we consider here, such as tempered α-stable processes.
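For a process whose marginal distribution is known, the approximation above is straightforward to implement. The sketch below is illustrative (not thesis code): it builds the random walk approximation of a gamma process, the α = 0 limiting case noted in Section 1.2, whose increments over a step Δt are Gamma(aΔt, θ)-distributed; the shape and scale values are arbitrary.

```python
import random

def gamma_random_walk(a, theta, T, n_steps, rng):
    """Random walk approximation L_{t_i} = sum_{j <= i} X_j on a fixed grid,
    with independent increments X_j ~ Gamma(a*dt, theta) (stationary increments)."""
    dt = T / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gammavariate(a * dt, theta))
    return path

if __name__ == "__main__":
    rng = random.Random(0)
    a, theta, T = 2.0, 1.5, 1.0
    # Monte Carlo check of the endpoint mean: E[L_T] = a * theta * T
    est = sum(gamma_random_walk(a, theta, T, 100, rng)[-1] for _ in range(5000)) / 5000
    print(est)  # close to a * theta * T = 3.0
```

Note that this reproduces the marginal law on the grid points but, as stated above, it does not locate the times and sizes of individual large jumps.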

2.2 KL expansion for Poisson processes

Let us first take a Poisson process N(t;ω) with intensity λ on a computational time domain [0, T] as an example. We mimic the KL expansion for Gaussian processes in order to simulate non-Gaussian processes such as Poisson processes.

• First we calculate the covariance kernel (assuming t′ > t):

Cov(N(t;ω), N(t′;ω)) = E[N(t;ω)N(t′;ω)] − E[N(t;ω)]E[N(t′;ω)]
= E[N(t;ω)²] + E[N(t;ω)]E[N(t′ − t;ω)] − E[N(t;ω)]E[N(t′;ω)]
= λt,   t′ > t.   (2.1)

Therefore, the covariance kernel is

Cov(N(t;ω), N(t′;ω)) = λ(t ∧ t′).   (2.2)

• The eigenvalues and eigenfunctions of this kernel are:

e_k(t) = √2 sin((k − 1/2)πt)   (2.3)

and

λ_k = 1 / ((k − 1/2)²π²),   (2.4)

where k = 1, 2, 3, ...

• The stochastic process Nt, approximated by a finite number of terms in the KL expansion, can be written as:

N(t;ω) ≈ λt + Σ_{i=1}^{M} √λ_i Y_i e_i(t),   (2.5)

where

∫_0^1 e_k²(t) dt = 1   (2.6)

and

∫_0^T e_k²(t) dt = T − sin[T(1 − 2k)π] / (π(1 − 2k)),   (2.7)

and the e_k are orthogonal.

• The distribution of Y_k can be calculated as follows. Given a sample path ω ∈ Ω,

⟨N(t;ω) − λt, e_k(t)⟩ = (Y_k √λ / (π(k − 1/2))) ⟨e_k(t), e_k(t)⟩
= 2Y_k √λ [T(2k − 1)π − sin((2k − 1)πT)] / (π²(2k − 1)²)
= ⟨N(t;ω), e_k(t)⟩ − (√2λ/π²)[−2πT cos(πT/2) + 4 sin(πT/2)].   (2.8)

Therefore,

Y_k = π²(2k − 1)² [⟨N(t;ω), e_k(t)⟩ − (√2λ/π²)(−2πT cos(πT/2) + 4 sin(πT/2))] / (2√λ [T(2k − 1)π − sin((2k − 1)πT)]).   (2.9)

From each sample path ω, we can calculate the values of Y_1, ..., Y_M; in this way the distribution of Y_1, ..., Y_M can be sampled. Numerically, if we simulate a large enough number of samples of a Poisson process (by simulating the jump times and jump sizes separately), we obtain the empirical distribution of the RVs Y_1, ..., Y_M.

Figure 2.1: Empirical CDF of the KL-expansion RVs Y_1, ..., Y_M with M = 10 KL expansion terms, for a centered Poisson process (Nt − λt) with λ = 10, Tmax = 1, s = 10000 samples, and N = 200 points on the time domain [0, 1].

• Now let us see how well the sample paths of the Poisson process Nt are approximated by the KL expansion (Figure 2.2).

• Now let us see how well the mean of the Poisson process Nt is approximated by the KL expansion (Figure 2.3).

• Now let us see how well the second moment of the Poisson process Nt is approximated by the KL expansion (Figure 2.4).
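The eigenpairs in Equations (2.3)–(2.4) are those of the kernel min(t, t′) on [0, 1], with the factor λ carried by the coefficients √λ Y_k in Equation (2.5). A quick Mercer-series check of this claim (an illustrative sketch, not thesis code; the evaluation points and truncation level are arbitrary):

```python
import math

def kl_kernel_partial_sum(t, s, n_terms):
    """Partial Mercer sum sum_k lambda_k e_k(t) e_k(s) with
    e_k(t) = sqrt(2) sin((k - 1/2) pi t) and lambda_k = 1/((k - 1/2)^2 pi^2),
    which converges to the covariance kernel min(t, s) on [0, 1]."""
    total = 0.0
    for k in range(1, n_terms + 1):
        w = (k - 0.5) * math.pi
        # lambda_k * e_k(t) * e_k(s) = (2 / w^2) sin(w t) sin(w s)
        total += (2.0 / w ** 2) * math.sin(w * t) * math.sin(w * s)
    return total

if __name__ == "__main__":
    approx = kl_kernel_partial_sum(0.3, 0.7, 2000)
    print(approx)  # converges to min(0.3, 0.7) = 0.3
```

The truncation error of this series decays like O(1/M), which is consistent with the slow convergence of the KL representation observed for the second moment in Figure 2.4.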

2.3 Compound Poisson approximation to Lévy jump processes

Let us take a tempered α-stable (TαS) process as an example here. TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [97, 118, 135], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [29, 30]. Here, we consider a symmetric TαS process (Lt) as a pure jump Lévy martingale with characteristic triplet (0, ν, 0) [21, 157] (no drift


Figure 2.2: Exact sample path vs. sample path approximated by the KL expansion (panels: λ = 50 and λ = 1; T_max = 5; 10 expansion terms): when λ is smaller, the sample path is better approximated. (Brownian motion is the limiting case of a centered Poisson process with a very large birth rate.)

Figure 2.3: Exact mean vs. mean by KL expansion (panels: λ = 50 and λ = 1; T_max = 5; 10 expansion terms; 200 samples): when λ is larger, the KL representation seems to be better.

Figure 2.4: Exact 2nd moment vs. 2nd moment by KL expansion with sampled coefficients (panels: λ = 50 and λ = 1; T_max = 5; 10 expansion terms; 200 samples). The 2nd moments are not as well approximated as the mean.


and no Gaussian part). The Lévy measure is given by [35]¹:

ν(x) = c e^{−λ|x|} / |x|^{α+1},  0 < α < 2.   (2.10)

This Lévy measure can be interpreted as an Esscher transformation [62] of the Lévy measure of a stable process, with exponential tilting. The parameter c > 0 alters the intensity of jumps of all given sizes; it changes the time scale of the process. Also, λ > 0 fixes the decay rate of the big jumps, while α determines the relative importance of the smaller jumps in the path of the process.² The probability density of L_t at a given time is not available in closed form (except when α = 1/2³).

The characteristic exponent of L_t is [35]:

Φ(s) = t^{−1} log E[e^{isL_t}] = 2Γ(−α) λ^α c [(1 − is/λ)^α − 1 + isα/λ],  α ≠ 1,   (2.11)

where Γ(x) is the Gamma function and E is the expectation. By taking derivatives of the characteristic exponent we obtain the mean and variance:

E[L_t] = 0,  Var[L_t] = 2tΓ(2 − α) c λ^{α−2}.   (2.12)

In the CP approximation, we simulate the jumps larger than δ as a CP process and replace the jumps smaller than δ by their expectation, as a drift term [35]. Here we explain the method for approximating a TαS subordinator X_t (without a Gaussian part and a drift) with the Lévy measure ν(x) = c e^{−λx} x^{−(α+1)} I_{x>0} (positive jumps only); this method can be generalized to a TαS process with both positive and negative jumps.

¹In a more general form, the Lévy measure is ν(x) = c_− e^{−λ_−|x|} |x|^{−(α+1)} I_{x<0} + c_+ e^{−λ_+|x|} |x|^{−(α+1)} I_{x>0}; we may have different coefficients c_+, c_−, λ_+, λ_− for the positive and the negative jump parts.
²In the case α = 0, L_t is the gamma process.
³See inverse Gaussian processes.


The CP approximation X_t^δ of this TαS subordinator X_t is:

X_t ≈ X_t^δ = Σ_{s≤t} ΔX_s I_{ΔX_s ≥ δ} + E[Σ_{s≤t} ΔX_s I_{ΔX_s < δ}] = Σ_{i=1}^{∞} J_i^δ I_{T_i ≤ t} + b^δ t ≈ Σ_{i=1}^{Q_cp} J_i^δ I_{T_i ≤ t} + b^δ t.   (2.13)

We introduce Q_cp here as the number of jumps that occur before time t. The first term Σ_{i=1}^{∞} J_i^δ I_{T_i ≤ t} is a compound Poisson process with jump intensity

U(δ) = c ∫_δ^∞ e^{−λx} x^{−(α+1)} dx   (2.14)

and jump size distribution p^δ(x) = (1/U(δ)) c e^{−λx} x^{−(α+1)} I_{x≥δ} for the J_i^δ. The jump size random variables (RVs) J_i^δ are generated via the rejection method [35, 41].

The distribution p^δ(x) can be bounded by

p^δ(x) ≤ [δ^{−α} e^{−λδ} / (α U(δ))] f^δ(x),   (2.15)

where f^δ(x) = α δ^α x^{−(α+1)} I_{x≥δ} is a Pareto density. The rejection algorithm to generate RVs with distribution p^δ(x) is [35, 41]:

• REPEAT

• Generate RVs W and V: independent and uniformly distributed on [0, 1]

• Set X = δ W^{−1/α}

• Set T = f^δ(X) δ^{−α} e^{−λδ} / [p^δ(X) α U(δ)]

• UNTIL V T ≤ 1

• RETURN X.
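A minimal Python sketch of this rejection loop (names are mine). Because the constants c and U(δ) cancel in the ratio T, the acceptance test V·T ≤ 1 reduces to V ≤ e^{−λ(X−δ)}:

```python
import math
import numpy as np

def sample_jump_sizes(n, delta, lam, alpha, rng=None):
    """Rejection sampling of jump sizes with density proportional to
    exp(-lam*x) / x**(alpha + 1) on [delta, inf), using the Pareto envelope
    f_delta(x) = alpha * delta**alpha / x**(alpha + 1)."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(n)
    for i in range(n):
        while True:
            w = 1.0 - rng.uniform()            # in (0, 1], avoids w == 0
            v = rng.uniform()
            x = delta * w ** (-1.0 / alpha)    # Pareto proposal, x >= delta
            # the acceptance test V*T <= 1 simplifies to:
            if v <= math.exp(-lam * (x - delta)):
                out[i] = x
                break
    return out
```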

Here, T_i is the i-th jump arrival time of a Poisson process with intensity U(δ). The accuracy of the CP approximation method can be improved by replacing the smaller jumps by a Brownian motion [7] when the Lévy measure grows quickly near zero. The second term functions as a drift term, b^δ t, resulting from truncating the smaller jumps. The drift is b^δ = c ∫_0^δ e^{−λx} x^{−α} dx. This integral diverges when α ≥ 1; therefore, the CP approximation method only applies to TαS processes with 0 < α < 1. In this work, both the intensity U(δ) and the drift b^δ are calculated via numerical integration with Gauss quadrature rules [59] with a specified relative tolerance (RelTol).⁴ In general, there are two algorithms to simulate a compound Poisson process [35]: the first is to simulate the jump times T_i by exponentially distributed RVs and take the number of jumps Q_cp as large as possible; the second is to first generate and fix the number of jumps, and then generate the jump times by uniformly distributed RVs on [0, t]. Algorithms for simulating a CP process (of the second kind) with the intensity and the jump size distribution in explicit form are known on a fixed time grid [35]. Here we describe how to simulate the trajectories of a CP process with intensity U(δ) and jump size distribution ν^δ(x)/U(δ) on a simulation time domain [0, T], evaluated at time t. The algorithm to generate sample paths of a CP process without a drift is:

⁴The RelTol of a numerical integration is defined as |q − Q|/|Q|, where q is the computed value of the integral and Q is the unknown exact value.


• Simulate an RV N from the Poisson distribution with parameter U(δ)T, as the total number of jumps on the interval [0, T].

• Simulate N independent RVs T_i, uniformly distributed on the interval [0, T], as the jump times.

• Simulate N jump sizes Y_i with distribution ν^δ(x)/U(δ).

• Then the trajectory at time t is given by Σ_{i=1}^N I_{T_i ≤ t} Y_i.
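The four steps above can be sketched as follows (a hedged illustration with names of my choosing; jump_sampler stands for any jump-size generator, e.g., the rejection sampler):

```python
import numpy as np

def cp_path(t_eval, T, U_delta, jump_sampler, b_delta=0.0, rng=None):
    """One compound-Poisson trajectory with intensity U_delta on [0, T],
    evaluated at the times t_eval, plus the drift b_delta*t that replaces
    the truncated small jumps."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(U_delta * T)          # total number of jumps on [0, T]
    Ti = rng.uniform(0.0, T, n)           # jump times, uniform on [0, T]
    Yi = jump_sampler(n, rng)             # jump sizes
    t_eval = np.asarray(t_eval, dtype=float)
    X = (Yi[None, :] * (Ti[None, :] <= t_eval[:, None])).sum(axis=1)
    return X + b_delta * t_eval
```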

In order to simulate the sample paths of a symmetric TαS process with a Levy

measure given in Equation (5.3), we generate two independent TαS subordinators

via the CP approximation and subtract one from the other. The accuracy of the CP

approximation is determined by the jump truncation size δ.

The numerical experiments for this method will be given in Chapter 5.

2.4 Series representation of Lévy jump processes

Let {ε_j}, {η_j}, and {ξ_j} be sequences of i.i.d. RVs such that P(ε_j = ±1) = 1/2, η_j ∼ Exponential(λ), and ξ_j ∼ Uniform(0, 1). Let {Γ_j} be the arrival times of a Poisson process with rate one, and let {U_j} be i.i.d. uniform RVs on [0, T]. Then a TαS process L_t with the Lévy measure given in Equation (5.3) can be represented as [156]:

L_t = Σ_{j=1}^{+∞} ε_j [(αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}] I_{U_j ≤ t},  0 ≤ t ≤ T.   (2.16)

The series in Equation (5.14) converges almost surely, uniformly in t [153]. In numerical simulations, we truncate the series in Equation (5.14) at Q_s terms. The accuracy of the series representation approximation is determined by the truncation level Q_s. The numerical experiments for this method will be given in Chapter 5.
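A sketch of the truncated series (2.16) in Python (names are mine; note that η_j ∼ Exponential(λ) means rate λ, i.e., scale 1/λ). The sample variance of L_T can be checked against equation (2.12):

```python
import numpy as np

def tas_series(t_grid, T, c, lam, alpha, Qs=5000, rng=None):
    """Truncated series representation of a symmetric TaS process:
    L_t ~ sum_{j<=Qs} eps_j * min((alpha*Gam_j/(2cT))**(-1/alpha),
                                  eta_j * xi_j**(1/alpha)) * 1{U_j <= t}."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.choice([-1.0, 1.0], Qs)
    Gam = np.cumsum(rng.exponential(1.0, Qs))   # rate-one Poisson arrival times
    eta = rng.exponential(1.0 / lam, Qs)        # Exponential(lam), scale = 1/lam
    xi = rng.uniform(0.0, 1.0, Qs)
    U = rng.uniform(0.0, T, Qs)
    jumps = eps * np.minimum((alpha * Gam / (2.0 * c * T)) ** (-1.0 / alpha),
                             eta * xi ** (1.0 / alpha))
    t_grid = np.asarray(t_grid, dtype=float)
    return (jumps[None, :] * (U[None, :] <= t_grid[:, None])).sum(axis=1)
```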

Chapter Three

Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs

We develop a multi-element probabilistic collocation method (ME-PCM) for arbi-

trary discrete probability measures with finite moments and apply it to solve partial

differential equations with random parameters. The method is based on numeri-

cal construction of orthogonal polynomial bases in terms of a discrete probability

measure. To this end, we compare the accuracy and efficiency of five different con-

structions. We develop an adaptive procedure for decomposition of the parametric

space using the local variance criterion. We then couple the ME-PCM with sparse

grids to study the Korteweg-de Vries (KdV) equation subject to random excitation,

where the random parameters are associated with either a discrete or a continuous

probability measure. Numerical experiments demonstrate that the proposed algo-

rithms lead to high accuracy and efficiency for hybrid (discrete-continuous) random

inputs.

3.1 Notation

µ, ν — probability measures of discrete RVs
ξ — discrete RV
P_i(ξ) — generalized polynomial chaos basis function
δ_ij — Kronecker delta
S(µ) — support of the measure µ of the discrete RV ξ
N — size of the support S(µ)
α_i, β_i — coefficients in the three-term recurrence relation of the orthogonal polynomial basis
m_k — the k-th moment of the RV ξ
Γ — integration domain of the discrete RV
W^{m,p}(Γ) — Sobolev space
h — size of an element in multi-element integration
N_es — number of elements in multi-element integration
d — number of quadrature points in the Gauss quadrature rule
B_i — the i-th element in the multi-element integration
σ_i² — local variance


3.2 Generation of orthogonal polynomials for discrete measures

Let µ be a positive measure with infinite support S(µ) ⊂ R and finite moments of all orders, i.e.,

∫_S ξ^n µ(dξ) < ∞,  ∀n ∈ N_0,   (3.1)

where N_0 = {0, 1, 2, ...}, and the integral is defined in the Riemann–Stieltjes sense. There exists a unique [59] set of orthogonal monic polynomials {P_i}_{i=0}^∞ with respect to the measure µ such that

∫_S P_i(ξ) P_j(ξ) µ(dξ) = δ_ij γ_i^{−2},  i, j = 0, 1, 2, ...,   (3.2)

where the γ_i ≠ 0 are constants. In particular, the orthogonal polynomials satisfy a three-term recurrence relation [33, 48]:

P_{i+1}(ξ) = (ξ − α_i) P_i(ξ) − β_i P_{i−1}(ξ),  i = 0, 1, 2, ...   (3.3)

The uniqueness of the set of orthogonal polynomials with respect to µ can also be derived by constructing such a set of polynomials starting from P_0(ξ) = 1. We typically choose P_{−1}(ξ) = 0 and β_0 to be a constant. The full set of orthogonal polynomials is then completely determined by the coefficients α_i and β_i.

If the support S(µ) is a finite set of data points {τ_1, ..., τ_N}, i.e., µ is a discrete measure defined as

µ = Σ_{i=1}^N λ_i δ_{τ_i},  λ_i > 0,   (3.4)


the corresponding orthogonality condition is finite, up to order N − 1 [51, 59], i.e.,

∫_S P_i²(ξ) µ(dξ) = 0,  i ≥ N,   (3.5)

where δ_{τ_i} denotes the point measure at τ_i, although by the recurrence relation (3.3) we can generate polynomials of any order greater than N − 1. Furthermore, one way to test whether the coefficients α_i are well approximated is to check the following relation [50, 51]:

Σ_{i=0}^{N−1} α_i = Σ_{i=1}^N τ_i.   (3.6)

One can prove that the coefficient of ξ^{N−1} in P_N(ξ) is −Σ_{i=0}^{N−1} α_i, and that P_N(ξ) = (ξ − τ_1)···(ξ − τ_N); therefore equation (3.6) holds [51].

We subsequently examine five different approaches to generating orthogonal polynomials for a discrete measure and point out the pros and cons of each method. In the Nowak method, the coefficients of the polynomials are derived directly by solving a linear system; in the other four methods, we generate the coefficients α_i and β_i by four different numerical methods, and the coefficients of the polynomials follow from the recurrence relation in equation (3.3).

3.2.1 Nowak method

Define the k-th order moment as

m_k = ∫_S ξ^k µ(dξ),  k = 0, 1, ..., 2d − 1.   (3.7)

The coefficients of the d-th order polynomial P_d(ξ) = Σ_{i=0}^d a_i ξ^i are determined by the following linear system [139]:

\begin{pmatrix}
m_0 & m_1 & \cdots & m_d \\
m_1 & m_2 & \cdots & m_{d+1} \\
\vdots & \vdots & & \vdots \\
m_{d-1} & m_d & \cdots & m_{2d-1} \\
0 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{d-1} \\ a_d \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix},   (3.8)

where the (d + 1) × (d + 1) moment matrix needs to be inverted.

Although this method is straightforward to implement, it is well known that the matrix may be ill-conditioned when d is very large.

The total computational complexity of solving the linear system in equation (3.8) to generate P_d(ξ) is O(d²).¹
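A minimal NumPy sketch of this construction (function names are mine): assemble the linear system (3.8) from the moments of the discrete measure and solve for the coefficients of P_d.

```python
import numpy as np

def nowak_poly(tau, w, d):
    """Coefficients a_0..a_d of the orthogonal polynomial P_d (with a_d = 1)
    for the discrete measure sum_i w_i delta_{tau_i}, via system (3.8)."""
    m = np.array([(w * tau**k).sum() for k in range(2 * d)])  # m_0..m_{2d-1}
    A = np.zeros((d + 1, d + 1))
    for r in range(d):                 # row r enforces orthogonality to xi^r
        A[r] = m[r:r + d + 1]
    A[d, d] = 1.0                      # last row enforces a_d = 1
    rhs = np.zeros(d + 1)
    rhs[d] = 1.0
    return np.linalg.solve(A, rhs)
```

For the measure with equal weights on {−1, 0, 1}, this yields P_2(ξ) = ξ² − 2/3.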

3.2.2 Stieltjes method

The Stieltjes method is based on the following formulas for the coefficients α_i and β_i [59]:

α_i = ∫_S ξ P_i²(ξ) µ(dξ) / ∫_S P_i²(ξ) µ(dξ),  β_i = ∫_S P_i²(ξ) µ(dξ) / ∫_S P_{i−1}²(ξ) µ(dξ),  i = 0, 1, ..., d − 1.   (3.9)

For a discrete measure, the Stieltjes method is quite stable [56, 59]. When the discrete measure has a finite number of points N in its support, the above formulas are exact. However, if we use the Stieltjes method on a discrete measure with infinite support, e.g., the Poisson distribution, we approximate the measure by a discrete

¹Here we notice that the moment matrix in equation (3.8) is highly structured (of Hankel type); therefore the computational complexity of solving this linear system is O(d²) [64, 172].

measure with a finite number of points; therefore, each time we iterate for α_i and β_i, the error accumulates by neglecting the points with smaller weights. In that case, α_i and β_i may suffer from inaccuracy when i is close to N [59].

The computational complexity of the integral evaluations in equation (3.9) is of order O(N).
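A sketch of the Stieltjes iteration (3.9) combined with the recurrence (3.3) for a discrete measure (names are mine; the convention β_0 = (1, 1)_µ is assumed):

```python
import numpy as np

def stieltjes(tau, w, d):
    """Recurrence coefficients alpha_0..alpha_{d-1}, beta_0..beta_{d-1} for
    the discrete measure sum_i w_i delta_{tau_i}, by the Stieltjes procedure."""
    alpha, beta = np.zeros(d), np.zeros(d)
    Pm1, P = np.zeros_like(tau), np.ones_like(tau)   # P_{-1}, P_0 on the support
    beta[0] = w.sum()                                # convention: beta_0 = (1,1)_mu
    norm_prev = 1.0
    for i in range(d):
        norm = (w * P * P).sum()                     # (P_i, P_i)_mu
        alpha[i] = (w * tau * P * P).sum() / norm
        if i > 0:
            beta[i] = norm / norm_prev
        norm_prev = norm
        Pm1, P = P, (tau - alpha[i]) * P - beta[i] * Pm1   # recurrence (3.3)
    return alpha, beta
```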

3.2.3 Fischer method

Fischer proposed a procedure for generating the coefficients α_i and β_i by adding data points one by one [50, 51]. Assume that the coefficients α_i and β_i are known for the discrete measure µ = Σ_{i=1}^N λ_i δ_{τ_i}. Then, if we add another data point τ to the discrete measure µ and define a new discrete measure ν = µ + λδ_τ, with λ the weight of the newly added data point τ, the following relations hold [50, 51]:

α_i^ν = α_i + λ γ_i² P_i(τ) P_{i+1}(τ) / [1 + λ Σ_{j=0}^{i} γ_j² P_j²(τ)] − λ γ_{i−1}² P_i(τ) P_{i−1}(τ) / [1 + λ Σ_{j=0}^{i−1} γ_j² P_j²(τ)],   (3.10)

β_i^ν = β_i [1 + λ Σ_{j=0}^{i−2} γ_j² P_j²(τ)] [1 + λ Σ_{j=0}^{i} γ_j² P_j²(τ)] / [1 + λ Σ_{j=0}^{i−1} γ_j² P_j²(τ)]²,   (3.11)

for i < N, and

α_N^ν = τ − λ γ_{N−1}² P_N(τ) P_{N−1}(τ) / [1 + λ Σ_{j=0}^{N−1} γ_j² P_j²(τ)],   (3.12)

β_N^ν = λ γ_{N−1}² P_N²(τ) [1 + λ Σ_{j=0}^{N−2} γ_j² P_j²(τ)] / [1 + λ Σ_{j=0}^{N−1} γ_j² P_j²(τ)]²,   (3.13)

where α_i^ν and β_i^ν denote the coefficients in the three-term recurrence formula (3.3) for the measure ν. The numerical stability of this algorithm depends on the stability of the recurrence relations above and on the order in which the data points are added [51]. For example, the data points can be added in either ascending or descending order. Fischer's method essentially updates the available coefficients α_i and β_i using the information carried by the new data point. Thus, this approach is very practical when an empirical distribution of the stochastic inputs is altered by an additional possible value. For example, suppose we have already generated d probability collocation points with respect to a given discrete measure with N data points, and we want to add another data point to the discrete measure to generate d new probability collocation points with respect to the new measure. Using the Nowak method, we would need to reconstruct the moment matrix and invert it again with N + 1 data points; with Fischer's method, we only need to update the 2d values of α_i and β_i, which is more convenient.

Building a new sequence of α_i, β_i by adding the data points one at a time requires updating the coefficients γ_i², i = 0, ..., d, N times; hence the total computational complexity is O(N²).
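Equations (3.10)–(3.13) can be sketched as a single update routine (names mine; assumed conventions: β_0 is the total mass, and γ_j² = 1/(P_j, P_j)_µ = 1/(β_0···β_j)):

```python
import numpy as np

def fischer_update(alpha, beta, tau_new, lam_new):
    """Given recurrence coefficients alpha[0..N-1], beta[0..N-1] of a discrete
    measure mu with N support points, return the N+1 coefficients of
    nu = mu + lam_new * delta_{tau_new}, following (3.10)-(3.13)."""
    N = len(alpha)
    P = np.zeros(N + 1)                   # P_j(tau_new), j = 0..N, via (3.3)
    P[0] = 1.0
    P[1] = tau_new - alpha[0]
    for j in range(1, N):
        P[j + 1] = (tau_new - alpha[j]) * P[j] - beta[j] * P[j - 1]
    g2 = 1.0 / np.cumprod(beta)           # gamma_j^2 = 1/(beta_0*...*beta_j)
    # s[k+2] = 1 + lam * sum_{j<=k} gamma_j^2 P_j^2(tau); s_{-1} = s_{-2} = 1
    s = np.concatenate(([1.0, 1.0], 1.0 + lam_new * np.cumsum(g2 * P[:N] ** 2)))
    a_new, b_new = np.zeros(N + 1), np.zeros(N + 1)
    for i in range(N):
        a_new[i] = alpha[i] + lam_new * g2[i] * P[i] * P[i + 1] / s[i + 2]
        if i > 0:
            a_new[i] -= lam_new * g2[i - 1] * P[i] * P[i - 1] / s[i + 1]
    a_new[N] = tau_new - lam_new * g2[N - 1] * P[N] * P[N - 1] / s[N + 1]
    b_new[0] = beta[0] + lam_new          # new total mass, consistent with (3.11)
    for i in range(1, N):
        b_new[i] = beta[i] * s[i] * s[i + 2] / s[i + 1] ** 2
    b_new[N] = lam_new * g2[N - 1] * P[N] ** 2 * s[N] / s[N + 1] ** 2
    return a_new, b_new
```

Adding the point τ = 0 with weight 1/2 to the two-point measure (1/2)δ_{−1} + (1/2)δ_{1} reproduces the recurrence coefficients of the three-point measure.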

3.2.4 Modified Chebyshev method

Compared to the Chebyshev method [59], the modified Chebyshev method computes the moments in a different way. Define the quantities

µ_{i,j} = ∫_S P_i(ξ) ξ^j µ(dξ),  i, j = 0, 1, 2, ...   (3.14)

Then the coefficients α_i and β_i satisfy [59]:

α_0 = µ_{0,1}/µ_{0,0},  β_0 = µ_{0,0},  α_i = µ_{i,i+1}/µ_{i,i} − µ_{i−1,i}/µ_{i−1,i−1},  β_i = µ_{i,i}/µ_{i−1,i−1}.   (3.15)


Note that due to orthogonality, µ_{i,j} = 0 when i > j. Starting from the moments µ_j, the µ_{i,j} can be computed recursively as

µ_{i,j} = µ_{i−1,j+1} − α_{i−1} µ_{i−1,j} − β_{i−1} µ_{i−2,j},   (3.16)

with

µ_{−1,j} = 0,  µ_{0,j} = µ_j,   (3.17)

where j = i, i + 1, ..., 2d − i − 1.

However, this method suffers from the same ill-conditioning effects as the Nowak method [139], because both rely on computing moments. To stabilize the algorithm, we introduce another way of defining moments, through polynomials:

µ_{i,j} = ∫_S P_i(ξ) p_j(ξ) µ(dξ),   (3.18)

where {p_j(ξ)} is chosen to be a set of orthogonal polynomials, e.g., Legendre polynomials. Define

ν_i = ∫_S p_i(ξ) µ(dξ).   (3.19)

Since {p_i(ξ)}_{i=0}^∞ is not a set of orthogonal polynomials with respect to the measure µ(dξ), ν_i is, in general, not equal to zero. For all the following numerical experiments we used the Legendre polynomials for {p_i(ξ)}_{i=0}^∞.² Let ᾱ_i and β̄_i be the coefficients in the three-term recurrence formula associated with the set {p_i} of orthogonal polynomials.

²Legendre polynomials {p_i(ξ)}_{i=0}^∞ are defined on [−1, 1]; therefore, in the implementation of the modified Chebyshev method, we first scale the measure onto [−1, 1].


Then we initialize the process of building up the coefficients (with ᾱ_j, β̄_j the recurrence coefficients of the auxiliary polynomials p_j) as

µ_{−1,j} = 0,  j = 1, 2, ..., 2d − 2,
µ_{0,j} = ν_j,  j = 0, 1, ..., 2d − 1,
α_0 = ᾱ_0 + ν_1/ν_0,  β_0 = ν_0,

and compute the following coefficients:

µ_{i,j} = µ_{i−1,j+1} − (α_{i−1} − ᾱ_j) µ_{i−1,j} − β_{i−1} µ_{i−2,j} + β̄_j µ_{i−1,j−1},   (3.20)

where j = i, i + 1, ..., 2d − i − 1. The coefficients α_i and β_i are then obtained as

α_i = ᾱ_i + µ_{i,i+1}/µ_{i,i} − µ_{i−1,i}/µ_{i−1,i−1},  β_i = µ_{i,i}/µ_{i−1,i−1}.   (3.21)

Based on the modified moments, the ill-conditioning issue related to the moments is alleviated, although it can still be severe, especially when we consider orthogonality on infinite intervals.

The computational complexity of generating the µ_{i,j} and ν_i is O(N).
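A sketch of the modified Chebyshev recursion (3.20)–(3.21) using monic Legendre polynomials as the auxiliary family (names mine; for monic Legendre, ᾱ_j = 0 and β̄_j = j²/(4j² − 1), and the support is assumed already scaled to [−1, 1]):

```python
import numpy as np

def modified_chebyshev(tau, w, d):
    """Recurrence coefficients alpha_0..alpha_{d-1}, beta_0..beta_{d-1} of the
    discrete measure sum_i w_i delta_{tau_i} on [-1, 1], from the modified
    moments nu_j taken against monic Legendre polynomials p_j."""
    K = 2 * d
    a_aux = np.zeros(K)                                    # Legendre alpha-bar
    b_aux = np.array([2.0] + [j * j / (4.0 * j * j - 1.0) for j in range(1, K)])
    p_prev, p = np.zeros_like(tau), np.ones_like(tau)
    nu = np.zeros(K)
    for j in range(K):                             # nu_j = sum_i w_i p_j(tau_i)
        nu[j] = (w * p).sum()
        p_prev, p = p, (tau - a_aux[j]) * p - b_aux[j] * p_prev
    alpha, beta = np.zeros(d), np.zeros(d)
    sig_prev = np.zeros(K + 1)                             # sigma_{-1, j} = 0
    sig = np.zeros(K + 1)
    sig[:K] = nu                                           # sigma_{0, j} = nu_j
    alpha[0] = a_aux[0] + nu[1] / nu[0]
    beta[0] = nu[0]
    for i in range(1, d):
        sig_new = np.zeros(K + 1)
        for j in range(i, 2 * d - i):                      # recursion (3.20)
            sig_new[j] = (sig[j + 1] - (alpha[i - 1] - a_aux[j]) * sig[j]
                          - beta[i - 1] * sig_prev[j] + b_aux[j] * sig[j - 1])
        alpha[i] = a_aux[i] + sig_new[i + 1] / sig_new[i] - sig[i] / sig[i - 1]
        beta[i] = sig_new[i] / sig[i - 1]
        sig_prev, sig = sig, sig_new
    return alpha, beta
```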

3.2.5 Lanczos method

The idea of the Lanczos method is to tridiagonalize a matrix to obtain the coefficients α_j and β_j of the recurrence relation. Suppose the discrete measure is µ = Σ_{i=1}^N λ_i δ_{τ_i}, λ_i > 0. With the weights λ_i and points τ_i in the expression of the measure µ, the first step of this method is to construct the matrix [24]:

\begin{pmatrix}
1 & \sqrt{\lambda_1} & \sqrt{\lambda_2} & \cdots & \sqrt{\lambda_N} \\
\sqrt{\lambda_1} & \tau_1 & 0 & \cdots & 0 \\
\sqrt{\lambda_2} & 0 & \tau_2 & \cdots & 0 \\
\vdots & & & & \vdots \\
\sqrt{\lambda_N} & 0 & 0 & \cdots & \tau_N
\end{pmatrix}.   (3.22)

After we tridiagonalize it by the Lanczos algorithm, which reduces a symmetric matrix to tridiagonal form with unitary transformations [64], we obtain:

\begin{pmatrix}
1 & \sqrt{\beta_0} & 0 & \cdots & 0 \\
\sqrt{\beta_0} & \alpha_0 & \sqrt{\beta_1} & \cdots & 0 \\
0 & \sqrt{\beta_1} & \alpha_1 & \cdots & 0 \\
\vdots & & & & \vdots \\
0 & 0 & 0 & \cdots & \alpha_{N-1}
\end{pmatrix},   (3.23)

where the non-zero entries correspond to the coefficients α_i and β_i. The Lanczos method is motivated by interest in the inverse Sturm–Liouville problem: given some information on the eigenvalues of a highly structured matrix, or of some of its principal sub-matrices, this method generates a symmetric matrix, either Jacobi or banded, in a finite number of steps. It is easy to program but can be considerably slow [24].

The computational complexity of the unitary transformation is O(N²).
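A sketch in Python (names mine). Instead of forming the bordered matrix (3.22) explicitly, this runs the Lanczos iteration on diag(τ_1, ..., τ_N) with starting vector (√λ_1, ..., √λ_N)/‖·‖, which produces the same Jacobi matrix (3.23); we re-orthogonalize against the two most recent Lanczos vectors for stability:

```python
import numpy as np

def lanczos_coeffs(tau, w, d):
    """Recurrence coefficients via Lanczos tridiagonalization of diag(tau)
    with starting vector proportional to sqrt(w); beta_0 is the total mass."""
    q = np.sqrt(w / w.sum())
    q_prev = np.zeros_like(q)
    alpha, beta = np.zeros(d), np.zeros(d)
    beta[0] = w.sum()
    b = 0.0
    for i in range(d):
        v = tau * q - b * q_prev       # apply A = diag(tau), subtract b_i q_{i-1}
        alpha[i] = q @ v
        v = v - alpha[i] * q
        v = v - q * (q @ v) - q_prev * (q_prev @ v)   # local re-orthogonalization
        b = np.linalg.norm(v)
        if i + 1 < d:
            beta[i + 1] = b * b
        q_prev, q = q, v / b if b > 0 else v
    return alpha, beta
```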


3.2.6 Gaussian quadrature rule associated with a discrete measure

Here we describe how to utilize the above five methods to perform integration over a discrete measure numerically, using the Gaussian quadrature rule [65] associated with µ.

We consider integrals of the form

∫_S f(ξ) µ(dξ) < ∞.   (3.24)

With respect to µ, we generate the µ-orthogonal polynomials up to order d (d ≤ N − 1), denoted {P_i(ξ)}_{i=0}^d, by one of the five methods. We calculate the zeros {ξ_i}_{i=1}^d of P_d(ξ) = a_d ξ^d + a_{d−1} ξ^{d−1} + ... + a_0 as the Gaussian quadrature points, and the Gaussian quadrature weights {w_i}_{i=1}^d by

w_i = (a_d/a_{d−1}) ∫_S P_{d−1}²(ξ) µ(dξ) / [P'_d(ξ_i) P_{d−1}(ξ_i)].   (3.25)

Therefore, the integral is approximated numerically by

∫_S f(ξ) µ(dξ) ≈ Σ_{i=1}^d f(ξ_i) w_i.   (3.26)

When the zeros of the polynomial P_d(ξ) do not have explicit formulas, Newton–Raphson iteration is used [8, 189] with a specified tolerance of 10^{−16} (in double precision). In order to ensure that each search finds a new root, the polynomial deflation method [93] is applied, where the roots already found are factored out of the initial polynomial once they have been determined. All the calculations in this work are done in double precision.
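When only the quadrature rule is needed, a common alternative to the Newton–Raphson/deflation root finding described above is the Golub–Welsch eigenvalue method, sketched here (names mine): the nodes are the eigenvalues of the Jacobi matrix built from the recurrence coefficients, and the weights are β_0 times the squared first components of its eigenvectors.

```python
import numpy as np

def recurrence_coeffs(tau, w, d):
    """Stieltjes-type computation of alpha_0..alpha_{d-1}, beta_0..beta_{d-1}."""
    alpha, beta = np.zeros(d), np.zeros(d)
    Pm1, P = np.zeros_like(tau), np.ones_like(tau)
    beta[0] = w.sum()
    norm_prev = 1.0
    for i in range(d):
        norm = (w * P * P).sum()
        alpha[i] = (w * tau * P * P).sum() / norm
        if i > 0:
            beta[i] = norm / norm_prev
        norm_prev = norm
        Pm1, P = P, (tau - alpha[i]) * P - beta[i] * Pm1
    return alpha, beta

def gauss_quadrature(alpha, beta):
    """Golub-Welsch: nodes and weights from the recurrence coefficients."""
    J = (np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1)
         + np.diag(np.sqrt(beta[1:]), -1))
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2
```

With d nodes, the rule integrates polynomials of degree up to 2d − 1 exactly with respect to the discrete measure.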

3.2.7 Orthogonality tests of numerically generated polynomials

To investigate the stability of the five methods, we perform an orthogonality test, where the orthogonality is defined as

orth(i) = (1/i) Σ_{j=0}^{i−1} |∫_S P_i(ξ) P_j(ξ) µ(dξ)| / √[∫_S P_j²(ξ) µ(dξ) ∫_S P_i²(ξ) µ(dξ)],  i ≤ N − 1,   (3.27)

for the set {P_j(ξ)}_{j=0}^i of orthogonal polynomials constructed numerically. Note that ∫_S P_i(ξ) P_j(ξ) µ(dξ) ≠ 0, 0 ≤ j < i, for numerically constructed orthogonal polynomials due to round-off errors, although they should be orthogonal theoretically.

We compare the numerical orthogonality given by the aforementioned five methods in figure 3.1 for the following distribution:³

f(k; n, p) = P(ξ = 2k/n − 1) = [n!/(k!(n − k)!)] p^k (1 − p)^{n−k},  k = 0, 1, 2, ..., n.   (3.28)

We see that the Stieltjes, modified Chebyshev, and Lanczos methods best preserve numerical orthogonality when the polynomial order i is close to N. We notice that when N is large, the numerical orthogonality is preserved up to order 70, indicating the robustness of these three methods. The Nowak method exhibits the worst numerical orthogonality among the five methods, due to the ill-conditioning

³We rescale the support {0, ..., n} of the binomial distribution with parameters (n, p) onto [−1, 1].


Figure 3.1: Orthogonality defined in (3.27) with respect to the polynomial order i, up to i = 20 for the distribution defined in (3.28) with (n = 20, p = 1/2) (left), and up to i = 100 with (n = 100, p = 1/2) (right).

nature of the matrix in equation (3.8). The Fischer method exhibits better numerical orthogonality when the number of data points N in the discrete measure is small; its numerical orthogonality is lost when N is large, which serves as a motivation to use ME-PCM instead of PCM for numerical integration over discrete measures. Our results suggest that we should use the Stieltjes, modified Chebyshev, and Lanczos methods for more accuracy.

We also compare the cost by tracking the CPU time to evaluate (3.27) in figure

3.2: for a fixed polynomial order i, we track the CPU time with respect to N , the

number of points in the discrete measure defined in (3.28); for a fixed N , we track

the CPU time with respect to i. We observe that the Stieltjes method has the least

computational cost while the Fischer method has the largest computational cost.

Asymptotically, we observe that the computational complexity of evaluating (3.27) is O(i²) for the Nowak method, O(N) for the Stieltjes method, O(N²) for the Fischer method, O(N) for the modified Chebyshev method, and O(N²) for the Lanczos method.

To conclude, we recommend the Stieltjes method as the most accurate and efficient for generating orthogonal polynomials with respect to discrete measures, especially


Figure 3.2: CPU time (in seconds, on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, in Matlab) to evaluate the orthogonality in (3.27) at order i = 4 for the distribution defined in (3.28) with parameter n and p = 1/2 (left), and at order i for the distribution defined in (3.28) with parameters n = 100 and p = 1/2 (right).

when higher orders are required. However, for generating polynomials at lower orders

(for ME-PCM), the five methods are equally effective.

We noticed from figures 3.1 and 3.2 that the Stieltjes method is the most accurate and efficient in generating orthogonal polynomials with respect to a discrete measure µ. Therefore, here we investigate the minimum polynomial order i (i ≤ N − 1) for which the orthogonality orth(i) of the Stieltjes method, defined in equation (3.27), exceeds a threshold ε. In figure 3.3, we perform this test on the distribution given by (3.28) with different parameters n (n ≥ i). For practical computations, the highest polynomial order i used for polynomial chaos should be less than the minimum i for which orth(i) exceeds the desired ε. The cost of the numerical orthogonality test is, in general, negligible compared to the cost of solving a stochastic problem by either Galerkin or collocation approaches. Hence, we can pay more attention to the accuracy, rather than the cost, of these five methods.


Figure 3.3: Minimum polynomial order i (vertical axis) such that orth(i) defined in (3.27) is greater than a threshold value ε (here ε = 1E−8, 1E−10, 1E−13), for the distribution defined in (3.28) with p = 1/10 and parameter n (horizontal axis). The orthogonal polynomials are generated by the Stieltjes method.

3.3 Discussion of the error of numerical integration

3.3.1 Theorem of numerical integration on discrete measures

In [55], the h-convergence rate of the ME-PCM [93] for numerical integration with respect to continuous measures was established in terms of the degree of exactness of the quadrature rule.

Let us first define the Sobolev space W^{m+1,p}(Γ) as the set of all functions f ∈ L^p(Γ) such that for every multi-index γ with |γ| ≤ m + 1, the weak partial derivative D^γ f belongs to L^p(Γ) [1, 44], i.e.,

W^{m+1,p}(Γ) = {f ∈ L^p(Γ) : D^γ f ∈ L^p(Γ), ∀|γ| ≤ m + 1}.   (3.29)


Here Γ is an open set in R^n and 1 ≤ p ≤ +∞. The natural number m + 1 is called the order of the Sobolev space W^{m+1,p}(Γ). The Sobolev space W^{m+1,∞}(A) in the following theorem is defined for functions f : A → R subject to the norm

‖f‖_{m+1,∞,A} = max_{|γ|≤m+1} ess sup_{ξ∈A} |D^γ f(ξ)|,

and the seminorm is defined as

|f|_{m+1,∞,A} = max_{|γ|=m+1} ess sup_{ξ∈A} |D^γ f(ξ)|,

where A ⊂ R^n, γ ∈ N_0^n, |γ| = γ_1 + ... + γ_n, and m + 1 ∈ N_0.

We first consider a one-dimensional discrete measure µ = Σ_{i=1}^N λ_i δ_{τ_i}, where N is a finite number. For simplicity and without loss of generality, we assume that {τ_i}_{i=1}^N ⊂ (0, 1). Otherwise, we can use a linear mapping to map (min{τ_i} − c, max{τ_i} + c) to (0, 1), with c an arbitrarily small positive number. We then construct the approximation of the Dirac measure as

µ_ε = Σ_{i=1}^N λ_i η_{τ_i}^ε,   (3.30)

where ε is a small positive number and η_{τ_i}^ε is defined as

η_{τ_i}^ε(ξ) = 1/ε if |ξ − τ_i| < ε/2, and 0 otherwise.   (3.31)

First of all, η_{τ_i}^ε defines a continuous measure on (0, 1) with a finite number of discontinuity points, where a uniform distribution is taken on the interval (τ_i − ε/2, τ_i + ε/2).

Second, η_{τ_i}^ε converges to δ_{τ_i} in the weak sense, i.e.,

lim_{ε→0+} ∫_0^1 g(ξ) η_{τ_i}^ε(dξ) = ∫_0^1 g(ξ) δ_{τ_i}(dξ),   (3.32)

for all bounded continuous functions g(ξ). We write

lim_{ε→0+} η_{τ_i}^ε = δ_{τ_i}.   (3.33)

It is seen that when ε is small enough, the intervals (τ_i − ε/2, τ_i + ε/2) are mutually disjoint for i = 1, ..., N. Due to linearity, we have

lim_{ε→0+} µ_ε = µ,   (3.34)

where the convergence is defined in the weak sense as before. Then µ_ε is also a continuous measure with a finite number of discontinuity points. The choice of η_{τ_i}^ε is not unique. Another choice is

η_{τ_i}^ε = (1/ε) η((ξ − τ_i)/ε),  η(ξ) = e^{−1/(1−|ξ|²)} if |ξ| < 1, and 0 otherwise.   (3.35)

Such a choice is smooth. When ε is small enough, the domains defined by |(ξ − τ_i)/ε| < 1 are also mutually disjoint.

We then have the following proposition.

Proposition 1. For the continuous measure µ_ε, let α_{i,ε} and β_{i,ε} denote the coefficients in the three-term recurrence formula (3.3), which is valid for both continuous and discrete measures. For the discrete measure µ, let α_i and β_i denote the corresponding coefficients. We then have

lim_{ε→0+} α_{i,ε} = α_i,  lim_{ε→0+} β_{i,ε} = β_i.   (3.36)

In other words, the monic orthogonal polynomials defined by µ_ε converge to those defined by µ, i.e.,

lim_{ε→0+} P_{i,ε}(ξ) = P_i(ξ),   (3.37)

where P_{i,ε} and P_i are the monic polynomials of order i corresponding to µ_ε and µ, respectively.

The coefficients α_{i,ε} and β_{i,ε} are given by the formulas (see equation (3.9))

α_{i,ε} = (ξ P_{i,ε}, P_{i,ε})_{µ_ε} / (P_{i,ε}, P_{i,ε})_{µ_ε},  i = 0, 1, 2, ...,   (3.38)

β_{i,ε} = (P_{i,ε}, P_{i,ε})_{µ_ε} / (P_{i−1,ε}, P_{i−1,ε})_{µ_ε},  i = 1, 2, ...,   (3.39)

where (·, ·)_{µ_ε} indicates the inner product with respect to µ_ε. Correspondingly, we have

α_i = (ξ P_i, P_i)_µ / (P_i, P_i)_µ,  i = 0, 1, 2, ...,   (3.40)

β_i = (P_i, P_i)_µ / (P_{i−1}, P_{i−1})_µ,  i = 1, 2, ...   (3.41)

By definition,

β_{0,ε} = (1, 1)_{µ_ε} = 1,  β_0 = (1, 1)_µ = 1.

The argument is based on induction. We assume that equation (3.37) holds for k = i and k = i − 1; for i = 0 this is trivial. To show that equation (3.37) holds for k = i + 1, we only need to prove equation (3.36) for k = i, based on the observation that P_{i+1,ε} = (ξ − α_{i,ε}) P_{i,ε} − β_{i,ε} P_{i−1,ε}. We now show that all inner products in equations (3.38) and (3.39) converge to the corresponding inner products in equations (3.40) and (3.41) as ε → 0+. We consider only (P_{i,ε}, P_{i,ε})_{µ_ε}; the other inner products can be dealt with in a similar way. We have

(P_{i,ε}, P_{i,ε})_{µ_ε} = (P_i, P_i)_{µ_ε} + 2(P_i, P_{i,ε} − P_i)_{µ_ε} + (P_{i,ε} − P_i, P_{i,ε} − P_i)_{µ_ε}.

We then have (P_i, P_i)_{µ_ε} → (P_i, P_i)_µ by the definition of µ_ε. The second term on the right-hand side can be bounded as

|(P_i, P_{i,ε} − P_i)_{µ_ε}| ≤ ess sup_ξ |P_i| · ess sup_ξ |P_{i,ε} − P_i| · (1, 1)_{µ_ε}.

By the assumption that P_{i,ε} → P_i, the right-hand side of the above inequality goes to zero. Similarly, (P_{i,ε} − P_i, P_{i,ε} − P_i)_{µ_ε} goes to zero. We then have (P_{i,ε}, P_{i,ε})_{µ_ε} → (P_i, P_i)_µ. The conclusion follows by induction.

Remark 1. As ε → 0+, the orthogonal polynomials defined by µ_ε converge to those defined by µ; hence the (Gauss) quadrature points and weights defined by µ_ε should also converge to those defined by µ.

We then recall the following theorem for continuous measures.

Theorem 1 ([55]). Suppose f ∈ W^{m+1,∞}(Γ) with Γ = (0, 1)^n, and {B_i}_{i=1}^{Ne} is a non-overlapping mesh of Γ. Let h indicate the maximum side length of each element and Q_m^Γ a quadrature rule with degree of exactness m in the domain Γ (in other words, Q_m^Γ exactly integrates polynomials up to order m). Let Q_m^A be the quadrature rule in a subset A ⊂ Γ, corresponding to Q_m^Γ through an affine linear mapping. We define a linear functional on W^{m+1,∞}(A):

E_A(g) ≡ ∫_A g(ξ) µ(dξ) − Q_m^A(g),   (3.42)

whose norm in the dual space of W^{m+1,∞}(A) is defined as

‖E_A‖_{m+1,∞,A} = sup_{‖g‖_{m+1,∞,A} ≤ 1} |E_A(g)|.   (3.43)

Then, the following error estimate holds:

|∫_Γ f(ξ) µ(dξ) − Σ_{i=1}^{Ne} Q_m^{B_i} f| ≤ C h^{m+1} ‖E_Γ‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.44)

where C is a constant and ‖E_Γ‖_{m+1,∞,Γ} refers to the norm in the dual space of W^{m+1,∞}(Γ), which is defined in equation (3.43).

For discrete measures, we have the following theorem.

Theorem 2. Suppose the function f satisfies all assumptions required by Theorem 1.

We add the following three assumptions for discrete measures: 1) The measure µ can

be expressed as a product of n one-dimensional discrete measures, i.e., we consider n

independent discrete random variables; 2) The quadrature rule QAm can be generated

from the quadrature rules given by the n one-dimensional discrete measures by the

tensor product; 3) The number of all the possible values for the discrete measure µ

is finite and they are located within Γ. We then have

|∫_Γ f(ξ) µ(dξ) − Σ_{i=1}^{Ne} Q_m^{B_i} f| ≤ C N_es^{−(m+1)} ‖E_Γ‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.45)

where N_es indicates the number of integration elements for each random variable.

The argument is based on Theorem 1 and the approximation µ_ε of µ. Since we assume that µ is given by n independent discrete random variables, we can define a continuous approximation (see equation (3.30)) for each one-dimensional discrete measure, and µ_ε can be naturally chosen as the product of these n continuous one-dimensional measures.

We then consider

|∫_Γ f(ξ) µ(dξ) − Σ_{i=1}^{Ne} Q_m^{B_i} f| ≤ |∫_Γ f(ξ) µ(dξ) − ∫_Γ f(ξ) µ_ε(dξ)|
  + |∫_Γ f(ξ) µ_ε(dξ) − Σ_{i=1}^{Ne} Q_m^{ε,B_i} f|
  + |Σ_{i=1}^{Ne} Q_m^{ε,B_i} f − Σ_{i=1}^{Ne} Q_m^{B_i} f|,

where Q_m^{ε,B_i} denotes the corresponding quadrature rule for the continuous measure

µ_ε. Since we assume that the quadrature rules Q_m^{ε,B_i} and Q_m^{B_i} can be constructed from n one-dimensional quadrature rules, Q_m^{ε,B_i} should converge to Q_m^{B_i} as ε goes to zero, based on Proposition 1 and the fact that the construction procedure for Q_m^{ε,B_i} and Q_m^{B_i} to have degree of exactness m is measure independent. For the second term on the right-hand side, Theorem 1 can be applied with a well-defined h because we assume that all possible values for µ are located within Γ; otherwise, this assumption can be achieved by a linear mapping. We then have

|∫_Γ f(ξ) µ_ε(dξ) − Σ_{i=1}^{Ne} Q_m^{ε,B_i} f| ≤ C h^{m+1} ‖E_Γ^ε‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.46)

where E_Γ^ε is a linear functional defined with respect to µ_ε. We then let ε → 0+. In the error bound given by equation (3.46), only ‖E_Γ^ε‖_{m+1,∞,Γ} is associated with µ_ε.

According to its definition and noting that Q_m^{ε,A} → Q_m^A,

lim_{ε→0} E_A^ε(g) = lim_{ε→0} ( ∫_A g(ξ) µ_ε(dξ) − Q_m^{ε,A}(g) ) = E_A(g),

which is a linear functional with respect to µ. Since µ_ε → µ and Q_m^{ε,B_i} → Q_m^{B_i}, the first and third terms go to zero. However, since we are working with discrete measures, it is not convenient to use the element size. Instead, we use the number of elements, since h ∝ N_es^{−1}, where N_es indicates the number of elements per side. Then the conclusion is reached.

The h-convergence rate of ME-PCM for discrete measures takes the form O(N_es^{−(m+1)}). If we employ a Gauss quadrature rule with d points, the degree of exactness is m = 2d − 1, which corresponds to an h-convergence rate N_es^{−2d}. The extra assumptions in Theorem 2 are actually quite practical. In applications, we often consider i.i.d. random variables, and the commonly used quadrature rules for high-dimensional cases, such as the tensor-product rule and sparse grids, are obtained from one-dimensional quadrature rules.

3.3.2 Testing numerical integration with one RV

We now verify the h-convergence rate numerically. We employ the Lanczos method [24]

to generate the Gauss quadrature points. We then approximate integrals of GENZ

functions [61] with respect to the binomial distribution Bino(n = 120, p = 1/2) using

ME-PCM. We consider the following one-dimensional GENZ functions:

• GENZ1 function deals with oscillatory integrands:

f1(ξ) = cos(2πw + cξ), (3.47)

• GENZ4 function deals with Gaussian-like integrands:

f4(ξ) = exp(−c²(ξ − w)²),   (3.48)


Figure 3.4: Left: GENZ1 functions with different values of c and w. Right: h-convergence of ME-PCM for the GENZ1 function. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness; c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed to compute the orthogonal polynomials.

where c and w are constants. Note that both the GENZ1 and GENZ4 functions are smooth. In this section, we consider the absolute error defined as |∫_S f(ξ) µ(dξ) − Σ_{i=1}^{d} f(ξ_i) w_i|, where ξ_i and w_i (i = 1, ..., d) are the d Gauss quadrature points and weights with respect to µ.
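As a concrete illustration (a sketch under stated assumptions, not the thesis code), the Gauss rule for a discrete measure such as Bino(n, p) can be obtained by computing the three-term recurrence coefficients with the Stieltjes procedure and then diagonalizing the Jacobi matrix (the Golub-Welsch construction):

```python
import numpy as np
from math import comb

def discrete_gauss_rule(support, pmf, d):
    """d-point Gauss rule for the discrete measure sum_k pmf[k]*delta(x - support[k]).

    Recurrence coefficients are computed by the Stieltjes procedure; nodes and
    weights come from the Jacobi-matrix eigenproblem (Golub-Welsch).
    """
    x, w = np.asarray(support, float), np.asarray(pmf, float)
    alpha, beta = np.zeros(d), np.zeros(d)
    p_prev, p = np.zeros_like(x), np.ones_like(x)  # monic pi_{-1} = 0, pi_0 = 1
    norm_prev = 1.0
    for k in range(d):
        norm = np.sum(w * p * p)                   # (pi_k, pi_k)_mu
        alpha[k] = np.sum(w * x * p * p) / norm
        beta[k] = norm / norm_prev                 # beta_0 = (1, 1)_mu
        p_prev, p = p, (x - alpha[k]) * p - (beta[k] if k > 0 else 0.0) * p_prev
        norm_prev = norm
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

# Bino(120, 1/2): a 2-point rule has degree of exactness m = 2*2 - 1 = 3
n, prob = 120, 0.5
supp = np.arange(n + 1)
pmf = np.array([comb(n, k) * prob**k * (1 - prob)**(n - k) for k in supp])
nodes, wts = discrete_gauss_rule(supp, pmf, 2)
exact_cubic = np.sum(pmf * supp.astype(float)**3)
err = abs(np.dot(wts, nodes**3) - exact_cubic)   # a cubic is integrated to round-off
```

A degree-4 monomial, by contrast, is generally not integrated exactly by the 2-point rule, which is exactly the mechanism behind the h-convergence rate discussed above.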

In figures 3.4 and 3.5, we plot the h-convergence behavior of ME-PCM for the GENZ1 and GENZ4 functions, respectively. In each element, two Gauss quadrature points are employed, corresponding to a degree 3 of exactness, which means that the h-convergence rate should be N_es^{−4}. In figures 3.4 and 3.5, we see that when N_es is large enough, the h-convergence rate of ME-PCM approaches the theoretical prediction, demonstrated by the reference straight lines C·N_es^{−4}.

3.3.3 Testing numerical integration with multiple RVs on sparse grids

An interesting question is whether the sparse grid approach is as effective for discrete measures as it is for continuous measures [185], and how that compares to tensor product grids.

Figure 3.5: Left: GENZ4 functions with different values of c and w. Right: h-convergence of ME-PCM for the GENZ4 function. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness; c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed for numerical orthogonality.

Let us denote the sparse grid level by k and the dimension by n.

Assume that each random dimension is independent. We apply the Smolyak algorithm [164, 128, 129] to construct sparse grids, i.e.,

A(k + n, n) = Σ_{k+1 ≤ |i| ≤ k+n} (−1)^{k+n−|i|} ( n−1 choose k+n−|i| ) (U^{i_1} ⊗ ... ⊗ U^{i_n}),   (3.49)

where A(k + n, n) defines a cubature formula with respect to the n-dimensional discrete measure and U^{i_j} defines the quadrature rule of level i_j for the j-th dimension [185].

We use the Gauss quadrature rule to define U^{i_j}, which implies that the grids at different levels are not necessarily nested. Two-dimensional non-nested sparse grid points are plotted in figure 3.6, where each dimension has the same discrete measure, the binomial distribution Bino(10, 1/2). We then use sparse grids to approximate the integrals of the following two GENZ functions with M RVs [61]:

Figure 3.6: Non-nested sparse grid points with respect to sparseness parameter k = 3, 4, 5, 6 for random variables ξ1, ξ2 ∼ Bino(10, 1/2), where the one-dimensional quadrature formula is based on the Gauss quadrature rule.

• GENZ1

f1(ξ1, ξ2, ..., ξM) = cos(2πw1 + Σ_{i=1}^{M} c_i ξ_i),   (3.50)

• GENZ4

f4(ξ1, ξ2, ..., ξM) = exp[−Σ_{i=1}^{M} c_i² (ξ_i − w_i)²],   (3.51)

where c_i and w_i are constants. We compute E[f_i(ξ1, ξ2, ..., ξM)] under the assumption that ξ_i, i = 1, ..., M, are M independent identically distributed (i.i.d.) random variables. The absolute errors versus the total number of sparse grid points r(k), with k being the sparse grid level, are plotted in figures 3.7 and 3.8 for two RVs and eight RVs, respectively. We see that the sparse grids for discrete measures work well for the smooth GENZ1 and GENZ4 functions, and the convergence rate is much faster than that of Monte Carlo simulations, which converge at a rate O(r(k)^{−1/2}).

Figure 3.7: Convergence of sparse grids and tensor product grids to approximate E[f_i(ξ1, ξ2)], where ξ1 and ξ2 are two i.i.d. random variables associated with the distribution Bino(10, 1/2). Left: f1 is GENZ1. Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method.

Figure 3.8: Convergence of sparse grids and tensor product grids to approximate E[f_i(ξ1, ξ2, ..., ξ8)], where ξ1, ..., ξ8 are eight i.i.d. random variables associated with the distribution Bino(5, 1/2). Left: f1 is GENZ1. Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method.

In low dimensions, it is known from numerical tests that integration on sparse grids converges more slowly than on tensor product grids [185] for continuous measures. We observe the same trend in figure 3.7 for discrete measures. The error line for the tensor product grid bends slightly upward at its tail because the error is near machine precision (1e−16). In higher dimensions, sparse grids are more efficient than tensor product grids, as shown in figure 3.8 for discrete measures. Later, we will obtain the numerical solution of the KdV equation with eight RVs, where sparse grids are also more accurate than tensor product grids.
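To make the construction concrete, the Smolyak combination (3.49) with non-nested 1D Gauss rules can be sketched as follows (an illustrative reimplementation, under the assumption that the level-i rule is the i-point Gauss rule for the 1D measure; not the thesis code):

```python
import numpy as np
from math import comb, pi
from itertools import product

def gauss_rule(support, pmf, d):
    """d-point Gauss rule for a discrete measure (Stieltjes + Golub-Welsch)."""
    x, w = np.asarray(support, float), np.asarray(pmf, float)
    alpha, beta = np.zeros(d), np.zeros(d)
    p_prev, p, norm_prev = np.zeros_like(x), np.ones_like(x), 1.0
    for k in range(d):
        norm = np.sum(w * p * p)
        alpha[k], beta[k] = np.sum(w * x * p * p) / norm, norm / norm_prev
        p_prev, p = p, (x - alpha[k]) * p - (beta[k] if k else 0.0) * p_prev
        norm_prev = norm
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

def smolyak(k, n, rules, f):
    """Cubature A(k + n, n) of equation (3.49) with 1D rules U^i = rules[i]."""
    total = 0.0
    for i in product(range(1, k + n + 1), repeat=n):
        s = sum(i)
        if k + 1 <= s <= k + n:
            coef = (-1) ** (k + n - s) * comb(n - 1, k + n - s)
            nds = [rules[ij][0] for ij in i]
            wts = [rules[ij][1] for ij in i]
            for idx in product(*(range(len(nd)) for nd in nds)):
                wprod = np.prod([wts[j][idx[j]] for j in range(n)])
                total += coef * wprod * f([nds[j][idx[j]] for j in range(n)])
    return total

nb, prob, c, w1 = 10, 0.5, 0.1, 1.0
supp = np.arange(nb + 1, dtype=float)
pmf = np.array([comb(nb, j) * prob**j * (1 - prob)**(nb - j) for j in range(nb + 1)])
k, n = 3, 2
rules = {i: gauss_rule(supp, pmf, i) for i in range(1, k + n + 1)}
f = lambda xi: np.cos(2 * pi * w1 + c * sum(xi))       # GENZ1, M = 2
approx = smolyak(k, n, rules, f)
exact = sum(pa * pb * f([a, b]) for a, pa in zip(supp, pmf) for b, pb in zip(supp, pmf))
```

Because the 1D Gauss rules are non-nested, grids at different levels do not share points, just as in figure 3.6.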


3.4 Application to stochastic reaction equation and KdV equation

For numerical experiments on SPDEs, we choose one method among the Nowak, Stieltjes, Fischer, and Lanczos methods to generate orthogonal polynomials, in order to calculate the moment statistics by the Gauss quadrature rule associated with the discrete measure. The other methods provide identical results.

3.4.1 Reaction equation with discrete random coefficients

We first consider the reaction equation with a random coefficient:

dy(t; ξ)/dt = −ξ y(t; ξ),   (3.52)

with initial condition

y(0; ξ) = y0,   (3.53)

where ξ is a random coefficient. Let us define the relative errors of the mean and variance of the solution as

ε_mean(t) = |(E_PCM[y(t)] − E_exact[y(t)]) / E_exact[y(t)]|,   (3.54)

and

ε_var(t) = |(Var_PCM[y(t)] − Var_exact[y(t)]) / Var_exact[y(t)]|.   (3.55)


The exact value of the m-th moment of the solution is:

E[y^m(t; ξ)] = E[(y0 e^{−ξt})^m].   (3.56)

The errors defined in equations (3.54) and (3.55) for the solution of equation (3.52) have been considered in the literature by gPC [184] with Wiener-Askey polynomials [6] with respect to discrete measures. Here, instead of using hypergeometric polynomials in the Wiener-Askey scheme, we solve equation (3.52) by PCM with collocation points generated by the Stieltjes method. The p-convergence is demonstrated in figure 3.9 for the negative binomial distribution with β = 1, c = 1/2. We observe spectral convergence by polynomial chaos with orthogonal polynomials generated by the Stieltjes method, and the method is accurate up to order 15 here.
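For this equation the PCM statistics reduce to a d-point Gauss quadrature of the exact solution y0 e^{−ξt}. A sketch with a binomial coefficient (a stand-in for the truncated negative binomial of figure 3.9, chosen only to keep the snippet short) shows the rapid decay of ε_mean with d:

```python
import numpy as np
from math import comb

def gauss_rule(support, pmf, d):
    # Stieltjes recurrence + Golub-Welsch for a discrete measure
    x, w = np.asarray(support, float), np.asarray(pmf, float)
    alpha, beta = np.zeros(d), np.zeros(d)
    p_prev, p, norm_prev = np.zeros_like(x), np.ones_like(x), 1.0
    for k in range(d):
        norm = np.sum(w * p * p)
        alpha[k], beta[k] = np.sum(w * x * p * p) / norm, norm / norm_prev
        p_prev, p = p, (x - alpha[k]) * p - (beta[k] if k else 0.0) * p_prev
        norm_prev = norm
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

n, prob, t, y0 = 10, 0.5, 1.0, 1.0
supp = np.arange(n + 1, dtype=float)
pmf = np.array([comb(n, j) * prob**j * (1 - prob)**(n - j) for j in range(n + 1)])
mean_exact = np.sum(pmf * y0 * np.exp(-supp * t))   # E[y0 exp(-xi t)], cf. (3.56)

def eps_mean(d):
    nodes, wts = gauss_rule(supp, pmf, d)
    return abs(np.dot(wts, y0 * np.exp(-nodes * t)) - mean_exact) / mean_exact

errs = [eps_mean(d) for d in (2, 4, 8)]   # decays rapidly with d (p-convergence)
```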

Figure 3.9: p-convergence of PCM with respect to the errors defined in equations (3.54) and (3.55) for the reaction equation with t = 1, y0 = 1. ξ is associated with the negative binomial distribution with c = 1/2 and β = 1. Orthogonal polynomials are generated by the Stieltjes method.


3.4.2 KdV equation with random forcing

Exact solution and KdV solver

We subsequently consider the KdV equation subject to stochastic forcing:

u_t + 6uu_x + u_xxx = σξ,   x ∈ R,   (3.57)

with initial condition:

u(x, 0) = (a/2) sech²((√a/2)(x − x0)),   (3.58)

where a is associated with the speed of the soliton, x0 is the initial position of the

soliton, and σ is a constant that scales the variance of the random variable (RV) ξ.

The m-th moment of the solution is:

E[u^m(x, t; ξ)] = E[( (a/2) sech²((√a/2)(x − 3σξt² − x0 − at)) + σξt )^m].   (3.59)

The exact m-th moment of the solution can be obtained by a simple stochastic transformation:

W(t; ω) = ∫_0^t σξ dτ = σξt,   (3.60)

U(x, t; ω) = u(x, t) − W(t; ω) = u(x, t) − σξt,   (3.61)

X = x − 6 ∫_0^t W(τ; ω) dτ = x − 3σξt²,   (3.62)

such that

∂U/∂t + 6U ∂U/∂X + ∂³U/∂X³ = 0,   (3.63)


Figure 3.10: Left: exact solution of the KdV equation (3.65) at times t = 0, 1. Right: the pointwise error for the soliton at time t = 1.

which has an exact solution

U(X, t) = (a/2) sech²((√a/2)(X − x0 − at)).   (3.64)

On each collocation point for the RV ξ we run a deterministic solver of the KdV equation with a Fourier-collocation discretization in physical space and a time-splitting scheme: the 6uu_x term is advanced in time with a third-order Adams-Bashforth scheme, and the u_xxx term with a Crank-Nicolson scheme. We test the accuracy of the deterministic solver using the following problem:

u_t + 6uu_x + u_xxx = 1,   (3.65)

with the initial condition:

u(x, 0) = (a/2) sech²((√a/2)(x − x0)),   (3.66)

where a = 0.3, x0 = −5, and t = 1, and the time step is 1.25 × 10⁻⁵. For the spatial discretization, we use 300 Fourier collocation points on the interval [−50, 50]. The pointwise numerical error is plotted in figure 3.10.
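As a sanity check on the soliton formula (and on Fourier differentiation, which the solver uses in space), one can verify that the traveling wave makes the unforced residual u_t + 6uu_x + u_xxx vanish to spectral accuracy; for a traveling wave, u_t = −a u_x. A small sketch with illustrative parameters (N, L, a, x0 are assumptions, not the thesis settings):

```python
import numpy as np

N, L = 512, 100.0
a, x0 = 0.5, 0.0
x = -L / 2 + L * np.arange(N) / N
u = (a / 2) / np.cosh(np.sqrt(a) / 2 * (x - x0)) ** 2   # soliton at t = 0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)              # spectral wavenumbers
uhat = np.fft.fft(u)
ux = np.real(np.fft.ifft(1j * k * uhat))
uxxx = np.real(np.fft.ifft((1j * k) ** 3 * uhat))

residual = -a * ux + 6 * u * ux + uxxx                  # u_t = -a*u_x for the traveling wave
max_res = np.max(np.abs(residual))                      # near machine precision
```

The soliton decays to below machine precision at the ends of [−L/2, L/2], so the periodic Fourier representation introduces no visible error.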


Figure 3.11: p-convergence of PCM with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5, and σ = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: ξ ∼ Pois(10); Right: ξ ∼ Bino(n = 5, p = 1/2). aPC stands for arbitrary Polynomial Chaos, i.e., Polynomial Chaos with respect to an arbitrary measure. Orthogonal polynomials are generated by Fischer's method.

hp-convergence of ME-PCM

To examine the convergence of ME-PCM, we define the following normalized L2 errors for the mean and the second moment:

l2u1 = √(∫ dx (E[u_num(x, t; ξ)] − E[u_ex(x, t; ξ)])²) / √(∫ dx (E[u_ex(x, t; ξ)])²),   (3.67)

l2u2 = √(∫ dx (E[u²_num(x, t; ξ)] − E[u²_ex(x, t; ξ)])²) / √(∫ dx (E[u²_ex(x, t; ξ)])²),   (3.68)

where u_num and u_ex indicate the numerical and exact solutions, respectively.

We solve equation (3.57) by PCM with collocation points generated by Fischer's method. The p-convergence is demonstrated in figure 3.11 for the distributions Pois(10) and Bino(n = 5, p = 1/2), respectively, with respect to the errors defined in equations (3.67) and (3.68). For the h-convergence of ME-PCM we examine the distribution Bino(n = 120, p = 1/2), where each element contains the same number of discrete data points. Furthermore, in each element we employ two Gauss quadrature points for the gPC approximation. We see in figure 3.12 that the desired h-convergence rate N_es^{−4} is obtained for both the Stieltjes and the Fischer method. We note that all five methods exhibit the same convergence rate and the same error level except the Fischer method, whose errors are about two orders of magnitude larger. To explain this, we refer to figure 3.1, which shows that when the number of points is large, the orthogonality condition in Fischer's method suffers from round-off errors.

Figure 3.12: h-convergence of ME-PCM with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1.05, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Bino(n = 120, p = 1/2), with 200 Fourier collocation points on the spatial domain [−30, 30], where two collocation points are employed in each element. Orthogonal polynomials are generated by the Fischer method (left) and the Stieltjes method (right).

hp-convergence of adaptive ME-PCM

We now consider the adaptive ME-PCM, where the local variance criterion for adap-

tivity is employed. First, let us define the local variance. For any RV ξ with a

probability measure µ(dξ) on the parametric space ξ ∈ Γ, we consider a decompo-

sition of Γ = ∪Nei Bi such that Bi ∩ Bj = ∅, ∀i 6= j. On the element Bi, we can

calculate the local variance σ2i with respect to the associated conditional measure

as µ(dξ)/∫Biµ(dξ). We then consider an adaptive decomposition of the parametric

space for ME-PCM such that the quantity σ2i Pr(ξ ∈ Bi) in each element is nearly

uniform. Here for the numerical experiments in figure 3.13, we typically minimized

Page 75: phd Thesis Mengdi Zheng (Summer) Brown Applied Maths

52

the quantity∑Ne

i=1 σ2i Pr(ξ ∈ Bi). In other words, given a discrete measure and num-

ber of elements Ne, we try all possible Bi, i = 1..Ne to divide Γ until the sum∑Nei=1 σ

2i Pr(ξ ∈ Bi) is minimized. We found that the size of the element is balanced

by the local oscillations and the probability of ξ ∈ Bi (see more details in [55]).
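The minimization can be sketched by brute force for contiguous elements (illustrative only; the truncation K of the Poisson support and the restriction to a two-element split are assumptions made to keep the search small):

```python
import numpy as np
from math import lgamma

lam, K = 40.0, 120                       # Pois(40), support truncated at K (assumption)
ks = np.arange(K + 1)
logpmf = -lam + ks * np.log(lam) - np.array([lgamma(k + 1.0) for k in ks])
pmf = np.exp(logpmf)
pmf /= pmf.sum()

def objective(cuts):
    """sum_i sigma_i^2 * Pr(xi in B_i) for contiguous elements with the given cut points."""
    total, edges = 0.0, [0] + list(cuts) + [K + 1]
    for lo, hi in zip(edges[:-1], edges[1:]):
        pr = pmf[lo:hi].sum()
        if pr == 0.0:
            continue
        cond = pmf[lo:hi] / pr
        mean = (cond * ks[lo:hi]).sum()
        var = (cond * (ks[lo:hi] - mean) ** 2).sum()
        total += var * pr
    return total

# exhaustive search over two-element splits vs. a split at the middle of the support
best_cut = min(range(1, K + 1), key=lambda c: objective([c]))
```

The optimal cut lands near the bulk of the probability mass (close to the mean 40), illustrating why the adapted mesh has small elements where the probability is high.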

A five-element adaptive decomposition of the parametric space for the distribution ξ ∼ Pois(40) is given in figure 3.13. We see that in the region of small probability the element size is large, while in the region of high probability the element size is much smaller.

Figure 3.13: Adapted mesh with five elements with respect to the Pois(40) distribution.

We then examine the effectiveness of adaptivity. Consider a uniform mesh and an adapted one, which have the same number of elements and the same number of collocation points within each element. In figure 3.14, we plot the p-convergence behavior of ME-PCM given by the uniform and adapted meshes. We see that although both meshes yield exponential convergence, the adapted mesh results in better accuracy, especially when the number of elements is relatively small. In other words, for a given accuracy, the adapted ME-PCM can be more efficient than ME-PCM on a uniform mesh.


Figure 3.14: p-convergence of ME-PCM on a uniform mesh and an adapted mesh with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Pois(40), with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: errors of the mean. Right: errors of the second moment. Orthogonal polynomials are generated by the Nowak method.

Stochastic excitation given by two discrete RVs

We now use sparse grids to study the KdV equation subject to stochastic excitation:

u_t + 6uu_x + u_xxx = σ1ξ1 + σ2ξ2,   x ∈ R,   (3.69)

with the same initial condition given by equation (3.58), where ξ1 and ξ2 are two

i.i.d. random variables associated with a discrete measure.

In figure 3.15, we plot the convergence behavior of sparse grids and tensor product grids for problem (3.69), where the discrete measure is chosen as Bino(10, 1/2). We see that, with respect to the total number r(k) of collocation points, an algebraic-like convergence is obtained at a rate slower than that of the tensor product grid, in this low dimension, consistent with the results in figure 3.7. Specifically, the error lines for l2u1 and l2u2 become flat mainly because the numerical errors from the spatial discretization and temporal integration of the deterministic KdV equation become dominant when r(k) is relatively large.


Figure 3.15: ξ1, ξ2 ∼ Bino(10, 1/2): convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method.

Stochastic excitation given by a discrete RV and a continuous RV

We still consider equation (3.69), where we only require the independence between

ξ1 and ξ2, and assume that ξ1 ∼ Bino(10, 1/2) is a discrete RV and ξ2 ∼ N(0, 1) is

a continuous RV.

In figure 3.16, we plot the convergence behavior of sparse grids and tensor product

grids for the KdV equation subject to hybrid (discrete/continuous) random inputs.

Similar phenomena are observed as in the previous case where both RVs are discrete.

An algebraic-like convergence rate with respect to the total number of grid points is obtained, which is slower than the convergence of PCM on tensor product grids in this low dimension, in agreement with the results in figure 3.7. This numerical

hybrid (discrete/continuous) random inputs when the solution is smooth enough.


Figure 3.16: ξ1 ∼ Bino(10, 1/2) and ξ2 ∼ N(0, 1): convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method.

Stochastic excitation given by eight discrete RVs

We finally examine a higher-dimensional case:

u_t + 6uu_x + u_xxx = Σ_{i=1}^{8} σ_i ξ_i,   x ∈ R,   (3.70)

with the initial condition given in equation (3.58), where the stochastic excitation is

subject to eight i.i.d. discrete RVs of the same Binomial distribution Bino(5, 1/2).

We plot the convergence behavior of sparse grids and tensor product grids for problem (3.70) in figure 3.17. We see that as the number of dimensions increases, the rates of algebraic-like convergence of PCM on sparse grids and on tensor product grids both become slower. However, with higher-dimensional randomness, the sparse grids outperform the tensor product grids in terms of accuracy.


Figure 3.17: Convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.70), where t = 0.5, a = 0.5, x0 = −5, σi = 0.1, and ξi ∼ Bino(5, 1/2), i = 1, 2, ..., 8, with 300 Fourier collocation points on the spatial domain [−50, 50]. Orthogonal polynomials are generated by the Lanczos method.

3.5 Conclusion

In this chapter, we presented a multi-element probabilistic collocation method (ME-PCM) for discrete measures, focusing on the h-convergence with respect to the number of elements and on the convergence behavior of the associated sparse grids based on the one-dimensional Gauss quadrature rule. We first compared five methods of constructing orthogonal polynomials for discrete measures. From numerical experiments, we conclude that the Stieltjes, Modified Chebyshev, and Lanczos methods generate polynomials that exhibit the best orthogonality among the five methods.

In terms of computational cost, the Stieltjes method was the cheapest in the cases we examined.

The relation between h-convergence and the degree of exactness given by a certain quadrature rule was discussed for ME-PCM with respect to discrete measures.


The h-convergence rate O(N_es^{−(m+1)}) was demonstrated numerically by performing numerical integration of GENZ functions. For moderate-dimensional discrete random inputs, we have demonstrated that non-nested sparse grids based on the Gauss quadrature rule can also be effective. In lower dimensions, PCM on sparse grids is less efficient than on tensor product grids for integration of GENZ functions; in higher dimensions, however, sparse grids are more efficient than tensor product grids. In particular, it appears that the convergence behavior is not sensitive to hybrid (discrete/continuous) random inputs.

We have also considered the numerical solution of the reaction equation and the

KdV equation subject to stochastic excitation. For the one-dimensional discrete

random inputs, we have demonstrated the h- and p-convergence of ME-PCM. In

particular, an adaptive procedure was established using the local variance criterion.

In this work, we focus on the convergence behavior of ME-PCM for arbitrary

discrete measures by performing numerical experiments on given random variables.


Chapter Four

Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs


We propose an adaptive Wick-Malliavin (WM) expansion in terms of the Malliavin derivative of order Q to simplify the propagator of generalized Polynomial Chaos (gPC) of order P (a system of deterministic equations for the coefficients of gPC) and to control the error growth with respect to time. Specifically, we demonstrate the effectiveness of the WM method by solving a stochastic reaction equation and a Burgers equation with several discrete random variables (RVs). Exponential convergence is shown numerically with respect to Q when Q ≥ P − 1. We also analyze the computational complexity of the WM method and identify a significant speed-up with respect to gPC, especially in high dimensions.

4.1 Notation

Γ : probability measure of discrete RVs
ξ : discrete RV
ci(x, λ) : Charlier polynomials corresponding to the Pois(λ) distribution
δij : Kronecker delta
λ : mean of the Poisson distribution
D^p : Malliavin derivative of order p
⋄_p : Wick product of order p
Q : Wick-Malliavin order
P : order of polynomials in generalized Polynomial Chaos (gPC)
d : number of RVs in the input stochastic process in the SPDE

4.2 WM approximation

The WM propagator simplifies the gPC propagator by retaining fewer product terms from the polynomial nonlinearity. In this section, we present this simplification procedure and derive WM propagators for a stochastic reaction equation and a stochastic Burgers equation. The following procedure can be carried out for any discrete stochastic input with finite moments of all orders. To demonstrate the approximation procedure, we take a Poisson RV as an example.

4.2.1 WM series expansion

Given a discrete Poisson RV ξ ∼ Pois(λ) with measure Γ(x) = Σ_{k∈S} (e^{−λ} λ^k / k!) δ(x − k) on a finite support S = {0, 1, 2, ..., N},¹ there is an associated unique set of monic orthogonal polynomials [59], called Charlier polynomials, denoted as c_k(x; λ), k = 0, 1, 2, ..., such that:

Σ_{k∈S} (e^{−λ} λ^k / k!) c_m(k; λ) c_n(k; λ) = n! λ^n δ_mn.   (4.1)

The monic Charlier polynomials associated with Pois(λ) are defined as:

c_n(x; λ) = Σ_{k=0}^{n} (n choose k) (−λ)^{n−k} x(x − 1)...(x − (k − 1)),   n = 0, 1, 2, ...   (4.2)

Here (n choose k) is a binomial coefficient. The first few Charlier polynomials are

c_0(x; λ) = 1,   (4.3)
c_1(x; λ) = x − λ,   (4.4)
c_2(x; λ) = x² − 2λx − x + λ²,   (4.5)
c_3(x; λ) = x³ − 3λx² − 3x² + 3λ²x + 3λx + 2x − λ³.   (4.6)

¹ For numerical computation, we consider the support S to be from 0 to N instead of 0 to ∞, such that P(ξ = N) ≤ 10⁻³².
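Relation (4.1) is easy to check numerically. The sketch below evaluates monic Charlier polynomials with the standard three-term recurrence c_{n+1}(x; λ) = (x − n − λ) c_n(x; λ) − nλ c_{n−1}(x; λ) (the recurrence is a known identity, stated here as an assumption rather than taken from this chapter):

```python
import numpy as np
from math import lgamma

lam, N = 2.0, 80                         # Pois(lam) truncated so the tail is negligible
x = np.arange(N + 1, dtype=float)
w = np.exp(-lam + x * np.log(lam) - np.array([lgamma(k + 1.0) for k in x]))

def charlier(n):
    """Monic Charlier polynomial c_n(x; lam) on the support, via the recurrence."""
    c_prev, c = np.zeros_like(x), np.ones_like(x)
    for m in range(n):
        c_prev, c = c, (x - m - lam) * c - m * lam * c_prev
    return c

# Gram matrix: diagonal should be n!*lam^n, off-diagonal ~ 0, cf. equation (4.1)
G = np.array([[np.dot(w, charlier(m) * charlier(n)) for n in range(5)] for m in range(5)])
```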


Since {c_k(x; λ), k = 0, 1, 2, ...} belongs to the Askey scheme of hypergeometric polynomials [6], the product of any two polynomials can be expanded as [5]

c_m(x) c_n(x) = Σ_{k=0}^{m+n} a(k, m, n) c_k(x),   m, n = 0, 1, 2, ...,   (4.7)

where a(k, m, n) can be evaluated analytically² or numerically [24, 50, 59, 139, 141]. Numerically, we may generate a(k, m, n) by

a(k, m, n) = [ Σ_{j∈S} (e^{−λ} λ^j / j!) c_k(j; λ) c_m(j; λ) c_n(j; λ) ] / [ Σ_{j∈S} (e^{−λ} λ^j / j!) c_k(j; λ) c_k(j; λ) ],   k = 0, 1, 2, ..., m + n.   (4.8)

Analytically, a(k, m, n) is given by [95]

a(k, m, n) = [ Σ_{l=0}^{⌊(m+n−k)/2⌋} m! n! k! λ^{l+k} / ( l! (k−m+l)! (k−n+l)! (m+n−k−2l)! ) ] / ( k! λ^k ),   k = 0, 1, ..., m + n.   (4.9)

Here ⌊x⌋ is the floor function.

An alternative analytical method to generate a(k, m, n) in equation (4.8) is given in the Appendix.
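The two routes to a(k, m, n) can be cross-checked against the expansion (4.7) itself; a numpy sketch (reusing the standard Charlier recurrence as an assumption):

```python
import numpy as np
from math import lgamma

lam, N = 1.5, 80
x = np.arange(N + 1, dtype=float)
w = np.exp(-lam + x * np.log(lam) - np.array([lgamma(k + 1.0) for k in x]))

def charlier(n):
    # monic Charlier via c_{n+1} = (x - n - lam) c_n - n*lam*c_{n-1}
    c_prev, c = np.zeros_like(x), np.ones_like(x)
    for m in range(n):
        c_prev, c = c, (x - m - lam) * c - m * lam * c_prev
    return c

m, n = 2, 3
a = [np.dot(w, charlier(k) * charlier(m) * charlier(n)) / np.dot(w, charlier(k) ** 2)
     for k in range(m + n + 1)]                         # equation (4.8)

# equation (4.7): c_m c_n should equal sum_k a(k, m, n) c_k on the support
lhs = charlier(m) * charlier(n)
rhs = sum(ak * charlier(k) for ak, k in zip(a, range(m + n + 1)))
mismatch = np.max(np.abs(lhs - rhs)[:20])               # checked on small x to limit cancellation
```

Since the polynomials are monic, the leading coefficient a(m+n, m, n) must come out as 1, which is a convenient spot check.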

For convenience, let us denote a(m + n − 2p, m, n) by K_mnp as follows (for ξ ∼ Pois(λ)):

K_mnp = [ Σ_{l=0}^{⌊p⌋} m! n! (m+n−2p)! λ^{l+m+n−2p} / ( l! (n−2p+l)! (m−2p+l)! (2p−2l)! ) ] / ( (m+n−2p)! λ^{m+n−2p} ),   p = 0, 1/2, ..., (m+n)/2.   (4.10)

² For monic polynomials c_i(x), i = 0, 1, 2, ..., we can derive a(m+n, m, n) down to a(0, m, n) iteratively by matching the coefficients of x^{m+n} down to x⁰ on the left- and right-hand sides of equation (4.7), as an alternative to equation (4.8). We notice that a(m+n, m, n) = 1.


Then (4.7) can be rewritten as

c_m(x; λ) c_n(x; λ) = Σ_{p=0}^{(m+n)/2} K_mnp c_{m+n−2p}(x; λ),   (4.11)

where p takes half-integer values, p = 0, 1/2, 1, ..., (m+n)/2. Equation (4.11) is completely equivalent to equation (4.7).

Now let us define the Wick product '⋄' as [42, 83, 102, 110, 183]

c_m(x; λ) ⋄ c_n(x; λ) = c_{m+n}(x; λ),   m, n = 0, 1, 2, ...,   (4.12)

and define the Malliavin derivative 'D^p' as³ [110, 132]

D^p c_i(x; λ) = (i!/(i − p)!) c_{i−p}(x; λ),   i = 0, 1, 2, ...,   p = 0, 1/2, 1, ..., i.   (4.13)

We define 'D^{p1,...,pd}' as the product of the operators 'D^{p1}' to 'D^{pd}':

D^{p1,...,pd} c_{i1}(x; λ)...c_{id}(x; λ) = Π_{j=1}^{d} (i_j!/(i_j − p_j)!) c_{i_j − p_j}(x; λ),   i_j = 0, 1, 2, ...,   p_j = 0, 1/2, 1, ..., i_j.   (4.14)

We define the weighted Wick product '⋄_p' in terms of the Wick product as

c_m ⋄_p c_n = [p! m! n! / ((m+p)! (n+p)!)] K_{m+p,n+p,p} c_m ⋄ c_n,   (4.15)

³ In this definition, p takes half-integer values in order to balance equation (4.17) with equation (4.11). Although the index i − p in the definition of the Malliavin derivative may take half-integer values, the Malliavin derivative always appears together with the weighted Wick product, so after taking the Malliavin derivative and the Wick product the index of the resulting polynomial is always an integer.


and define '\diamond_{p_1,\ldots,p_d}' as

(c_{m_1} \cdots c_{m_d}) \diamond_{p_1,\ldots,p_d} (c_{n_1} \cdots c_{n_d}) = \prod_{j=1}^{d} \frac{p_j!\,m_j!\,n_j!}{(m_j+p_j)!\,(n_j+p_j)!}\, K_{m_j+p_j,\,n_j+p_j,\,p_j}\; c_{m_j} \diamond c_{n_j}. \quad (4.16)

Therefore, (4.11) can be rewritten as

c_m(x;\lambda)\, c_n(x;\lambda) = \sum_{p=0}^{(m+n)/2} \frac{D_p c_m \diamond_p D_p c_n}{p!}. \quad (4.17)
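As a concrete check of the linearization (4.11) and its Wick-Malliavin form (4.17), take m = n = 1 (a worked example added here for illustration). With c_0 = 1, c_1 = x - λ and the recurrence c_2 = (x - 1 - λ)c_1 - λ c_0:

```latex
c_1(x;\lambda)^2 = (x-\lambda)^2
= \underbrace{x^2-(2\lambda+1)x+\lambda^2}_{c_2(x;\lambda)}
+ \underbrace{(x-\lambda)}_{c_1(x;\lambda)}
+ \lambda \underbrace{1}_{c_0(x;\lambda)},
```

so K_{1,1,0} = 1, K_{1,1,1/2} = 1 and K_{1,1,1} = λ, in agreement with (4.10); each term is reproduced by the corresponding summand D_p c_1 \diamond_p D_p c_1 / p! of (4.17) for p = 0, 1/2, 1.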

We note that the definition of the weighted Wick product (4.15) depends on the measure Γ. Assume that we are given two random fields u and v on the same probability space (S, B(S), Γ), with expansions u = \sum_{i=0}^{\infty} u_i c_i and v = \sum_{i=0}^{\infty} v_i c_i. Then, we can expand uv as

uv = \sum_{p=0}^{\infty} \frac{D_p u \diamond_p D_p v}{p!}, \quad (4.18)

(the index p takes half-integer values), if we define

D_p u = \sum_{i=0}^{\infty} u_i\, D_p c_i. \quad (4.19)

Now let us introduce a non-negative half integer Q ∈ {0, 1/2, 1, ...} as the Wick-Malliavin order⁴; hence (4.18) can be approximated by the following Wick-Malliavin expansion

uv \approx \sum_{p=0}^{Q} \frac{D_p u \diamond_p D_p v}{p!}, \quad Q = 0, 1/2, 1, \ldots, \quad (4.20)

where p again takes half-integer values.

⁴ As the upper limit of the index p in equation (4.20), Q takes half-integer values, in the same way as p in equation (4.17) takes half-integer values from 0 to (m+n)/2.

Now let us assume η to be an RV with a discrete measure having finite moments of all orders on a complete probability space (S, B(S), Γ). There is an associated unique


set of monic orthogonal polynomials with respect to this measure [59], denoted P_i(η), i = 0, 1, 2, ..., for η ∈ S, such that

\int_S P_m(\eta)\, P_n(\eta)\, d\Gamma(\eta) \;\begin{cases} > 0 & \text{if } m = n, \\ = 0 & \text{if } m \neq n. \end{cases} \quad (4.21)

Following the same procedure from equation (4.7) to equation (4.17), we can expand the product of u' = \sum_{i=0}^{\infty} u'_i P_i and v' = \sum_{i=0}^{\infty} v'_i P_i as

u'v' \approx \sum_{p=0}^{Q} \frac{D_p u' \diamond_p D_p v'}{p!}, \quad Q = 0, 1/2, 1, \ldots. \quad (4.22)

4.2.2 WM propagators

In this section, we will study a stochastic reaction equation and a stochastic Burgers

equation, and derive their Wick-Malliavin propagators.

Reaction equation

Let us consider the following reaction equation with a random coefficient:

\frac{dy}{dt} = -\sigma\, k(t, \xi_1, \xi_2, \ldots, \xi_d)\, y(t;\omega), \quad y(0;\omega) = y_0, \quad (4.23)

where ξ_1, ..., ξ_d ∼ Pois(λ) are independent identically distributed (i.i.d.) RVs, and k(t, ξ_1, ..., ξ_d) = \sum_{i_1,\ldots,i_d=0}^{\infty} a_{i_1,\ldots,i_d}(t)\, c_{i_1}(\xi_1;\lambda) \cdots c_{i_d}(\xi_d;\lambda)⁵; σ controls the variance of the reaction coefficient. Also, c_k(ξ;λ), k = 0, 1, 2, ..., are monic Charlier polynomials associated with the Poisson distribution with mean λ [46, 89, 152, 169].

5 Such a k(t, ξ_1, ..., ξ_d) is meaningful to consider because many stochastic processes have series representations, e.g., the Karhunen-Loeve expansion for Gaussian processes [92, 107] and the shot noise expansion for Levy pure jump processes [25, 153, 154].

Remark: Here we present the WM approximation method for the Poisson distribution; however, the method is not restricted to the Poisson distribution, since we can generate orthogonal polynomials with respect to other discrete measures [24, 50, 59, 139], at least for the Wiener-Askey family of polynomials [5, 6].
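The remark above can be made concrete with the Stieltjes procedure, which generates the monic orthogonal polynomials of an arbitrary discrete measure from the three-term recurrence π_{k+1}(x) = (x - α_k)π_k(x) - β_k π_{k-1}(x), with α_k = ⟨xπ_k, π_k⟩/⟨π_k, π_k⟩ and β_k = ⟨π_k, π_k⟩/⟨π_{k-1}, π_{k-1}⟩. A minimal Python sketch (illustrative, not the thesis code), applied to the Binomial(N = 10, p = 1/2) measure that reappears in the hybrid example of section 4.3.2:

```python
from math import comb

def stieltjes(support, weights, n_polys):
    """Values of the monic orthogonal polynomials pi_0, ..., pi_{n_polys-1}
    on the support points of a discrete measure (Stieltjes procedure)."""
    dot = lambda f, g: sum(w * a * b for w, a, b in zip(weights, f, g))
    polys = [[1.0] * len(support)]                    # pi_0 = 1
    pi_prev, pi = None, polys[0]
    for k in range(n_polys - 1):
        alpha = dot([x * v for x, v in zip(support, pi)], pi) / dot(pi, pi)
        nxt = [(x - alpha) * v for x, v in zip(support, pi)]
        if k > 0:
            beta = dot(pi, pi) / dot(pi_prev, pi_prev)
            nxt = [nv - beta * u for nv, u in zip(nxt, pi_prev)]
        pi_prev, pi = pi, nxt
        polys.append(nxt)
    return polys

# Binomial(N, p) measure on {0, 1, ..., N}
N, p = 10, 0.5
support = list(range(N + 1))
weights = [comb(N, x) * p**x * (1 - p)**(N - x) for x in support]

polys = stieltjes(support, weights, 5)
dot = lambda f, g: sum(w * a * b for w, a, b in zip(weights, f, g))
for m in range(5):
    for n in range(5):
        ip = dot(polys[m], polys[n])
        if m == n:
            assert ip > 0.0                           # diagonal of (4.21)
        else:                                         # off-diagonal of (4.21)
            assert abs(ip) < 1e-9 * (dot(polys[m], polys[m]) *
                                     dot(polys[n], polys[n])) ** 0.5
print("monic orthogonal polynomials for Bino(10, 1/2) constructed")
```

The first nontrivial polynomial comes out as π_1(x) = x - Np, and the orthogonality relations (4.21) hold by construction up to rounding.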

By (4.20), the WM approximation to (4.23) is

\frac{dy}{dt} \approx -\sigma \sum_{p_1,\ldots,p_d=0}^{Q_1,\ldots,Q_d} \frac{D_{p_1,\ldots,p_d} k(t, \xi_1, \ldots, \xi_d) \;\diamond_{p_1,\ldots,p_d}\; D_{p_1,\ldots,p_d} y}{p_1! \cdots p_d!}, \quad y(0;\omega) = y_0. \quad (4.24)

Here Q_1, ..., Q_d are the Wick-Malliavin orders for the RVs ξ_1, ..., ξ_d, respectively. We expand the solution to (4.23) in a finite-dimensional series as

y(t;\omega) = \sum_{j_1,\ldots,j_d=0}^{P_1,\ldots,P_d} y_{j_1,\ldots,j_d}(t)\, c_{j_1}(\xi_1) \cdots c_{j_d}(\xi_d), \quad (4.25)

where P_1, ..., P_d are the polynomial chaos expansion orders for the RVs ξ_1, ..., ξ_d, respectively.

By substituting (4.25) into (4.24) and performing a Galerkin projection onto c_{i_1}(\xi_1) \cdots c_{i_d}(\xi_d),

\langle f(\xi_1, \ldots, \xi_d)\, c_{i_1}(\xi_1) \cdots c_{i_d}(\xi_d) \rangle = \int_{S_1} d\Gamma_1(\xi_1) \cdots \int_{S_d} d\Gamma_d(\xi_d)\; f\, c_{i_1}(\xi_1) \cdots c_{i_d}(\xi_d), \quad (4.26)

(S_i and Γ_i are the support and the measure of ξ_i), we obtain the Wick-Malliavin propagator for problem (4.23) as

\frac{dy_{i_1 \ldots i_d}(t)}{dt} = -\sigma \sum_{l_1,\ldots,l_d=0}^{P_1,\ldots,P_d} \sum_{m_1,\ldots,m_d=0}^{Q_1,\ldots,Q_d} K_{l_1,\,2m_1+i_1-l_1,\,m_1} \cdots K_{l_d,\,2m_d+i_d-l_d,\,m_d}\; a_{l_1 \ldots l_d}(t)\; y_{2m_1+i_1-l_1,\,\ldots,\,2m_d+i_d-l_d}, \quad y_{i_1 \ldots i_d}(0) = y_0\, \delta_{i_1,0}\, \delta_{i_2,0} \cdots \delta_{i_d,0}, \quad (4.27)

for i_1 = 0, 1, \ldots, P_1, \; \ldots, \; i_d = 0, 1, \ldots, P_d.
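For d = 1 and constant coefficients a_l, the propagator (4.27) is a linear ODE system dy_i/dt = -σ Σ_l Σ_p K_{l,2p+i-l,p} a_l y_{2p+i-l} that fits in a few lines of code. The following Python sketch (an illustration added here; the thesis computations used Matlab) assembles this system, integrates it with classical RK4, and compares E[y²] = Σ_i y_i² i! λ^i at T = 1 against the exact solution y = y_0 e^{-σ k(ξ) T} evaluated over the Poisson support, with the parameters of the left plot of figure 4.1:

```python
import numpy as np
from math import factorial, exp

lam, sigma, y0, T, P, dt = 0.5, 0.1, 1.0, 1.0, 10, 1e-3
a = [1 / 2, 1 / 6, 1 / 24]   # k(xi) = c0/2! + c1/3! + c2/4!, figure 4.1 (left)

def charlier(n, x):
    """Monic Charlier polynomial via its three-term recurrence."""
    c_prev, c = 1.0, x - lam
    if n == 0:
        return c_prev
    for k in range(1, n):
        c_prev, c = c, (x - k - lam) * c - k * lam * c_prev
    return c

def K(m, n, p2):             # K_{mnp} of (4.10), with p2 = 2p
    s = 0.0
    for l in range(p2 // 2 + 1):
        if min(n - p2 + l, m - p2 + l) < 0:
            continue
        s += (factorial(m) * factorial(n) * lam**l /
              (factorial(l) * factorial(n - p2 + l) *
               factorial(m - p2 + l) * factorial(p2 - 2 * l)))
    return s

def second_moment(Q2):       # Q2 = 2Q (half-integer WM order Q)
    A = np.zeros((P + 1, P + 1))       # propagator matrix of (4.27), d = 1
    for i in range(P + 1):
        for l, al in enumerate(a):
            for p2 in range(Q2 + 1):
                j = p2 + i - l          # index of y_{2p+i-l}
                if 0 <= j <= P:
                    A[i, j] -= sigma * al * K(l, j, p2)
    y = np.zeros(P + 1)
    y[0] = y0
    for _ in range(round(T / dt)):      # classical RK4 in time
        k1 = A @ y
        k2 = A @ (y + dt / 2 * k1)
        k3 = A @ (y + dt / 2 * k2)
        k4 = A @ (y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return sum(y[i]**2 * factorial(i) * lam**i for i in range(P + 1))

# reference: E[y^2] from the exact solution y = y0 exp(-sigma k(xi) T)
kxi = lambda x: sum(al * charlier(l, x) for l, al in enumerate(a))
pmf = lambda n: exp(-lam) * lam**n / factorial(n)
exact = sum(pmf(n) * (y0 * exp(-sigma * kxi(n) * T))**2 for n in range(60))

errs = [abs(second_moment(2 * Q) - exact) / exact for Q in (0, 1, 2)]
print(errs)
```

The three relative errors shrink by orders of magnitude as Q grows from 0 to 2, mirroring the behavior reported in figure 4.1.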


Burgers equation

Let us now consider the following Burgers equation with a random forcing term:

u_t + u u_x = \nu u_{xx} + \sigma \sum_{j=1}^{d} c_1(\xi_j)\, \psi_j(x,t), \quad x \in [-\pi, \pi], \quad (4.28)

with initial condition

u(x,0) = 1 - \sin(x) \quad (4.29)

and periodic boundary conditions. Here ξ_1, ..., ξ_d ∼ Pois(λ) are i.i.d. RVs, and σ is a constant that controls the magnitude of the forcing. The WM approximation of (4.28) is

u_t + \sum_{p_1,\ldots,p_d=0}^{Q_1,\ldots,Q_d} \frac{1}{p_1! \cdots p_d!}\; D_{p_1 \ldots p_d} u \;\diamond_{p_1,\ldots,p_d}\; D_{p_1 \ldots p_d} u_x \approx \nu u_{xx} + \sigma \sum_{j=1}^{d} c_1(\xi_j)\, \psi_j(x,t). \quad (4.30)

If we expand the solution in a finite-dimensional series as

u(x,t;\xi_1,\ldots,\xi_d) = \sum_{k_1,\ldots,k_d=0}^{P_1,\ldots,P_d} u_{k_1,\ldots,k_d}(x,t)\, c_{k_1}(\xi_1;\lambda) \cdots c_{k_d}(\xi_d;\lambda), \quad (4.31)

then by substituting (4.31) into (4.30) and performing a Galerkin projection onto c_{k_1}(\xi_1) \cdots c_{k_d}(\xi_d), we derive the WM propagator for problem (4.28) as

\partial_t u_{k_1 \ldots k_d}(x,t) + \sum_{p_1,\ldots,p_d=0}^{Q_1,\ldots,Q_d} \sum_{m_1,\ldots,m_d=0}^{P_1,\ldots,P_d} K_{m_1,\,k_1+2p_1-m_1,\,p_1} \cdots K_{m_d,\,k_d+2p_d-m_d,\,p_d}\; u_{m_1 \ldots m_d}\, \partial_x u_{k_1+2p_1-m_1,\,\ldots,\,k_d+2p_d-m_d} = \nu\, \partial_x^2 u_{k_1 \ldots k_d} + \sigma\,(\delta_{1,k_1}\delta_{0,k_2} \cdots \delta_{0,k_d}\,\psi_1 + \ldots + \delta_{0,k_1}\delta_{0,k_2} \cdots \delta_{1,k_d}\,\psi_d), \quad (4.32)

for k_1, ..., k_d = 0, 1, ..., P, with the restriction 0 ≤ k_i + 2p_i - m_i ≤ P, for i = 1, ..., d.


The initial conditions (I.C.) and boundary conditions (B.C.) are given by

u_{0,0,\ldots,0}(x,0) = u(x,0) = 1 - \sin(x) \quad \text{(I.C.)},
u_{k_1,\ldots,k_d}(x,0) = 0 \quad \text{if } (k_1,\ldots,k_d) \neq (0,\ldots,0) \quad \text{(I.C.)},
u_{k_1 \ldots k_d}(-\pi, t) = u_{k_1 \ldots k_d}(\pi, t) \quad \text{(periodic B.C. on } [-\pi,\pi]). \quad (4.33)

4.3 Moment statistics by WM approximation of stochastic reaction equations

In this section, we will provide numerical results for solving the reaction and Burgers equations with different discrete random inputs by the WM method. We will compare the computational complexity of WM and gPC for the Burgers equation with multiple RVs.

4.3.1 Reaction equation with one RV

In figure 4.1, we show results from computing the WM propagator given in equation (4.27) for the reaction equation with one Poisson RV (d = 1 in equation (4.23)). We plot the errors of the second moments at final times T, with respect to different WM expansion orders Q⁶. The polynomial expansion order P in (4.25) was chosen sufficiently large in order to mainly examine the convergence with respect to Q. We used the fourth-order Runge-Kutta method (RK4) to solve (4.27) with sufficiently small time steps. The error of the second moment at final time T is defined as:

l2err(T) = \left| \frac{E[y_{ex}^2(T;\omega)] - E[y_{num}^2(T;\omega)]}{E[y_{ex}^2(T;\omega)]} \right|. \quad (4.34)

⁶ In figure 4.1 we show errors for Q taking integer values because the error line for Q = k is almost the same as that for Q = k + 1/2. We observe similar behavior in figure 4.6.

Figure 4.1: Reaction equation with one Poisson RV ξ ∼ Pois(λ) (d = 1): errors versus final time T, defined in (4.34), for different WM orders Q in equation (4.27), with polynomial order P = 10, y_0 = 1, λ = 0.5. We used the RK4 scheme with time step dt = 1e-4; k(ξ) = c_0(ξ;λ)/2! + c_1(ξ;λ)/3! + c_2(ξ;λ)/4!, σ = 0.1 (left); k(ξ) = c_0(ξ;λ)/0! + c_1(ξ;λ)/3! + c_2(ξ;λ)/6!, σ = 1 (right).

In figure 4.1, with a fixed polynomial order P, we take k(ξ) = a_0(t)c_0(ξ;λ) + a_1(t)c_1(ξ;λ) + a_2(t)c_2(ξ;λ), so Q = 2 is the smallest WM order for which the truncation (4.20) makes equation (4.24) identical to equation (4.23) (when Q ≥ 2, the WM propagator is exactly the same as the gPC propagator). We observe in figure 4.1 that when Q increases by one, the error improves by at least one order of magnitude when σ = 0.1, and by four orders of magnitude when σ = 1. Therefore, the WM method can achieve the same accuracy as gPC with less computational cost. In gPC, the polynomial order P serves as a resolution parameter for the stochastic system. In the WM method, for each P we may further refine the system with another resolution parameter, Q. We observe that the right plot in figure 4.1 has a dip in the error lines corresponding to Q = 1 and 2. When σ is larger, the solution of equation (4.23) decays faster, and hence this downward trend in the error; however, with polynomial order P we ignore the terms in the sum (4.25) of polynomial order larger than P, which increases the error with respect to time. Because of this balance of decreasing and increasing errors, the error first goes down and then up in the right plot of figure 4.1. On the left plot of figure 4.1 we do not observe this behavior because σ is small and the solution decays more slowly, so the error mainly increases with time. We can evaluate the coefficients K_{mnp} in equation (4.10) offline, and we only compute the WM propagator in equation (4.24) online. We consider the number of terms needed to evaluate k(t,ξ)y(t;ω) of equation (4.23) in the WM propagator (4.24) as the primary contribution to the computational complexity. We take the online CPU time in table 4.1 to be the CPU time needed to evaluate the right-hand side of equation (4.24), excluding the time to compute the coefficients K_{mnp} in equation (4.10). In table 4.1 we compare the complexity and the corresponding computational time between gPC of different orders P and WM with fixed orders P = 3, Q = 2 for the reaction equation (4.23) with one RV (with the same parameters as on the left of figure 4.1). Notice that the l2err from WM with P = 3, Q = 2 is 1.5e-8 and the l2err from gPC with P = 10 is 1.4e-8 (almost the same); however, the online CPU time for gPC is 2.875 times greater than that of WM.

Table 4.1: For gPC with different orders P and WM with fixed orders P = 3, Q = 2 in the reaction equation (4.23) with one Poisson RV (λ = 0.5, y_0 = 1, k(ξ) = c_0(ξ;λ)/2! + c_1(ξ;λ)/3! + c_2(ξ;λ)/4!, σ = 0.1, RK4 scheme with time step dt = 1e-4), we compare: (1) the computational complexity ratio for evaluating k(t,ξ)y(t;ω) between gPC and WM (upper row); (2) the CPU time ratio for computing k(t,ξ)y(t;ω) between gPC and WM (lower row). We simulated in Matlab on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz.

gPC order P                  | P = 4  | P = 6  | P = 8  | P = 10
Ratio of complexity (gPC/WM) | 1.4054 | 2.2162 | 3.027  | 3.8378
Ratio of CPU time (gPC/WM)   | 1.2679 | 1.8036 | 2.3393 | 2.875


4.3.2 Reaction equation with multiple RVs

Now let us compute (4.27) with five i.i.d. Poisson RVs with mean λ (d = 5). We solve problem (4.23) assuming a new model, where k(ξ_1, ξ_2, ..., ξ_5, t) = \sum_{i=1}^{5} \cos(it)\, c_1(\xi_i). The WM propagator in this problem was solved by the RK2 scheme. For a fixed polynomial expansion order P, in figure 4.2 we plot the error defined in (4.34) with respect to time and for different WM orders Q.

Figure 4.2: Reaction equation with five Poisson RVs ξ_1,...,5 ∼ Pois(λ) (d = 5): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.5, y_0 = 1, polynomial order P = 4, RK2 scheme with time step dt = 1e-3, and k(ξ_1, ξ_2, ..., ξ_5, t) = \sum_{i=1}^{5} \cos(it)\, c_1(\xi_i) in equation (4.23).

We observe in figure 4.2 that by adding only one more Malliavin derivative order Q, the error is improved by two orders of magnitude at T = 0.5. When Q = 1, the WM propagator has a much simpler form than the gPC propagator. Figure 4.2 also demonstrates the capability of the WM method to compute SPDEs with multiple RVs. Notice that Levy processes have different types of series expansions in terms of independent RVs; therefore figure 4.2 represents a first step towards dealing with nonlinear SPDEs driven by Levy processes (including Gaussian processes and pure jump processes that admit series representations).


Next let us compute (4.27) with one Poisson RV (ξ_1) with mean λ and one Binomial RV (ξ_2) with number of trials N and success probability p. We solve problem (4.23) assuming k(ξ_1, ξ_2) = c_1(ξ_1) k_1(ξ_2), where k_1(ξ_2) is the first-order polynomial orthogonal with respect to the Binomial distribution of ξ_2. We derive the coefficients K_{mnp} in equation (4.11) both for the Poisson distribution and for the Binomial distribution. The WM propagator in this case is still given by equation (4.27) with d = 2, except that the corresponding K_{mnp} for ξ_2 are replaced by those generated from the Binomial distribution. For a fixed polynomial order P, in figure 4.3 we plot the error defined in (4.34) with respect to time and for different WM orders Q.

Figure 4.3: Reaction equation with one Poisson RV ξ_1 ∼ Pois(λ) and one Binomial RV ξ_2 ∼ Bino(N, p) (d = 2): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.1, N = 10, p = 1/2, y_0 = 1, polynomial order P = 10, RK4 scheme with time step dt = 1e-4, and k(ξ_1, ξ_2, t) = c_1(ξ_1) k_1(ξ_2) in equation (4.23).

We observe in figure 4.3 that by adding one more Malliavin derivative order Q, the error is improved by ten orders of magnitude at T = 1. Figure 4.3 also demonstrates the capability of computing SPDEs with multiple RVs of different distributions (hybrid type).


4.4 Moment statistics by WM approximation of stochastic Burgers equations

Now let us compute the WM propagator for a Burgers equation with one Poisson RV in equation (4.32). We solved the WM propagator by a second-order implicit-explicit (IMEX) time-splitting scheme⁷. For the spatial discretization we used the Fourier collocation method. The reference solution was established by running the Burgers equation with ξ taking all of its possible values⁸. In this problem we define the L² norm of the error of the second moments, for a certain final time T, as:

l2u2(T) = \frac{\| E[u_{num}^2(x,T;\xi)] - E[u_{ex}^2(x,T;\xi)] \|_{L^2([-\pi,\pi])}}{\| E[u_{ex}^2(x,T;\xi)] \|_{L^2([-\pi,\pi])}}. \quad (4.35)

4.4.1 Burgers equation with one RV

In figure 4.4, we observe monotonic convergence with respect to Q; that is, by increasing the WM order Q by one, the error decreases by five to six orders of magnitude at T = 1. If we used gPC in this problem, we would calculate (P+1)³ terms of the form \sum_{i,j=0}^{P} u_i(x)\, \partial u_j/\partial x across the (P+1) equations of the gPC propagator (343 terms in this problem). However, with the WM method, in order to reach good accuracy, say 1e-12, as shown in figure 4.4, we consider many fewer terms arising from the nonlinear term u\, \partial u/\partial x in the Burgers equation by taking only Q = 3 (231 terms in this problem).

⁷ We used the second-order RK2 scheme for the nonlinear terms and the forcing term, and the Crank-Nicolson scheme for the diffusion term.

⁸ Although a Poisson RV has an infinite number of points in its support, we only consider the points with probability greater than 1e-16.


Figure 4.4: Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ_1(x,t) = 1): l2u2(T) error defined in (4.35) versus time, with respect to different WM orders Q. Here we take in equation (4.32): polynomial expansion order P = 6, λ = 1, ν = 1/2, σ = 0.1, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 2e-4, and 100 Fourier collocation points on [-π, π].

Figure 4.5: P-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ_1(x,t) = 1): errors defined in equation (4.35) versus polynomial expansion order P, for different WM orders Q, and by the probabilistic collocation method (PCM) with P+1 points, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 5e-4, 100 Fourier collocation points on [-π, π], σ = 0.5 (left), and σ = 1 (right).


Figure 4.6: Q-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ_1(x,t) = 1): errors defined in equation (4.35) versus WM order Q, for different polynomial orders P, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 5e-4, 100 Fourier collocation points on [-π, π], σ = 0.5 (left), and σ = 1 (right). The dashed lines serve as a reference for the convergence rate.

In figure 4.5, we plot the error defined in equation (4.35) with respect to the polynomial expansion order P, for different WM orders Q. We also compare it with the error of the probabilistic collocation method (PCM) with (P+1) points⁹. We observe that, for a fixed polynomial order P in gPC, the smallest Q that matches the error of the WM propagator to the same order as PCM is Q = P-1. For example, in figure 4.5 when P = 2, the first error line by WM that touches the black solid line by PCM is the one corresponding to Q = 1. Although this observation is only empirical, it allows us to compare the computational complexity between gPC and WM at the same level of accuracy; i.e., later we compare the computational cost between gPC of polynomial order P and WM of polynomial order P with WM order Q = P-1. We also observe from figure 4.5 the smallest value of Q needed to model the stochastic Burgers equation with one discrete RV for a specific polynomial order P, in order to achieve the same accuracy as gPC of polynomial order P. When Q < P-1, we see from figure 4.5 that even if we increase P, the convergence rate with respect to P will be slower than the P-convergence of gPC.

⁹ gPC with polynomial order P has the same magnitude of error as PCM implemented with (P+1) quadrature points; therefore, by plotting PCM with (P+1) quadrature points against WM with polynomial order P, we are comparing gPC with WM at the same polynomial order P.


In figure 4.6, we investigate the Q-convergence of the WM approximation by plotting the error defined in equation (4.35) with respect to the WM order Q, for different polynomial expansion orders P. The first observation is that when Q increases from an integer k to the next larger half integer k + 1/2, the error is not prominently improved, but the error is clearly improved when Q increases from an integer k to the integer k+1. This is very similar to the phenomenon in spectral methods where the magnitude of the error oscillates between even and odd orders. The second observation is that the choice Q = P-1 is optimal for the WM approximation, because in figure 4.6 the error remains at the same magnitude when Q takes values larger than P-1. For example, consider the error line in the left plot of figure 4.6 for P = 2: the error decreases while Q is smaller than P-1 = 1; however, when Q takes values such as 2 or 3, the error remains at the same magnitude. This is an important observation that allows us to save computational time when simulating nonlinear SPDEs; i.e., we may use smaller values of P for a certain Q and still obtain the maximum possible accuracy.

From figures 4.5 and 4.6, we conclude that in order to model a stochastic Burgers equation with one discrete RV and achieve the same P-convergence rate as gPC, we may take Q = P-1 in the WM method, at much lower computational cost than gPC.

4.4.2 Burgers equation with multiple RVs

Now let us compute (4.28) with three Poisson RVs with mean λ (d = 3). We solve problem (4.28) with the random forcing term σ\sum_{j=1}^{d} c_1(\xi_j)\,\psi_j(x,t) = σ\sum_{j=1}^{3} c_1(\xi_j)\cos(0.1jt). We solved the WM propagator (4.32) by the second-order IMEX time-splitting scheme (RK2/Crank-Nicolson). For a fixed polynomial expansion order P, in figure 4.7 we plot the error defined in (4.35) with respect to time, for different WM orders Q. Here we take P_1 = P_2 = P_3 = P = 2.

Figure 4.7: Burgers equation with three Poisson RVs ξ_1,2,3 ∼ Pois(λ) (d = 3): error defined in equation (4.35) with respect to time, for different WM orders Q, with parameters: λ = 0.1, σ = 0.1, y_0 = 1, ν = 1/100, polynomial order P = 2, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 2.5e-4.

We observe in figure 4.7 that the error is not prominently decreased when we increase the WM order Q for only one or two of the RVs, but the error is greatly decreased when we increase Q for all three RVs. In this numerical experiment we also computed the case Q_1 = Q_2 = Q_3 = 1/2 and, similarly to figure 4.6, the corresponding error line almost overlapped with the error line for Q_1 = Q_2 = Q_3 = 0 in figure 4.7. This suggests that when we model stochastic Burgers equations with multiple discrete RVs, the accuracy in some cases will not be greatly improved by increasing the WM order Q by 1/2. Therefore, in order to save computational cost in the WM method for Burgers equations with multiple discrete RVs, we may use integer values of Q for each RV instead of half-integer values.


Figure 4.8: Reaction equation with P-adaptivity and two Poisson RVs ξ_1,2 ∼ Pois(λ) (d = 2): error defined in (4.34) from computing the WM propagator in equation (4.27) with respect to time by the RK2 method with: fixed WM order Q = 1, y_0 = 1, ξ_1,2 ∼ Pois(1), a(ξ_1, ξ_2, t) = c_1(ξ_1;λ) c_1(ξ_2;λ), for fixed polynomial order P (dashed lines) and for varied polynomial order P (solid lines), for σ = 0.1 (left) and σ = 1 (right). Adaptive criterion values are: l2err(t) ≤ 1e-8 (left) and l2err(t) ≤ 1e-6 (right).

4.5 Adaptive WM method

Now let us control the error growth in time under a certain pre-specified accuracy. We will show that it is possible to keep the error below a certain threshold by increasing the gPC order P and the WM order Q (P-Q refinement). Given a pre-specified adaptive criterion value, we increase the polynomial order P or the WM order Q whenever the absolute value of the error exceeds the adaptive criterion value (P-adaptivity and Q-adaptivity).
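This refinement loop can be sketched generically. The Python skeleton below is an illustration added here, not thesis code: `advance` and `indicator` are hypothetical placeholders standing in for one time step of a WM propagator and for an error monitor such as l2err in (4.34); the toy instantiation merely truncates a Taylor series so that the loop is runnable end to end.

```python
from math import exp, factorial

def adapt(advance, indicator, coeffs, t_end, dt, tol, order0, order_max):
    """Generic order-adaptivity loop: after each time step, raise the order
    whenever the monitored error exceeds the adaptive criterion value tol
    (P-adaptivity; Q-adaptivity is the same loop with Q as the order)."""
    order, t, history = order0, 0.0, []
    while t < t_end - 1e-12:
        coeffs = advance(coeffs, order, dt)   # one time step of the propagator
        t += dt
        err = indicator(coeffs, order, t)
        while err > tol and order < order_max:
            order += 1                        # refine: embed in a larger basis
            coeffs = coeffs + [0.0]           # new highest-order coefficient = 0
            err = indicator(coeffs, order, t)
        history.append((t, order, err))
    return order, history

# Toy stand-in (not the thesis solver): the order truncates the Taylor series
# of y(t) = exp(-t); the indicator plays the role of l2err in (4.34).
advance = lambda coeffs, order, dt: coeffs
indicator = lambda coeffs, order, t: abs(
    sum((-t)**i / factorial(i) for i in range(order + 1)) - exp(-t)) / exp(-t)

P_final, history = adapt(advance, indicator, [1.0], 2.0, 0.01, 1e-8, 3, 30)
assert all(e <= 1e-8 for _, _, e in history)  # error held below the criterion
assert P_final > 3                            # the order was raised over time
```

As in figure 4.8, the order stays small at early times and is raised only when the monitored error approaches the criterion value.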

In figure 4.8, we address the long-term integration issue of gPC by computing the WM propagator of the reaction equation (4.27) with two Poisson RVs with mean λ, for a fixed Q. We plot the error defined in equation (4.34) with respect to time, and we adaptively increase P to keep the error under the indicated adaptive criterion. We observe that increasing the polynomial order P is an effective way to control the error as time progresses for SPDEs with multiple RVs. Besides addressing the long-term integration problem, varying P also allows us to use a smaller polynomial order P at early times, hence expending less computational cost. In gPC, we may


Figure 4.9: Burgers equation with P-Q-adaptivity and one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ_1(x,t) = 1): error defined in equation (4.35) from computing the WM propagator in equation (4.32) with the IMEX (RK2/Crank-Nicolson) method (λ = 1, ν = 1/2, time step dt = 2e-4). Fixed polynomial order P = 6, σ = 1, and Q varied (left); fixed WM order Q = 3, σ = 0.1, and P varied (right). The adaptive criterion value is l2u2(T) ≤ 1e-10 (left and right).

also keep the error lower than a given value by increasing P; however, increasing P in the gPC propagator costs much more than increasing P in the WM propagator with a small Q.

In figure 4.9 we compute the WM propagator of the Burgers equation (4.32) with one Poisson RV with mean λ = 1. We plot the error defined in (4.35) with respect to time, in both cases: fixing P and increasing Q, or fixing Q and increasing P, to keep the error under the indicated adaptive criterion. We observe that increasing the WM expansion order Q is also an effective way to control the error as time progresses.

4.6 Computational complexity

We demonstrate next that the WM propagator is more cost-effective than the gPC propagator in evaluating the statistics of the solution. Because the computational complexity depends on the form of the equation itself, we analyze it case by case. First let us consider the Burgers equation as an example to compare the complexity of WM and gPC.

4.6.1 Burgers equation with one RV

To compare the WM and gPC methods for the Burgers equation with one RV (ξ ∼ Pois(λ)), we simply write the gPC and WM propagators separately and compare how they differ from each other. We consider the equation:

u_t + u u_x = \nu u_{xx} + \sigma c_1(\xi;\lambda), \quad x \in [-\pi, \pi]. \quad (4.36)

The gPC propagator for this problem is:

\frac{\partial u_k}{\partial t} + \sum_{m,n=0}^{P} u_m \frac{\partial u_n}{\partial x} \langle c_m c_n c_k \rangle = \nu \frac{\partial^2 u_k}{\partial x^2} + \sigma \delta_{1k}, \quad k = 0, 1, \ldots, P, \quad (4.37)

where \langle c_m c_n c_k \rangle = \int_S d\Gamma(\xi)\, c_k(\xi)\, c_m(\xi)\, c_n(\xi).

The WM propagator for this problem is:

\frac{\partial u_k}{\partial t} + \sum_{p=0}^{Q} \sum_{i=0}^{P} u_i\, \frac{\partial u_{k+2p-i}}{\partial x}\, K_{i,\,k+2p-i,\,p} = \nu \frac{\partial^2 u_k}{\partial x^2} + \sigma \delta_{1k}, \quad k = 0, 1, \ldots, P. \quad (4.38)

The only difference between the gPC and WM propagators is between the term \sum_{m,n=0}^{P} u_m (\partial u_n/\partial x) \langle c_m c_n c_k \rangle in gPC and the term \sum_{p=0}^{Q} \sum_{i=0}^{P} u_i (\partial u_{k+2p-i}/\partial x)\, K_{i,k+2p-i,p} in WM. Assuming that we solve equations (4.37) and (4.38) with the same time-stepping scheme and the same spatial discretization, for each time step let us also assume that the computational complexity of computing one term like u_i\, \partial u_j/\partial x is α, while the complexity of the remaining linear terms is 1.

Under this assumption, in equation (4.37) for gPC, we have (P+1) equations in the system, each with complexity 1 + (P+1)²α; therefore the total complexity is (P+1)[1 + (P+1)²α].

In equation (4.38) for WM, we still have (P+1) equations in the system. Denoting the number of terms like u_i\, \partial u_j/\partial x in the whole WM propagator by C(P,Q), the total complexity is (P+1) + C(P,Q)α, and we compute C(P,Q) numerically.

We demonstrate how to count C(P,Q) for P = 4, Q = 1/2 in figure 4.10: there are five grids of (P+1)×(P+1) dots, for k = 0, 1, 2, 3, 4, one for the terms u_i\, \partial u_j/\partial x in each of the five equations of the WM propagator. The horizontal axis carries the index i of u_i and the vertical axis carries the index j of \partial u_j/\partial x. We mark a term u_i\, \partial u_j/\partial x appearing in the k-th equation of the WM propagator by drawing a circle at the (i,j) dot on the k-th grid. In this way we can visualize the nonlinear terms in the propagator and hence the main computational complexity. In the WM method with P = 4, Q = 1/2, only the circled dots are considered in the propagator, whereas in the gPC method with P = 4, all the dots on the five grids are considered. Hence, we can see how many fewer terms like u_i\, \partial u_j/\partial x need to be considered in WM compared to gPC.
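The count C(P,Q) can also be reproduced by directly enumerating the admissible index combinations in (4.38), i.e. 0 ≤ i ≤ P and 0 ≤ k+2p-i ≤ P for p = 0, 1/2, ..., Q and k = 0, ..., P. The short Python sketch below (an illustration added here, not thesis code) does this and checks it against the closed form stated in (4.39) and the per-dimension counts behind table 4.2:

```python
def C(P, Q2):
    """Number of u_i * du_j/dx terms in the WM propagator (4.38) for the
    Burgers equation with one RV; Q2 = 2Q, so the half-integer WM order Q
    is carried as an integer."""
    count = 0
    for k in range(P + 1):           # one equation (one grid in figure 4.10) per k
        for p2 in range(Q2 + 1):     # p = 0, 1/2, ..., Q
            for i in range(P + 1):   # index of u_i
                if 0 <= k + p2 - i <= P:   # index of du_{k+2p-i}/dx is admissible
                    count += 1
    return count

# the count matches the closed form of (4.39) for Q = P - 1 and P >= 2
for P in range(2, 8):
    assert C(P, 2 * (P - 1)) == (P + 1)**3 - 4 - P * (P + 1) * (P + 2) // 6

# the counts behind table 4.2: C(3,2) = 50, C(4,3) = 101, C(5,4) = 177
for d in (2, 3, 50):
    for P in (3, 4, 5):
        ratio = (C(P, 2 * (P - 1)) / (P + 1)**3) ** d
        print(f"d = {d}, P = {P}, Q = {P - 1}: {100 * ratio:.3g}%")
```

The same enumeration with P = 6, Q = 3 yields the 231 terms quoted in section 4.4.1 (versus 343 for gPC).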

When P is sufficiently large, the ratio of the complexity of WM to that of gPC is approximately C(P,Q)/(P+1)³, ignoring lower-order terms in P. Since we observed in figures 4.5 and 4.6 that when Q = P-1 the errors computed from the WM propagators are at the same accuracy level as those from the gPC propagators, we calculate the ratio of complexity between WM and gPC for the Burgers equation with one RV, C(P, Q = P-1)/(P+1)³, for Q = P-1 (so that WM and gPC have the same level of accuracy) and P ≥ 2 as

\frac{C(P, Q = P-1)}{(P+1)^3} = 1 - \frac{4 + \frac{1}{6}\,P(P+1)(P+2)}{(P+1)^3}. \quad (4.39)

Figure 4.10: Terms in \sum_{p=0}^{Q} \sum_{i=0}^{P} u_i (\partial u_{k+2p-i}/\partial x)\, K_{i,k+2p-i,p} for each PDE in the WM propagator for the Burgers equation with one RV in equation (4.38), denoted by dots on the grids: here P = 4, Q = 1/2, k = 0, 1, 2, 3, 4. Each grid represents a PDE in the WM propagator, labeled by k. Each dot represents a term in the sum \sum_{p=0}^{Q} \sum_{i=0}^{P} u_i (\partial u_{k+2p-i}/\partial x)\, K_{i,k+2p-i,p}. The small index next to a dot is the value of p; the x direction is the index i of u_i, and the y direction is the index k+2p-i of \partial u_{k+2p-i}/\partial x. Dots on the same diagonal line have the same index p.


4.6.2 Burgers equation with d RVs

Now suppose we are going to solve the Burgers equation with d RVs (each RV ξ_i has polynomial expansion order P_i and WM order Q_i):

u_t + u u_x = \nu u_{xx} + \sigma\, c_{m_1}(\xi_1) \cdots c_{m_d}(\xi_d), \quad x \in [-\pi, \pi]. \quad (4.40)

With gPC, we will have \prod_{i=1}^{d}(P_i+1) equations in the propagator; if the P_i are all equal to P, there will be (P+1)^d equations in the gPC propagator. We will have (\prod_{i=1}^{d}(P_i+1))(\prod_{i=1}^{d}(P_i+1)^2) terms like u_k\, \partial u_j/\partial x. When all the RVs have the same P, this number is (P+1)^{3d}.

With WM, we still have the same number of equations in the propagator system, but the number of terms like u_k\, \partial u_j/\partial x is \prod_{i=1}^{d} C(P_i, Q_i). If all the RVs have the same P and Q, this number is (C(P,Q))^d.

When P is sufficiently large (for simplicity we assume P_i = P, Q_i = Q for all i = 1, 2, ..., d), the ratio of the complexity of WM to that of gPC is approximately C(P,Q)^d/(P+1)^{3d}, ignoring lower-order terms in P. We computed this ratio of complexity in figure 4.11 for d = 2, 3.

Besides figure 4.11, we also want to point out the following observation. From figure 4.5 we observed numerically that when Q ≥ P-1, the error from the WM method with polynomial order P is of the same order as the error from gPC with polynomial order P. So let us consider the computational cost ratio C(P,Q)^d/(P+1)^{3d} between the two methods, for WM with order Q = P-1 and gPC with order P, in table 4.2.

We conclude from figure 4.11 and table 4.2 that: 1) the larger the P, the bigger


Figure 4.11: The total number of terms of the form u_{m_1 \ldots m_d}\, \partial_x u_{k_1+2p_1-m_1,\ldots,k_d+2p_d-m_d}\, K_{m_1,k_1+2p_1-m_1,p_1} \cdots K_{m_d,k_d+2p_d-m_d,p_d} in the WM propagator for the Burgers equation with d RVs, i.e. C(P,Q)^d: for dimensions d = 2 (left) and d = 3 (right). Here we assume P_1 = ... = P_d = P and Q_1 = ... = Q_d = Q.

Table 4.2: Computational complexity ratio for evaluating the u\, \partial u/\partial x term in the Burgers equation with d RVs between WM and gPC, i.e. C(P,Q)^d/(P+1)^{3d}: here we take the WM order Q = P-1 and gPC of order P, in dimensions d = 2, 3, and 50.

C(P,Q)^d/(P+1)^{3d} | P = 3, Q = 2               | P = 4, Q = 3              | P = 5, Q = 4
d = 2               | 2500/4^6 ≈ 61.0%           | 10201/5^6 ≈ 65.3%         | 31329/6^6 ≈ 67.2%
d = 3               | 125000/4^9 ≈ 47.7%         | 1030301/5^9 ≈ 52.8%       | 5545233/6^9 ≈ 55.0%
d = 50              | 8.89e+84/4^150 ≈ 0.000436% | 1.64e+100/5^150 ≈ 0.0023% | 2.5042e+112/6^150 ≈ 0.0047%


the cost ratio C(P,Q)^d/(P+1)^{3d} of WM to gPC; 2) for the same orders P and Q, the higher the dimension, the lower this ratio. In other words, the higher the dimension, the less WM costs relative to gPC for the same accuracy.
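As a quick sanity check, the percentages in table 4.2 follow directly from the term counts. The specific values C(3,2) = 50, C(4,3) = 101, and C(5,4) = 177 used below are inferred from the table entries themselves (an assumption, since the closed form of C(P,Q) is not restated here):

```python
# Reproduces the WM-to-gPC cost ratios of Table 4.2 from the term counts
# C(3,2)=50, C(4,3)=101, C(5,4)=177 (values inferred from the table entries).
def wm_to_gpc_ratio(C, P, d):
    """Cost ratio C(P,Q)^d / (P+1)^(3d) between WM and gPC."""
    return C ** d / float((P + 1) ** (3 * d))

for P, C in [(3, 50), (4, 101), (5, 177)]:
    for d in (2, 3, 50):
        print(f"P={P}, d={d}: ratio = {wm_to_gpc_ratio(C, P, d):.3e}")
```

For d = 2, P = 3 this gives 2500/4096 ≈ 61.0%, matching the table; the d = 50 column shows how quickly the ratio collapses in high dimensions.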

4.7 Conclusions

We presented a new Wick-Malliavin expansion to approximate polynomial nonlinear terms in SPDEs with random inputs of arbitrary discrete measure with finite moments, on which orthogonal polynomials can be constructed numerically [141, 50, 59, 24, 139]. Specifically, we derived WM propagators for a stochastic reaction equation and a Burgers equation in equations (4.27) and (4.32) with multiple discrete RVs. The error was reduced by at least two to eight orders of magnitude as the WM order Q was increased, as shown in figures 4.1 and 4.4. Linear and nonlinear SPDEs with multiple RVs were considered in figures 4.2 and 4.7 as a first step towards applying the WM method to nonlinear SPDEs driven by stochastic processes, such as Levy processes with jumps. By computing the Burgers equation with one Poisson RV (figure 4.5), we found that the smallest WM order Q achieving the same order of error as gPC with polynomial order P, or as PCM with (P + 1) collocation points, is Q = P − 1. When Q was larger than (P − 1), the error remained almost constant, as in figure 4.6. We proposed an adaptive WM method in section 3.5, increasing the gPC order P and the WM order Q, as a possible way to control the error growth of gPC in long-term integration, shown in figures 4.8 and 4.9. With Q = P − 1 we estimated and compared the computational complexity of the WM method and gPC for a stochastic Burgers equation with d RVs in section 3.5. The WM method required much lower computational complexity than gPC, especially


in higher dimensions, as in table 4.2. However, WM is still more expensive than PCM or sparse PCM.


Chapter Five

Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes


We develop new probabilistic and deterministic approaches for moment statistics of stochastic partial differential equations (SPDEs) driven by pure jump tempered α-stable (TαS) Levy processes. Using the compound Poisson (CP) approximation or the series representation of the TαS process, we simulate the moment statistics of stochastic reaction-diffusion equations with additive TαS white noises by the probability collocation method (PCM) and the Monte Carlo (MC) method. PCM is shown to be more efficient and accurate than MC in relatively low dimensions. As an alternative approach, we solve the generalized Fokker-Planck (FP) equation that describes the evolution of the density for stochastic overdamped Langevin equations, obtaining the density and the moment statistics of the solution in two different ways: first, we solve an integral equation for the density by approximating the TαS process as a CP process; second, we directly solve the tempered fractional PDE (TFPDE). We show that the numerical solution of the TFPDE achieves higher accuracy than PCM at a lower cost, and we also demonstrate agreement between the histogram from MC and the density from the TFPDE.

5.1 Literature review of Levy flights

The Kolmogorov scaling law of turbulence [96] models turbulence as a stochastic Gaussian process at small scales [160]. However, experimental data show that dissipation quantities become increasingly non-Gaussian as the scale decreases and the Reynolds number increases [17]. At finite Reynolds numbers, non-Gaussianity was observed in velocity profiles [178], pressure profiles [175], and acceleration profiles [147]. Experimentally, Levy flights from one vortex to another, sticking events on one vortex, and power-law growth in time of the variance of displacement were observed for a tracer particle in a time-periodic laminar flow [166]. The complementary sticking events and Levy flights are known as intermittency [146]. Theoretically,

by assuming a uniform distribution of vortices in R³, the velocity profile of fractal turbulence [114] is shown to follow a stable distribution with characteristic exponent D/2 [170], where D is the fractal dimension of the turbulence [115]. However, Levy flights are not directly applicable to real dynamical processes of turbulence [161]. One must account for the time spent completing jumps from one vortex to another, as in the Levy walk model [159, 160]. Richardson's 4/3 law of turbulence can be derived from the Levy walk model together with a memory function based on Kolmogorov scaling [160], whereas its derivation from the Levy flight model has been unsatisfactory [124]. Levy flights are related to the symmetry of the dynamical system in phase space [161]: arbitrarily weak perturbations (such as non-uniformity in temperature, Ekman pumping, and finite-size particle effects) of quasisymmetric steady flows destroy the separatrix grids and generate stochastic webs of finite thickness (for streamlines and velocity fields [191]), where the streamlines travel randomly across the cross-sections of the webs from one stable region (island) to another in pre-turbulent states [19]. The first indirect experimental evidence of Levy flights/walks comes from the self-similarity (of stable-law type) in the concentration profile of a linear array of vortices, in the subdiffusive regime where sticking dominates, both around the vortices and in the boundary layers [28], in agreement with theory [145]. Direct experimental evidence of Levy flights/walks and superdiffusion, where the Levy flights dominate, was obtained for a large number of tracers in a two-dimensional flow [166]: in pre-turbulent states, the more random the flow, the more frequently and randomly a tracer switches between sticking events and Levy flights; in turbulence, tracers wander so erratically that no flights can be defined [165].


5.2 Notation

Lt, ηt : Levy processes
(c, λ, α) : coefficients in tempered α-stable (TαS) distributions
N(t, U) : Poisson random measure
I : indicator function
E : expectation
Γ : gamma function
ν : Levy measure
Ñ : compensated Poisson random measure
Qcp : number of truncations in the compound Poisson (CP) approximation
Qs : number of truncations in the series representation
F : cumulative distribution function
f : probability density function
s : number of samples in Monte Carlo (MC) simulation
γinc : incomplete gamma function
d : number of quadrature points per RV in the probability collocation method (PCM)
Sk : characteristic function of a Levy process
−∞D^α_x : left Riemann-Liouville fractional derivative
xD^α_{+∞} : right Riemann-Liouville fractional derivative
−∞D^{α,λ}_x : left Riemann-Liouville tempered fractional derivative
xD^{α,λ}_{+∞} : right Riemann-Liouville tempered fractional derivative

5.3 Stochastic models driven by tempered stable white noises

We develop and compare different numerical methods to solve two stochastic models with tempered α-stable (TαS) Levy white noises, a reaction-diffusion equation and an overdamped Langevin equation, including stochastic simulation methods such as MC [35, 142] and PCM [9, 184]. We also simulate the density of the overdamped Langevin equation through its generalized FP equation, formulated as a TFPDE.


We first solve the following stochastic reaction-diffusion model, in the Ito sense, via stochastic simulation methods (MC and PCM):

  du(t, x; ω) = (∂²u/∂x² + µu) dt + ε dL_t(ω),  x ∈ [0, 2],
  u(t, 0) = u(t, 2)  (periodic boundary condition),
  u(0, x) = u_0(x) = sin(πx/2)  (initial condition),      (5.1)

where L_t(ω) is a one-dimensional TαS process (also known in finance as the CGMY process) [29, 30].

The second model is a one-dimensional stochastic overdamped Langevin equation in the Ito sense [38, 80]:

  dx(t; ω) = −σx(t; ω) dt + dL_t(ω),  x(0) = x_0,      (5.2)

where L_t(ω) is also a one-dimensional TαS process. This equation describes an overdamped particle in an external potential driven by additive TαS white noise. It was introduced in [103] to describe stochastic dynamics in fluctuating environments with Gaussian white noise, with applications in classical mechanics [66], biology [77], and finance [35]. When L_t(ω) is a Levy process, the solution is a Markov process and its probability density satisfies a closed equation such as the differential Chapman-Kolmogorov equation [58] or the generalized FP equation [151]. When L_t(ω) is a TαS Levy process, the corresponding generalized FP equation is a TFPDE [38].


5.4 Background of TαS processes

TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Levy flight model [97, 118, 135], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [29, 30]. Here, we consider a symmetric TαS process (L_t) as a pure jump Levy martingale with characteristic triplet (0, ν, 0) [21, 157] (no drift and no Gaussian part). The Levy measure is given by [35]¹:

  ν(x) = c e^{−λ|x|} / |x|^{α+1},  0 < α < 2.      (5.3)

This Levy measure can be interpreted as an Esscher transformation [62] of that of a stable process, with exponential tilting of the Levy measure. The parameter c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the process. The parameter λ > 0 fixes the decay rate of large jumps, while α determines the relative importance of small jumps in the path of the process². The probability density of L_t at a given time is not available in closed form (except when α = 1/2³).

The characteristic exponent of L_t is [35]:

  Φ(s) = t^{−1} log E[e^{isL_t}] = 2Γ(−α)λ^α c [(1 − is/λ)^α − 1 + isα/λ],  α ≠ 1,      (5.4)

where Γ(x) is the Gamma function and E is the expectation. By taking derivatives of the characteristic exponent we obtain the mean and variance:

  E[L_t] = 0,  Var[L_t] = 2tΓ(2 − α)cλ^{α−2}.      (5.5)

¹In a more general form, the Levy measure is ν(x) = c₋ e^{−λ₋|x|}/|x|^{α+1} I_{x<0} + c₊ e^{−λ₊|x|}/|x|^{α+1} I_{x>0}, with possibly different coefficients c₊, c₋, λ₊, λ₋ for the positive and the negative jump parts.
²In the case α = 0, L_t is the gamma process.
³See inverse Gaussian processes.


In order to derive the second moments of the exact solutions of Equations (5.1) and (5.2), we introduce the Ito isometry. The jump of L_t is defined by ΔL_t = L_t − L_{t−}. We define the Poisson random measure N(t, U) as [78, 133, 137]:

  N(t, U) = Σ_{0≤s≤t} I_{ΔL_s ∈ U},  U ∈ B(R₀), U ⊂ R₀.      (5.6)

Here R₀ = R\{0}, and B(R₀) is the σ-algebra generated by the family of all Borel subsets U ⊂ R such that U ⊂ R₀; I_A is an indicator function. The Poisson random measure N(t, U) counts the number of jumps of size ΔL_s ∈ U up to time t. To introduce the Ito isometry, we define the compensated Poisson random measure Ñ [78] as:

  Ñ(dt, dz) = N(dt, dz) − ν(dz)dt = N(dt, dz) − E[N(dt, dz)].      (5.7)

The TαS process L_t (as a martingale) can also be written as:

  L_t = ∫_0^t ∫_{R₀} z Ñ(dτ, dz).      (5.8)

For any t, let F_t be the σ-algebra generated by (L_t, N(ds, dz)), z ∈ R₀, s ≤ t. We define the filtration F = {F_t, t ≥ 0}. If a stochastic process θ_t(z), t ≥ 0, z ∈ R₀, is F_t-adapted, we have the following Ito isometry [133]:

  E[(∫_0^T ∫_{R₀} θ_t(z) Ñ(dt, dz))²] = E[∫_0^T ∫_{R₀} θ_t²(z) ν(dz) dt].      (5.9)
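The isometry can be illustrated numerically for the simplest adapted integrand θ_t(z) = z and a compound Poisson process with a finite Levy measure (a sketch with assumed parameters: jump rate 3 and standard normal jump sizes, so the right-hand side is T · rate · E[z²] = 3):

```python
import numpy as np

# MC check of the Ito isometry with theta_t(z) = z: for a compound Poisson
# process with jump rate 3 and N(0, 1) jump sizes (mean-zero jumps, so the
# compensator drift vanishes), Var[L_T] should equal T * rate * E[z^2] = 3.
rng = np.random.default_rng(2)
rate, T, n_paths = 3.0, 1.0, 100_000
n_jumps = rng.poisson(rate * T, size=n_paths)           # jump counts per path
L_T = np.array([rng.normal(size=m).sum() for m in n_jumps])
print(L_T.var())  # close to 3.0
```

This is only a consistency check for the finite-activity case; for the TαS measure (5.3) the right-hand side is the integral appearing in Equation (5.5).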

Equations (5.1) and (5.2) are understood in the Ito sense. The solutions are stochastic Ito integrals over the TαS process L_t [149], such as ∫_0^T f(t) dL_t, with the Levy measure given in Equation (5.3). Thus, by applying Equation (5.8), the second moment can be derived using the Levy measure:

  E[(∫_0^T f(t) dL_t)²] = E[(∫_0^T ∫_{R₀} f(t) z Ñ(dt, dz))²] = E[∫_0^T ∫_{R₀} f²(t) z² ν(dz) dt].      (5.10)

Both Equations (5.1) and (5.2) contain an additive white noise, the formal derivative of a TαS process L_t. Details of white noise theory for Levy processes, with applications to SPDEs and finance, can be found in [20, 134, 108, 109, 138]. The white noise of a Poisson random measure takes values in a certain distribution space. It is constructed via a chaos expansion for Levy processes with kernels of polynomial type [134], and is defined as a chaos expansion in terms of iterated integrals with respect to the compensated Poisson measure Ñ(dt, dz) [82].

For simulations of TαS Levy processes, the distribution of increments is not known explicitly [35], but we may still simulate the trajectories of TαS processes by the random walk approximation [11]. However, the random walk approximation does not identify the times and sizes of the large jumps precisely [153, 154, 155, 156]. In the heavy-tailed case, large jumps contribute more than small jumps to functionals of a Levy process. Therefore, we have mainly used two other ways to simulate the trajectories of a TαS process numerically: the CP approximation [35] and the series representation [154]. In the CP approximation, we replace the jumps smaller than a certain size δ by their expectation, and treat the remaining process of larger jumps as a CP process [35]. There are six different series representations of Levy jump processes: the inverse Levy measure method [49, 94], LePage's method [104], Bondesson's method [25], the thinning method [154], the rejection method [153], and the shot noise method [154, 155]. In this work, for TαS processes, we use the shot noise representation of L_t as the series representation method, because the tail of the Levy measure of a TαS process does not have an explicit inverse [156]. Both the CP and the series approximation converge slowly when the jumps of the Levy process are highly


concentrated around zero; however, both can be improved by replacing the small jumps by a Brownian motion [7]. The α-stable distribution was introduced to model the empirical distribution of asset prices [116], replacing the normal distribution. However, the empirical distribution of asset prices is not always stable or normal: its tail is heavier than that of a normal distribution and thinner than that of a stable distribution [22]. Therefore, the TαS process was introduced, as the CGMY model, to modify the Black-Scholes model.

In the past literature, the simulation of SDEs or functionals of TαS processes was mainly done via MC [142]. MC for functionals of TαS processes is possible after a change of measure that transforms TαS processes into stable processes [144].

5.5 Numerical simulation of 1D TαS processes

In general there are three ways to generate a Levy process [154]: the random walk approximation, the series representation, and the CP approximation. For a TαS process, the distribution of increments is not explicitly known (except for α = 1/2) [35]. Therefore, in the sequel we discuss the CP approximation and the series representation for a TαS process.

5.5.1 Simulation of 1D TαS processes by CP approximation

In the CP approximation, we simulate the jumps larger than δ as a CP process and replace the jumps smaller than δ by their expectation, as a drift term [35]. Here we explain the method for a TαS subordinator X_t (without a Gaussian part or a drift) with Levy measure ν(x) = c e^{−λx}/x^{α+1} I_{x>0} (positive jumps only); the method generalizes to a TαS process with both positive and negative jumps.

The CP approximation X_t^δ of this TαS subordinator X_t is:

  X_t ≈ X_t^δ = Σ_{s≤t} ΔX_s I_{ΔX_s≥δ} + E[Σ_{s≤t} ΔX_s I_{ΔX_s<δ}] = Σ_{i=1}^{∞} J_i^δ I_{T_i≤t} + b^δ t ≈ Σ_{i=1}^{Qcp} J_i^δ I_{T_i≤t} + b^δ t.      (5.11)

Here Qcp is the number of jumps occurring before time t. The first term, Σ_{i=1}^{∞} J_i^δ I_{T_i≤t}, is a compound Poisson process with jump intensity

  U(δ) = c ∫_δ^∞ e^{−λx} x^{−(α+1)} dx      (5.12)

and jump-size distribution p_δ(x) = (1/U(δ)) c e^{−λx} x^{−(α+1)} I_{x≥δ} for J_i^δ. The jump-size random variables (RVs) J_i^δ are generated via the rejection method [41]. The density p_δ(x) can be bounded by

  p_δ(x) ≤ (c δ^{−α} e^{−λδ} / (α U(δ))) f_δ(x),      (5.13)

where f_δ(x) = α δ^α x^{−(α+1)} I_{x≥δ} is a Pareto density that can be sampled by inversion. The rejection algorithm is [35, 41]:

REPEAT
  Generate independent RVs W and V, uniformly distributed on [0, 1]
  Set X = δ W^{−1/α}
  Set T = (c δ^{−α} e^{−λδ} / (α U(δ))) f_δ(X)/p_δ(X)  (which simplifies to e^{λ(X−δ)})
UNTIL V T ≤ 1
RETURN X.
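A minimal sketch of this rejection step (assuming the simplified acceptance test V ≤ e^{−λ(X−δ)}, which is what V T ≤ 1 reduces to for this p_δ and f_δ):

```python
import numpy as np

# Rejection sampling of the CP jump-size density
# p_delta(x) ~ exp(-lam*x) / x^(alpha+1) on [delta, inf):
# propose X = delta * W**(-1/alpha) from the Pareto density f_delta,
# accept when V*T <= 1, i.e. when V <= exp(-lam*(X - delta)).
def sample_jump_size(alpha, lam, delta, rng):
    while True:
        w, v = rng.uniform(size=2)
        x = delta * w ** (-1.0 / alpha)
        if v <= np.exp(-lam * (x - delta)):
            return x

rng = np.random.default_rng(0)
jumps = np.array([sample_jump_size(0.5, 3.0, 0.1, rng) for _ in range(2000)])
```

All returned samples lie in [δ, ∞), and their empirical mean matches the mean of p_δ obtained by numerical integration.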

Here, T_i is the i-th jump arrival time of a Poisson process with intensity U(δ). The accuracy of the CP approximation can be improved by replacing the smaller jumps by a Brownian motion [7] when the Levy measure grows fast near zero. The second term of (5.11) is a drift term, b^δ t, resulting from truncating the smaller jumps, with

  b^δ = c ∫_0^δ e^{−λx} x^{−α} dx.

This integral diverges when α ≥ 1; therefore the CP approximation only applies to TαS processes with 0 < α < 1. In this work, both the intensity U(δ) and the drift b^δ are calculated via numerical integration with Gauss-quadrature rules [59] to a specified relative tolerance (RelTol)⁴. In general, there are two algorithms to simulate a compound

Poisson process [35]: the first is to simulate the jump times T_i by exponentially distributed RVs and take the number of jumps Qcp as large as possible; the second is to first generate and fix the number of jumps, and then generate the jump times as uniformly distributed RVs on [0, t]. Algorithms for simulating a CP process (of the second kind) with explicit intensity and jump-size distribution are known on a fixed time grid [35]. Here we describe how to simulate the trajectories of a CP process with intensity U(δ) and jump-size distribution ν_δ(x)/U(δ) on a simulation time domain [0, T], evaluated at time t. The algorithm to generate sample paths of a CP process is given below.

• Simulate an RV N from the Poisson distribution with parameter U(δ)T, as the total number of jumps on the interval [0, T].

• Simulate N independent RVs, T_i, uniformly distributed on the interval [0, T], as the jump times.

• Simulate N jump sizes, Y_i, with distribution ν_δ(x)/U(δ).

• The trajectory at time t is then given by Σ_{i=1}^{N} I_{T_i≤t} Y_i.
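The four steps above can be sketched as follows (a minimal sketch: the jump-size sampler is supplied by the caller, e.g. the rejection method of this section, and the small-jump drift b^δ t is omitted):

```python
import numpy as np

# Compound-Poisson path simulation on [0, T], evaluated on a time grid:
# draw the total number of jumps N ~ Poisson(U*T), jump times uniform on
# [0, T], jump sizes from a user-supplied sampler, then sum I_{Ti<=t} Yi.
def cp_paths(U, sample_jump, T, t_grid, n_paths, rng):
    paths = np.zeros((n_paths, len(t_grid)))
    for p in range(n_paths):
        n = rng.poisson(U * T)                      # number of jumps on [0, T]
        times = rng.uniform(0.0, T, size=n)         # jump times
        sizes = np.array([sample_jump(rng) for _ in range(n)])
        for k, t in enumerate(t_grid):
            paths[p, k] = sizes[times <= t].sum()
    return paths

# Illustrative example with intensity U = 2 and Exp(1) jump sizes:
rng = np.random.default_rng(1)
paths = cp_paths(2.0, lambda r: r.exponential(1.0), 1.0, [0.5, 1.0], 5000, rng)
```

In this illustrative example E[X_t] = U t E[Y] = 2t, which the sample means reproduce.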

In order to simulate the sample paths of a symmetric TαS process with a Levy

⁴The RelTol of numerical integration is defined as |q − Q|/|Q|, where q is the computed value of the integral and Q is the unknown exact value.


measure given in Equation (5.3), we generate two independent TαS subordinators

via the CP approximation and subtract one from the other. The accuracy of the CP

approximation is determined by the jump truncation size δ.

5.5.2 Simulation of 1D TαS processes by series representation

Let {ε_j}, {η_j}, and {ξ_j} be sequences of i.i.d. RVs such that P(ε_j = ±1) = 1/2, η_j ∼ Exponential(λ), and ξ_j ∼ Uniform(0, 1). Let {Γ_j} be the arrival times of a Poisson process with rate one, and let {U_j} be i.i.d. uniform RVs on [0, T]. Then a TαS process L_t with the Levy measure given in Equation (5.3) can be represented as [156]:

  L_t = Σ_{j=1}^{+∞} ε_j [(αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}] I_{U_j≤t},  0 ≤ t ≤ T.      (5.14)

The series in Equation (5.14) converges almost surely, uniformly in t [153]. In numerical simulations, we truncate the series at Qs terms; the accuracy of the series representation is determined by the truncation level Qs.
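The truncated series (5.14) can be sketched directly (parameters below are illustrative; the Γ_j are generated as cumulative sums of unit-rate exponentials):

```python
import numpy as np

# Truncated shot-noise series (5.14) for a symmetric TalphaS path on [0, T]:
# L_t = sum_j eps_j * min((alpha*Gamma_j/(2cT))^(-1/alpha),
#                         eta_j * xi_j^(1/alpha)) * I(U_j <= t).
def tas_series_path(alpha, c, lam, T, t_grid, Qs, rng):
    eps = rng.choice([-1.0, 1.0], size=Qs)
    Gamma = np.cumsum(rng.exponential(1.0, size=Qs))   # unit-rate arrival times
    eta = rng.exponential(1.0 / lam, size=Qs)          # eta_j ~ Exponential(lam)
    xi = rng.uniform(size=Qs)
    U = rng.uniform(0.0, T, size=Qs)
    amp = eps * np.minimum((alpha * Gamma / (2 * c * T)) ** (-1.0 / alpha),
                           eta * xi ** (1.0 / alpha))
    return np.array([amp[U <= t].sum() for t in t_grid])

# Moment check against (5.5): Var[L_T] = 2 T Gamma(2-alpha) c lam^(alpha-2).
rng = np.random.default_rng(3)
LT = np.array([tas_series_path(0.5, 1.0, 3.0, 0.5, [0.5], 1000, rng)[0]
               for _ in range(3000)])
```

With α = 0.5, c = 1, λ = 3, T = 0.5, the sampled mean is near 0 and the sampled variance is near Γ(1.5)·3^{−1.5} ≈ 0.17, as predicted by Equation (5.5).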

5.5.3 Example: simulation of inverse Gaussian subordinators by CP approximation and series representation

In order to compare the numerical performance of the CP approximation and the series representation of TαS processes, we simulate the trajectories of an inverse Gaussian (IG) subordinator by the two methods. An IG subordinator is a TαS subordinator with one-sided jumps and α = 1/2, i.e., with Levy measure:

  ν_IG(x) = c e^{−λx} x^{−3/2} I_{x>0}.      (5.15)

The probability density function (PDF) at time t of an IG subordinator is known to be [35]:

  p_t(x) = (ct/x^{3/2}) e^{2ct√(πλ)} e^{−λx − πc²t²/x},  x > 0.      (5.16)

We perform the one-sample Kolmogorov-Smirnov (K-S) test [120] between the empirical cumulative distribution function (CDF) and the exact reference CDF:

  KS = sup_x |F_em(x) − F_ex(x)|,  x ∈ supp(F).      (5.17)

This one-sample K-S statistic quantifies the distance between the exact inverse Gaussian process and its approximation (by the CP approximation or the series representation).
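The statistic (5.17) can be computed from sorted samples; a minimal sketch, with the reference CDF passed as a callable (the deviation is checked on both sides of each jump of the empirical CDF):

```python
import numpy as np

# One-sample K-S statistic (5.17): largest deviation between the empirical
# CDF of the samples and a reference CDF, evaluated just before and just
# after each jump of the empirical CDF.
def ks_statistic(samples, cdf):
    x = np.sort(np.asarray(samples))
    n = len(x)
    F = cdf(x)
    upper = np.arange(1, n + 1) / n    # ECDF just after each sample point
    lower = np.arange(0, n) / n        # ECDF just before each sample point
    return max(np.abs(upper - F).max(), np.abs(F - lower).max())

# Uniform samples against the uniform CDF give a small statistic:
rng = np.random.default_rng(4)
ks = ks_statistic(rng.uniform(size=10000), lambda x: x)
```

For the IG comparison, `cdf` would be the reference CDF obtained by numerically integrating the density (5.16).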

In Figures 5.1 and 5.2, we plot the empirical histograms (normalized to unit area) of an IG subordinator at time t, simulated via the CP approximation with different small-jump truncation sizes δ (Section 5.5.1) and via the series representation with different truncation levels Qs (Section 5.5.2), against the reference PDF in Equation (5.16). We observe that the empirical histograms fit the reference PDF better as δ → 0 in the CP approximation (Figure 5.1) and as Qs increases in the series representation. The quality of the fit is quantified by the K-S test values given in each plot.

In both Figures 5.1 and 5.2, we use one million samples and 1000 bins for each histogram (the square-root choice [174]). We zoom in and plot the histograms on [0, 1.8] to examine how the smaller jumps are captured. We observe


Figure 5.1: Empirical histograms of an IG subordinator (α = 1/2) simulated via the CP approximation at t = 0.5. The IG subordinator has c = 1, λ = 3; each simulation contains s = 10⁶ samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps). The jump truncation sizes are δ = 0.1 (left, dotted, CPU time 1450 s, KS = 0.152843), δ = 0.02 (middle, dotted, CPU time 5710 s, KS = 0.009250), and δ = 0.005 (right, dotted, CPU time 38531 s, KS = 0.003414). The reference PDFs are plotted in red solid lines; the one-sample K-S test value is given for each plot; the RelTol of the integrations in U(δ) and b^δ is 1 × 10⁻⁸. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab.

Figure 5.2: Empirical histograms of an IG subordinator (α = 1/2) simulated via the series representation at t = 0.5. The IG subordinator has c = 1, λ = 3; each simulation is done on the time domain [0, 0.5] and contains s = 10⁶ samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps). The truncation levels of the series are Qs = 10 (left, dotted, CPU time 129 s, KS = 0.360572), Qs = 100 (middle, dotted, CPU time 338 s, KS = 0.078583), and Qs = 1000 (right, dotted, CPU time 2574 s, KS = 0.040574). The reference PDFs are plotted in red solid lines; the one-sample K-S test value is given for each plot. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab.


that in both Figures 5.1 and 5.2, when δ is large or Qs is small, the distribution of small jumps is not well approximated. Therefore, both methods lose accuracy when the smaller jumps contribute significantly to the solution of the SPDE, e.g., when α or λ is large. Furthermore, when δ is large in the CP approximation (see δ = 0.1 in Figure 5.1), the big jumps are well approximated even though the small ones are not; when Qs is small in the series representation, neither big nor small jumps are well approximated (see Qs = 10 in Figure 5.2). At a limited computational budget, this is an advantage of the CP approximation when big jumps contribute more to the solution of the SPDE.

5.6 Simulation of a stochastic reaction-diffusion model driven by TαS white noises

In this section, we provide numerical results for the stochastic reaction-diffusion Equation (5.1). We perform and compare four stochastic simulation methods to obtain the statistics: MC with the CP approximation (MC/CP), MC with the series representation (MC/S), PCM with the CP approximation (PCM/CP), and PCM with the series representation (PCM/S).

The integral form of Equation (5.1) is:

  u(t, x) = e^{µt − π²t/4} sin(πx/2) + ε e^{µt} ∫_0^t e^{−µτ} dL_τ,  x ∈ [0, 2],      (5.18)

where the stochastic integral is an Ito integral over a TαS process [149]. The mean of the solution is

  E_ex[u(t, x)] = e^{µt − π²t/4} sin(πx/2).      (5.19)


By the Ito isometry [133] and Equation (5.19), the second moment of the solution is

  E_ex[u²(t, x; ω)] = e^{2µt − π²t/2} sin²(πx/2) + (c ε² e^{2µt} / (µ λ^{2−α})) (1 − e^{−2µt}) Γ(2 − α).      (5.20)

We define the L² norm of the error in the second moment, l²_{u²}(t), as

  l²_{u²}(t) = ||E_ex[u²(x, t; ω)] − E_num[u²(x, t; ω)]||_{L²([0,2])} / ||E_ex[u²(x, t; ω)]||_{L²([0,2])},      (5.21)

where E_num[u²(x, t; ω)] is the second moment evaluated by numerical simulation.

5.6.1 Comparing the CP approximation and the series representation in MC

First we compare the accuracy and convergence rate of MC/CP and MC/S in solving (5.1) by MC. In MC, we generate the trajectories of L_t (a TαS process with the Levy measure given in Equation (5.3)) on a fixed time grid (t_0 = 0, t_1, t_2, ..., t_{st} = T) with st time steps. We solve Equation (5.1) via the first-order Euler method [142] in the time direction with time step Δt = t_{n+1} − t_n:

  u^{n+1} − u^n = (∂²u^n/∂x² + µ u^n) Δt + ε (L_{t_{n+1}} − L_{t_n}).      (5.22)

We discretize the space by Nx = 500 Fourier collocation points [73] on the domain [0, 2].
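The stepper (5.22) with Fourier collocation can be sketched as follows. For a deterministic check we use ε = 0 and the periodic test mode sin(πx), an assumption chosen so that Fourier differentiation on [0, 2] is exact for the initial condition (the thesis itself uses u₀ = sin(πx/2) and Nx = 500); the scheme must then reproduce e^{(µ−π²)t} sin(πx):

```python
import numpy as np

# First-order Euler in time (5.22) with Fourier collocation in space on the
# periodic domain [0, 2]. dL(n) supplies the noise increment of step n
# (zero below, which reduces the SPDE to its deterministic part).
def euler_fourier(u0, dL, mu=2.0, eps=0.1, T=0.1, Nx=64, dt=1e-4):
    x = 2.0 * np.arange(Nx) / Nx                    # collocation points
    k = 2 * np.pi * np.fft.fftfreq(Nx, d=2.0 / Nx)  # wavenumbers
    u = u0(x)
    for n in range(int(round(T / dt))):
        uxx = np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real
        u = u + (uxx + mu * u) * dt + eps * dL(n)
    return x, u

x, u = euler_fourier(lambda x: np.sin(np.pi * x), lambda n: 0.0, eps=0.0)
u_exact = np.exp((2.0 - np.pi ** 2) * 0.1) * np.sin(np.pi * x)
```

In the stochastic runs, `dL(n)` would return the increment L_{t_{n+1}} − L_{t_n} sampled from the CP approximation or the series representation.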

In Table 5.1, we list the l²_{u²} errors at a fixed time T versus the sample size s for MC/CP and MC/S, for λ = 10 (upper) and for λ = 1 (lower, with a less tempered tail). First, regarding cost, MC/CP takes less CPU time than MC/S: e.g., for λ = 10 in Table 5.1, MC/S with Qs = 10 and s = 65536 takes twice the CPU time of MC/CP with δ = 0.01 and s = 65536. Second, regarding accuracy, MC/CP is more accurate than MC/S even though it takes about half the CPU time, e.g., MC/CP with δ = 0.01 versus MC/S with Qs = 10. Third, we observe that decreasing δ in MC/CP to improve the accuracy is effective at smaller sample sizes s when more small jumps are present (larger λ). For example: when λ = 10, δ = 0.01 starts to be more accurate than δ = 0.1 at s = 1024; when λ = 1, δ = 0.01 starts to be more accurate than δ = 0.1 at s = 65536. This can be explained by the fact that large jumps contribute more to the solution, and decreasing δ in MC/CP makes a large difference in sampling the smaller jumps, as in Figure 5.1.

Table 5.1: MC/CP vs. MC/S: error l²_{u²}(T) of the solution of Equation (5.1) versus the number of samples s, with λ = 10 (upper) and λ = 1 (lower). T = 1, c = 0.1, α = 0.5, ε = 0.1, µ = 2 (both cases). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme (5.22) with time step Δt = 1 × 10^{−5}. In the CP approximation: RelTol = 1 × 10^{−8} for the integration in U(δ).

s (λ = 10)       256            1024           4096           16384          65536          262144
MC/S Qs = 10     3.9 × 10^{−3}  6.0 × 10^{−4}  1.6 × 10^{−4}  6.8 × 10^{−5}  2.3 × 10^{−5}  3.5 × 10^{−6}
MC/CP δ = 0.1    5.4 × 10^{−4}  6.2 × 10^{−4}  6.3 × 10^{−4}  4.3 × 10^{−4}  4.3 × 10^{−4}  4.5 × 10^{−4}
MC/CP δ = 0.01   3.6 × 10^{−4}  1.8 × 10^{−5}  9.8 × 10^{−5}  1.3 × 10^{−5}  3.5 × 10^{−6}  2.0 × 10^{−5}

s (λ = 1)        256            1024           4096           16384          65536          262144
MC/S Qs = 10     1.7 × 10^{−2}  1.4 × 10^{−2}  6.1 × 10^{−3}  7.6 × 10^{−3}  4.4 × 10^{−3}  6.6 × 10^{−4}
MC/CP δ = 0.1    1.8 × 10^{−3}  4.9 × 10^{−3}  2.4 × 10^{−3}  2.5 × 10^{−3}  5.1 × 10^{−4}  2.7 × 10^{−4}
MC/CP δ = 0.01   8.6 × 10^{−3}  3.8 × 10^{−3}  5.8 × 10^{−3}  2.0 × 10^{−3}  1.1 × 10^{−4}  3.6 × 10^{−5}

5.6.2 Comparing the CP approximation and the series representation in PCM

Next, we compare the accuracy and efficiency of PCM/CP and PCM/S in solving (5.1). In order to evaluate the moments of the solution, PCM [184], as


an integration method in the random space, is based on Gauss-quadrature rules [59]. Suppose the solution is a function v(Y^1, Y^2, ..., Y^n) of a finite number of independent RVs; its m-th moment is evaluated by

  E[v^m(Y^1, Y^2, ..., Y^n)] = Σ_{i1=1}^{d1} ··· Σ_{in=1}^{dn} v^m(y^1_{i1}, y^2_{i2}, ..., y^n_{in}) w^1_{i1} ··· w^n_{in},      (5.23)

where w^j_{ij} and y^j_{ij} are the ij-th Gauss-quadrature weight and collocation point for Y^j, respectively. The simulations are run on ∏_{i=1}^{n} d_i deterministic sample points (y^1_{i1}, ..., y^n_{in}) in the n-dimensional random space. In the CP approximation, the TαS process L_t is approximated via L_t ≈ Σ_{i=1}^{Qcp} J_i^δ I_{T_i≤t} + b^δ t, where Qcp is the number of jumps we consider. As mentioned in Section 5.5.1, there are two ways to simulate a compound Poisson process. Here we treat the number of jumps Qcp as a modeling parameter of the CP approximation and simulate the times between jumps, T_{i+1} − T_i, as exponentially distributed RVs with intensity U(δ). The PCM/CP method thus contains two parameters: the jump truncation size δ and the number of retained jumps Qcp. The PCM/CP simulations of problem (5.1) are therefore run on the collocation points of the RVs J_i^δ and T_i in the 2Qcp-dimensional random space (with d^{2Qcp} sample points). In the series representation, the TαS process L_t is approximated via L_t ≈ Σ_{j=1}^{Qs} ε_j [(αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}] I_{U_j≤t} on the simulation domain [0, T]. To reduce the number of RVs (and therefore the dimension of the random space), we compute the distribution of A_j ∧ B_j, where A_j = (αΓ_j/(2cT))^{−1/α} and B_j = η_j ξ_j^{1/α}, and treat it as a single RV for each j. The distribution of A_j is calculated as follows:

  F_{Aj}(A) = P((αΓ_j/(2cT))^{−1/α} ≤ A) = P(Γ_j ≥ 2cT/(αA^α)) = ∫_{2cT/(αA^α)}^{+∞} (e^{−x} x^{j−1} / Γ(j)) dx,      (5.24)


and the density of A_j is

  f_{Aj}(A) = dF_{Aj}/dA = (2cT / (Γ(j) A^{α+1})) e^{−2cT/(αA^α)} (2cT/(αA^α))^{j−1}.      (5.25)

The density of B_j = η_j ξ_j^{1/α} is derived as a product distribution:

  f_{Bj}(B) = αλ ∫_0^1 x^{α−2} e^{−λB/x} dx = αλ (λB)^{α−1} ∫_{λB}^{∞} t^{−α} e^{−t} dt;      (5.26)

when α ≠ 1, this can be written in terms of incomplete Gamma functions.

Therefore, the density of the minimum A_j ∧ B_j is given by

  f_{Aj∧Bj}(x) = f_{Aj}(x)(1 − F_{Bj}(x)) + f_{Bj}(x)(1 − F_{Aj}(x)).      (5.27)
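Formula (5.27) is the standard density of the minimum of two independent RVs; a quick check with A ∼ Exp(1) and B ∼ Exp(2) (an illustrative assumption), whose minimum is Exp(3):

```python
import numpy as np

# Check of (5.27) with A ~ Exp(1), B ~ Exp(2): the formula gives
# f(x) = e^{-x} e^{-2x} + 2 e^{-2x} e^{-x} = 3 e^{-3x}, the Exp(3) density.
x = np.linspace(0.0, 5.0, 101)
fA, FA = np.exp(-x), 1.0 - np.exp(-x)
fB, FB = 2.0 * np.exp(-2.0 * x), 1.0 - np.exp(-2.0 * x)
f_min = fA * (1.0 - FB) + fB * (1.0 - FA)
```

The same identity applied to f_{Aj}, F_{Aj} of (5.24)-(5.25) and f_{Bj}, F_{Bj} of (5.26) yields (5.28) and (5.29) below.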

When 0 < α < 1,

  f_{Aj∧Bj}(x) = [α/(xΓ(j))] e^{−t} t^j |_{t=2cT/(αx^α)} [αΓ(1 − α)λ^α ∫_x^{+∞} (1 − γinc(1 − α, λz)) z^{α−1} dz]
               + [αΓ(1 − α)λ^α (1 − γinc(1 − α, λx)) x^{α−1}] γinc(j, 2cT/(αx^α)),      (5.28)

where the arguments of γinc follow the convention of Equation (5.30).

When $1 < \alpha < 2$,

f_{A_j \wedge B_j}(x) = \left[ \frac{\alpha}{x\,\Gamma(j)}\, e^{-t} t^{j} \Big|_{t = \frac{2cT}{\alpha x^\alpha}} \right] \left[ \int_x^{+\infty} f_{B_j}(z)\, dz \right] + f_{B_j}(x)\, \gamma_{inc}\left( \frac{2cT}{\alpha x^\alpha}, j \right). \quad (5.29)


Here the (regularized) incomplete Gamma function $\gamma_{inc}(a, b)$ is defined as

\gamma_{inc}(a, b) = \frac{1}{\Gamma(b)} \int_0^{a} e^{-t}\, t^{b-1}\, dt. \quad (5.30)

Therefore, the PCM/S simulations under the series representation are run on the quadrature points for the RVs $\epsilon_j$, $[(\alpha\Gamma_j/(2cT))^{-1/\alpha} \wedge \eta_j\xi_j^{1/\alpha}]$, and $U_j$ in the $3Q_s$-dimensional random space (with $d^{3Q_s}$ sample points). In the sequel, we generate the stochastic collocation points numerically based on the moments [139]. The stochastic collocation points are generated by the Gaussian quadrature rule [65]; alternative methods such as Stieltjes' method and the modified Chebyshev method [60] can also be used. Here, we assume each RV has the same number of collocation points $d$.

However, for this particular problem (5.1) we only need $d(Q_{cp}+1)$ sample points in PCM/CP instead of $d^{2Q_{cp}}$, and only $dQ_s$ sample points in PCM/S instead of $d^{3Q_s}$.

Using the CP approximation given in Equation (5.11), the second moment of the solution in (5.18) can be approximated by

E[u^2(t, x; \omega)] \approx e^{2\mu t - \frac{1}{2}\pi^2 t} \sin^2\left( \frac{\pi}{2} x \right) + \epsilon^2 e^{2\mu t}\, E[(J_1^\delta)^2] \sum_{i=1}^{Q_{cp}} E[e^{-2\mu T_i}]. \quad (5.31)

Using the series representation given in Equation (5.14), the second moment of the solution in (5.18) can be approximated by

E[u^2(t, x; \omega)] \approx e^{2\mu t - \frac{1}{2}\pi^2 t} \sin^2\left( \frac{\pi}{2} x \right) + \epsilon^2 e^{2\mu t}\, \frac{1}{2\mu T}\left( 1 - e^{-2\mu T} \right) \sum_{j=1}^{Q_s} E\left[ \left( \left( \frac{\alpha\Gamma_j}{2cT} \right)^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha} \right)^2 \right]. \quad (5.32)

Here we sample the moments of the solution directly from Equation (5.31) for PCM/CP and Equation (5.32) for PCM/S; therefore, we significantly decrease the sample size by using the integral form of the solution in Equation (5.18). For example,


in this typical problem we may evaluate $E[e^{-2\mu T_i}]$ for each $i$ separately in Equation (5.31). Indeed, such a reduction of the number of samples in the PCM is possible whenever the following condition is met. Suppose we have $Q$ independent RVs $Z_i$, $i = 1, \dots, Q$. If the expectation of a functional of $Z_1, \dots, Z_Q$ is a functional of expectations of some function of each $Z_i$ separately,

E[F(Z_1, \dots, Z_Q)] = G\left( E[f_1(Z_1)], \dots, E[f_Q(Z_Q)] \right), \quad (5.33)

then we may evaluate each $E[f_i(Z_i)]$ separately via the PCM with $d$ collocation points. In this way, we reduce the number of samples from $d^Q$ to $dQ$.
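A minimal sketch of this reduction for the jump-time sum in (5.31), under the assumption that the interarrival times are i.i.d. $\mathrm{Exp}(U)$ so that $E[e^{-2\mu T_i}] = E[e^{-2\mu\tau}]^i$: one $d$-point Gauss-Laguerre rule then replaces a $d^Q$ tensor grid (toy parameter values, not the thesis settings):

```python
import numpy as np

# Dimension-reduced PCM for sum_i E[exp(-2*mu*T_i)] in (5.31), assuming the
# interarrival times tau are i.i.d. Exp(U), so E[e^{-2 mu T_i}] = q**i with
# q = E[e^{-2 mu tau}] = U/(U + 2 mu).
mu, U, Qcp, d = 2.0, 5.0, 10, 12
s, w = np.polynomial.laguerre.laggauss(d)        # Gauss-Laguerre: weight e^{-s} on [0, inf)
q = float(np.dot(w, np.exp(-2.0 * mu * s / U)))  # after substituting s = U*tau
total = sum(q**i for i in range(1, Qcp + 1))     # d quadrature points instead of d**Qcp
print(q, U / (U + 2 * mu))                       # quadrature vs closed form
```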

In Figure 5.3, we plot the $l^2_{u^2}(T)$ errors of the solution for Equation (5.1) versus the number of jumps $Q_{cp}$ (via PCM/CP) or $Q_s$ (via PCM/S). In order to investigate the convergence in $Q_{cp}$ and $Q_s$, we apply a sufficient number of collocation points for each RV so that the integration reaches a prescribed relative tolerance (RelTol). We observe three things in Figure 5.3.

1. For smaller values of Qs and Qcp, PCM/S is more accurate and converges faster

than PCM/CP, because bigger jumps contribute more to the solution and

PCM/S samples bigger jumps more efficiently than PCM/CP as we observed

in Figures 5.1 and 5.2.

2. For intermediate values of Qs and Qcp, the convergence rate of PCM/S slows down while that of PCM/CP speeds up, because the contribution of the smaller jumps starts to affect the accuracy, and PCM/CP captures the smaller jumps faster than PCM/S.

3. For larger values of Qs and Qcp, both PCM/CP and PCM/S stop converging

due to their own limitations to achieve higher accuracy.


Figure 5.3: PCM/CP vs. PCM/S: error $l^2_{u^2}(T)$ of the solution for Equation (5.1) versus the number of jumps $Q_{cp}$ (in PCM/CP) or $Q_s$ (in PCM/S) with $\lambda = 10$ (left) and $\lambda = 1$ (right). $T = 1$, $c = 0.1$, $\alpha = 0.5$, $\epsilon = 0.1$, $\mu = 2$, $N_x = 500$ Fourier collocation points on $[0, 2]$ (left and right). In the PCM/CP: RelTol $= 1 \times 10^{-10}$ for integration in $U(\delta)$, with curves for $b = 1 \times 10^{-1}, \dots, 1 \times 10^{-5}$. In the PCM/S: RelTol $= 1 \times 10^{-8}$ for the integration of $E[((\alpha\Gamma_j/(2cT))^{-1/\alpha} \wedge \eta_j\xi_j^{1/\alpha})^2]$.

The limitations of PCM/CP and PCM/S are:

• in the PCM/CP, when $\delta$ is small, the integration to calculate $U(\delta) = c \int_\delta^{\infty} \frac{e^{-\lambda x}}{x^{\alpha+1}}\, dx$ is less accurate because of the singularity of the integrand at 0;

• in the PCM/S, the density of the RV $[(\alpha\Gamma_j/(2cT))^{-1/\alpha} \wedge \eta_j\xi_j^{1/\alpha}]$ in (5.14) for a larger value of $j$ requires more collocation points ($d$) to accurately approximate the expectation of any functional of this RV.

Within these limitations ($\delta$ not too small, $Q_s$ not too large), the PCM/S achieves higher accuracy than the PCM/CP; however, it costs much more computational time, especially when the T$\alpha$S process $L_t$ contains more small jumps. For example, when $\lambda = 10$ in Figure 5.3, to achieve the same accuracy of $10^{-11}$, the PCM/S with $Q_s = 10$ costs more than 100 times the CPU time of the PCM/CP with $Q_{cp} = 30$ and $\delta = 1 \times 10^{-5}$.


5.6.3 Comparing MC and PCM in CP approximation or series representation

Next we compare the accuracy and efficiency both between MC/CP and PCM/CP

and between MC/S and PCM/S to obtain the statistics of the solution of Equation

(5.1).

In Figure 5.4, we compare MC/CP and PCM/CP with the same $\delta$ (left), and then MC/S and PCM/S (right). In the PCM/CP, we fix $d$ (the number of collocation points for each RV) and vary $Q_{cp}$ to obtain different numbers of sample points $s$; in the PCM/S, we fix $d$ and vary $Q_s$. By equations (5.31) and (5.32) we only need $s = d(Q_{cp}+1)$ instead of $s = d^{2Q_{cp}}$ in the PCM/CP, and $s = dQ_s$ instead of $s = d^{3Q_s}$ in the PCM/S. However, we still plot the error versus $s = d^{2Q_{cp}}$ in the PCM/CP and versus $s = d^{3Q_s}$ in the PCM/S, in order to investigate the PCM in case the dimension of the random space cannot be reduced. With the dimension reduction, PCM/CP and PCM/S drastically outperform the convergence of MC/CP and MC/S; without the dimension reduction, the PCM/S still appears more accurate than the MC/S, although the convergence slope of PCM/CP flattens for larger $s = d^{2Q_{cp}}$. We also observe in the numerical experiments that the error clearly decreases when we increase $Q_s$ or $Q_{cp}$, but less clearly when we increase $d$ from 2 to 3.


Figure 5.4: PCM vs. MC: error $l^2_{u^2}(T)$ of the solution for Equation (5.1) versus the number of samples $s$, obtained by MC/CP and PCM/CP with $\delta = 0.01$ (left) and by MC/S with $Q_s = 10$ and PCM/S (right). $T = 1$, $c = 0.1$, $\alpha = 0.5$, $\lambda = 1$, $\epsilon = 0.1$, $\mu = 2$ (left and right). Spatial discretization: $N_x = 500$ Fourier collocation points on $[0, 2]$ (left and right); temporal discretization: first-order Euler scheme in (5.22) with time step $\Delta t = 1 \times 10^{-5}$ (left and right). In both MC/CP and PCM/CP: RelTol $= 1 \times 10^{-8}$ for integration in $U(\delta)$.

5.7 Simulation of 1D stochastic overdamped Langevin

equation driven by TαS white noises

In this section, we will present two methods to simulate the statistics for Equation

(5.2) by solving the corresponding generalized FP equation. In the first method,

we solve the density by approximating the TαS process Lt by a CP process, while

in the second method, we solve a TFPDE. We will compare these two FP equation

approaches with the previous MC and PCM methods via the empirical histograms

and errors of moments.


5.7.1 Generalized FP equations for overdamped Langevin

equations with TαS white noises

It is known that for any overdamped Langevin equation with a Levy white noise $\eta_t$,

dx(t) = f(x(t), t)\, dt + d\eta_t(\omega), \quad x(0) = x_0, \quad (5.34)

the PDF of the solution, $P(x, t)$, satisfies the following generalized FP equation [38]:

\partial_t P(x, t) = -\frac{\partial}{\partial x} \left[ f(x, t)\, P(x, t) \right] + \mathcal{F}^{-1} \left\{ P_k(t) \ln S_k \right\}. \quad (5.35)

Here $S_k$ is the characteristic function (ch.f.) of the process $\eta_t$ at time $t = 1$: $S_k = E[e^{-ik\eta_1}]$. We define the Fourier transform of a function $v(x)$ as $\mathcal{F}v(x) = v_k = \int_{-\infty}^{+\infty} e^{-ikx} v(x)\, dx$, and $P_k(t)$ is the ch.f. of $x(t)$: $P_k(t) = E[e^{-ikx(t)}]$. The inverse Fourier transform is defined as $\mathcal{F}^{-1} v_k = v = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{ikx} v_k\, dk$.

By the CP approximation with jump truncation size $\delta$ of the T$\alpha$S process $L_t$ in Equation (5.2), the density $P_{cp}(x, t)$ of the solution $x(t)$ satisfies [38]:

\partial_t P_{cp}(x, t) = \left[ \sigma - 2U(\delta) \right] P_{cp}(x, t) + \sigma x\, \frac{\partial P_{cp}(x, t)}{\partial x} + \int_{-\infty}^{+\infty} P_{cp}(x - y, t)\, \frac{c\, e^{-\lambda|y|}}{|y|^{\alpha+1}}\, dy \quad (5.36)

with the initial condition $P_{cp}(x, 0) = \delta(x - x_0)$, where $U(\delta)$ is defined in Equation (5.12).

We also obtain the generalized FP equation as a TFPDE for the density $P_{ts}(x, t)$ directly from Equation (5.35), without approximating $L_t$ by a CP process. Because the ch.f. $S_k$ of $L_1$ takes different forms for $0 < \alpha < 1$ and $1 < \alpha < 2$, the density $P_{ts}(x, t)$ satisfies a different equation in each case.


When $0 < \alpha < 1$, $S_k = \exp\left[ -D\left( (\lambda + ik)^\alpha - \lambda^\alpha \right) \right]$ [35, 122], where $D(\alpha) = \frac{c}{\alpha}\Gamma(1-\alpha)$ and $\Gamma(t) = \int_0^{+\infty} x^{t-1} e^{-x}\, dx$; the density $P_{ts}(x, t)$ satisfies

\partial_t P_{ts}(x, t) = \frac{\partial}{\partial x}\left( \sigma x P_{ts}(x, t) \right) - D(\alpha)\left( {}_{-\infty}D_x^{\alpha,\lambda} P_{ts}(x, t) + {}_{x}D_{+\infty}^{\alpha,\lambda} P_{ts}(x, t) \right), \quad 0 < \alpha < 1, \quad (5.37)

with the initial condition $P_{ts}(x, 0) = \delta(x - x_0)$. The left and right Riemann-Liouville tempered fractional derivatives are defined as [11, 122]:

{}_{-\infty}D_x^{\alpha,\lambda} f(x) = e^{-\lambda x}\, {}_{-\infty}D_x^{\alpha}\left[ e^{\lambda x} f(x) \right] - \lambda^\alpha f(x), \quad 0 < \alpha < 1, \quad (5.38)

and

{}_{x}D_{+\infty}^{\alpha,\lambda} f(x) = e^{\lambda x}\, {}_{x}D_{+\infty}^{\alpha}\left[ e^{-\lambda x} f(x) \right] - \lambda^\alpha f(x), \quad 0 < \alpha < 1. \quad (5.39)

In the above definitions, for $\alpha \in (n-1, n)$ and $f(x)$ $(n-1)$-times continuously differentiable on $(-\infty, +\infty)$, ${}_{-\infty}D_x^{\alpha}$ and ${}_{x}D_{+\infty}^{\alpha}$ are the left and right Riemann-Liouville fractional derivatives, defined as [11]:

{}_{-\infty}D_x^{\alpha} f(x) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dx^n} \int_{-\infty}^{x} \frac{f(\xi)}{(x-\xi)^{\alpha-n+1}}\, d\xi, \quad (5.40)

{}_{x}D_{+\infty}^{\alpha} f(x) = \frac{(-1)^n}{\Gamma(n-\alpha)} \frac{d^n}{dx^n} \int_{x}^{+\infty} \frac{f(\xi)}{(\xi-x)^{\alpha-n+1}}\, d\xi. \quad (5.41)

When $1 < \alpha < 2$, $S_k = \exp\left[ D\left( (\lambda + ik)^\alpha - \lambda^\alpha - ik\alpha\lambda^{\alpha-1} \right) \right]$ [35, 122], where $D(\alpha) = \frac{c}{\alpha(\alpha-1)}\Gamma(2-\alpha)$; the density $P_{ts}(x, t)$ satisfies

\partial_t P_{ts}(x, t) = \frac{\partial}{\partial x}\left( \sigma x P_{ts}(x, t) \right) + D(\alpha)\left( {}_{-\infty}D_x^{\alpha,\lambda} P_{ts}(x, t) + {}_{x}D_{+\infty}^{\alpha,\lambda} P_{ts}(x, t) \right), \quad 1 < \alpha < 2, \quad (5.42)

with the initial condition $P_{ts}(x, 0) = \delta(x - x_0)$. The left and right Riemann-Liouville tempered fractional derivatives are in this case defined as [11, 122]:

{}_{-\infty}D_x^{\alpha,\lambda} f(x) = e^{-\lambda x}\, {}_{-\infty}D_x^{\alpha}\left[ e^{\lambda x} f(x) \right] - \lambda^\alpha f(x) - \alpha\lambda^{\alpha-1} f'(x), \quad 1 < \alpha < 2, \quad (5.43)

and

{}_{x}D_{+\infty}^{\alpha,\lambda} f(x) = e^{\lambda x}\, {}_{x}D_{+\infty}^{\alpha}\left[ e^{-\lambda x} f(x) \right] - \lambda^\alpha f(x) + \alpha\lambda^{\alpha-1} f'(x), \quad 1 < \alpha < 2. \quad (5.44)

The left and right Riemann-Liouville fractional derivatives ${}_{-\infty}D_x^{\alpha}$ and ${}_{x}D_{+\infty}^{\alpha}$ can be numerically implemented via the Grunwald-Letnikov finite difference form for $0 < \alpha < 1$ [121, 122, 143]:

{}_{-\infty}D_x^{\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \frac{1}{h^\alpha} W_j f(x - jh), \quad 0 < \alpha < 1;
{}_{x}D_{+\infty}^{\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \frac{1}{h^\alpha} W_j f(x + jh), \quad 0 < \alpha < 1. \quad (5.45)

For $1 < \alpha < 2$, ${}_{-\infty}D_x^{\alpha}$ and ${}_{x}D_{+\infty}^{\alpha}$ are implemented via the shifted Grunwald-Letnikov finite difference form [122, 143]:

{}_{-\infty}D_x^{\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \frac{1}{h^\alpha} W_j f(x - (j-1)h), \quad 1 < \alpha < 2;
{}_{x}D_{+\infty}^{\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \frac{1}{h^\alpha} W_j f(x + (j-1)h), \quad 1 < \alpha < 2. \quad (5.46)

Note that the weights $W_k = \binom{\alpha}{k}(-1)^k = \frac{\Gamma(k-\alpha)}{\Gamma(-\alpha)\Gamma(k+1)}$ can be computed recursively via $W_0 = 1$, $W_1 = -\alpha$, $W_{k+1} = \frac{k-\alpha}{k+1} W_k$. In the following numerical experiments, we solve equations (5.37) and (5.42) using the first-order fractional finite difference schemes above for the spatial discretization, on a sufficiently large domain $[-L, L]$, and a fully-implicit scheme for the temporal discretization with time step $\Delta t$. We denote the approximate solution of $P_{ts}(x_i, t_n)$ by $P_i^n$, with grid points $x_i = \frac{2L}{N_x} i - L = hi - L$, $i = 0, 1, \dots, N_x$, where $h$ is the grid size. When $0 < \alpha < 1$, we use the following fully-implicit discretization scheme for Equation (5.37):

\frac{P_i^{n+1} - P_i^n}{\Delta t} = \left( \sigma + 2D(\alpha)\lambda^\alpha \right) P_i^{n+1} + \sigma x_i\, \frac{P_{i+1}^{n+1} - P_{i-1}^{n+1}}{2h} - \frac{D(\alpha)}{h^\alpha}\, e^{-\lambda x_i} \sum_{j=0}^{i} W_j e^{\lambda x_{i-j}} P_{i-j}^{n+1} - \frac{D(\alpha)}{h^\alpha}\, e^{\lambda x_i} \sum_{j=0}^{N_x-i} W_j e^{-\lambda x_{i+j}} P_{i+j}^{n+1}. \quad (5.47)
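One compact way to implement (5.47) is to assemble the right-hand side as a matrix $A$ and take backward-Euler steps by solving $(I - \Delta t\, A)P^{n+1} = P^n$. The sketch below uses illustrative parameters (not the thesis settings) and a normalized Gaussian bump in place of the delta initial condition; since the FP equation conserves probability, the total mass gives a cheap sanity check:

```python
import math
import numpy as np

# Sketch of scheme (5.47), 0 < alpha < 1, with illustrative toy parameters.
alpha, lam, c, sigma = 0.5, 1.0, 1.0, 0.1
L, Nx, dt, steps = 6.0, 400, 1e-3, 5
h = 2 * L / Nx
x = h * np.arange(Nx + 1) - L
D = (c / alpha) * math.gamma(1 - alpha)        # D(alpha) for 0 < alpha < 1

W = np.empty(Nx + 2)                           # Grunwald-Letnikov weights
W[0] = 1.0
for j in range(Nx + 1):
    W[j + 1] = (j - alpha) / (j + 1) * W[j]

N = Nx + 1
A = (sigma + 2 * D * lam**alpha) * np.eye(N)   # reaction term of (5.47)
for i in range(1, N - 1):                      # centered drift sigma*x*dP/dx
    A[i, i + 1] += sigma * x[i] / (2 * h)
    A[i, i - 1] -= sigma * x[i] / (2 * h)
for i in range(N):                             # tempered GL sums; note that
    for j in range(i + 1):                     # e^{-lam x_i} e^{lam x_{i-j}} = e^{-lam j h}
        A[i, i - j] -= D / h**alpha * W[j] * math.exp(-lam * j * h)
    for j in range(N - i):
        A[i, i + j] -= D / h**alpha * W[j] * math.exp(-lam * j * h)

# normalized Gaussian bump as a smooth stand-in for the delta initial condition
P = np.exp(-x**2 / (2 * 0.2**2)) / (0.2 * math.sqrt(2 * math.pi))
for _ in range(steps):                         # backward Euler: (I - dt*A) P_new = P
    P = np.linalg.solve(np.eye(N) - dt * A, P)
print(h * P.sum())                             # total mass, approximately conserved
```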

When $1 < \alpha < 2$, we use the following fully-implicit discretization scheme for Equation (5.42):

\frac{P_i^{n+1} - P_i^n}{\Delta t} = \left( \sigma - 2D(\alpha)\lambda^\alpha \right) P_i^{n+1} + \sigma x_i\, \frac{P_{i+1}^{n+1} - P_{i-1}^{n+1}}{2h} + \frac{D(\alpha)}{h^\alpha}\, e^{-\lambda x_i} \sum_{j=0}^{i+1} W_j e^{\lambda x_{i-j+1}} P_{i-j+1}^{n+1} + \frac{D(\alpha)}{h^\alpha}\, e^{\lambda x_i} \sum_{j=0}^{N_x-i+1} W_j e^{-\lambda x_{i+j-1}} P_{i+j-1}^{n+1}. \quad (5.48)

In both the CP approximation and the series representation, we numerically approximate the initial condition by delta sequences [4], either with sinc functions⁵

\delta_n^D = \frac{\sin(n\pi(x - x_0))}{\pi(x - x_0)}, \quad \lim_{n \to +\infty} \int_{-\infty}^{+\infty} \delta_n^D(x) f(x)\, dx = f(x_0), \quad (5.49)

or with Gaussian functions

\delta_n^G = \sqrt{\frac{n}{\pi}}\, \exp\left( -n(x - x_0)^2 \right), \quad \lim_{n \to +\infty} \int_{-\infty}^{+\infty} \delta_n^G(x) f(x)\, dx = f(x_0). \quad (5.50)

In Figure 5.5 we simulate the density evolution for the solution of Equation (5.2) obtained from the TFPDEs (5.37) and (5.42). The peak of the density moves towards smaller values of $x(t)$ due to the $-\sigma x(t;\omega)\, dt$ term; the noise $dL_t(\omega)$ changes the shape of the density.

⁵We approximate the initial condition by keeping the highest peak of $\delta_n^D$ in the center and setting the values on the rest of the domain to zero. After that, we normalize the area under the peak to one.


Figure 5.5: Zoomed-in plots of the density $P_{ts}(x, t)$ for the solution of Equation (5.2) at different times, obtained from solving Equation (5.37) for $\alpha = 0.5$ (left) and Equation (5.42) for $\alpha = 1.5$ (right): $\sigma = 0.4$, $x_0 = 1$, $c = 1$, $\lambda = 10$ (left); $\sigma = 0.1$, $x_0 = 1$, $c = 0.01$, $\lambda = 0.01$ (right). We use $N_x = 2000$ equidistant spatial points on $[-12, 12]$ (left) and $N_x = 2000$ points on $[-20, 20]$ (right). The time step is $\Delta t = 1 \times 10^{-4}$ (left) and $\Delta t = 1 \times 10^{-5}$ (right). The initial conditions are approximated by $\delta_{20}^D$ (left and right).

The integral form of Equation (5.2) is given by

x(t) = x_0 e^{-\sigma t} + e^{-\sigma t} \int_0^t e^{\sigma\tau}\, dL_\tau. \quad (5.51)

The mean and the second moment of the exact solution of Equation (5.2) are

E[x(t)] = x_0 e^{-\sigma t} \quad (5.52)

and

E[x^2(t)] = x_0^2 e^{-2\sigma t} + \frac{c}{\sigma}\left( 1 - e^{-2\sigma t} \right) \frac{\Gamma(2-\alpha)}{\lambda^{2-\alpha}}. \quad (5.53)
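The jump contribution in (5.53) is consistent with the stationary variance of a Levy-driven OU process, $\int y^2 \nu(dy)/(2\sigma)$, for the symmetric T$\alpha$S Levy measure $\nu(dy) = c\, e^{-\lambda|y|}/|y|^{\alpha+1}\, dy$. A quick numerical check of this identity (parameter values illustrative):

```python
import math

# Stationary limit of (5.53): (c/sigma) * Gamma(2-alpha) / lam^(2-alpha)
# should equal int y^2 nu(dy) / (2*sigma) for nu(dy) = c e^{-lam|y|}/|y|^{alpha+1} dy.
c, lam, alpha, sigma = 0.1, 1.0, 0.5, 0.4

def second_moment_nu(n=200000, ymax=60.0):
    h = ymax / n
    s = 0.0
    for k in range(1, n + 1):        # midpoint rule on (0, ymax], doubled by symmetry
        y = (k - 0.5) * h
        s += y**2 * c * math.exp(-lam * y) / y**(alpha + 1)
    return 2 * s * h

num = second_moment_nu() / (2 * sigma)
exact = (c / sigma) * math.gamma(2 - alpha) / lam**(2 - alpha)
print(num, exact)
```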

Let us define the errors of the first and second moments as

err_{1st}(t) = \frac{|E[x_{num}(t)] - E[x_{ex}(t)]|}{|E[x_{ex}(t)]|}, \quad err_{2nd}(t) = \frac{|E[x^2_{num}(t)] - E[x^2_{ex}(t)]|}{|E[x^2_{ex}(t)]|}. \quad (5.54)


5.7.2 Simulating density by CP approximation

Let us simulate the density $P_{cp}(x, t)$ of the solution $x(t)$ of Equation (5.2) by approximating the T$\alpha$S process $L_t$ by a CP process (density/CP), $\sum_{i=1}^{\infty} J_i^\delta I_{t \le T_i} + b_\delta t$ [35]. We solve Equation (5.36) for $P_{cp}(x, t)$ via the second-order Runge-Kutta (RK2) scheme for the temporal discretization and via Fourier collocation on a sufficiently large domain $[-L, L]$ with $N_x$ equidistant points $x_i = -L + \frac{2L}{N_x} i$, $i = 1, \dots, N_x$. For each $x_i$ we evaluate the integral in the last term, $\int_{-\infty}^{+\infty} P_{cp}(x_i - y, t)\, \frac{c\, e^{-\lambda|y|}}{|y|^{\alpha+1}}\, dy$, via the trapezoid rule, taking $y$ over all grid points other than $x_i$. We take $\delta = \frac{2L}{N_x}$ so as to include all the points of the Fourier collocation grid in this integration term.

We also simulate the moments of the solution of Equation (5.2) by PCM/CP. Through the integral form (5.51) of the solution, we directly sample the second moment by

E[x^2(t)] \approx x_0^2 e^{-2\sigma t} + e^{-2\sigma t}\, E[(J_1^\delta)^2] \sum_{i=1}^{Q_{cp}} E[e^{2\sigma T_i}]. \quad (5.55)

We generate $d$ collocation points for each RV ($J_1^\delta$ and $T_i$) in Equation (5.55) to obtain the moments.

In Figure 5.6, we plot the errors err$_{1st}$ and err$_{2nd}$ versus time for $0 < \alpha < 1$ and $1 < \alpha < 2$, by density/CP and by PCM/CP with the same jump truncation size $\delta$. The error of density/CP comes from: 1) neglecting the jumps smaller than $\delta$; 2) evaluating $\int_{-\infty}^{+\infty} P_{cp}(x - y, t)\, \frac{c\, e^{-\lambda|y|}}{|y|^{\alpha+1}}\, dy$ by the trapezoid rule; 3) the numerical integration to calculate $U(\delta)$; 4) the delta-sequence approximation of the initial condition. The error of PCM/CP comes from: 1) the jump truncation at size $\delta$; 2) the finite number $Q_{cp}$ of terms in the CP approximation; 3) the numerical integration for each $E[(J_1^\delta)^2]$ and $E[e^{2\sigma T_i}]$; 4) the error from the long-term


Figure 5.6: Density/CP vs. PCM/CP with the same $\delta$: errors err$_{1st}$ and err$_{2nd}$ of the solution of Equation (5.2) versus time, obtained by the density Equation (5.36) with CP approximation and by PCM/CP in Equation (5.55). $c = 0.5$, $\alpha = 0.95$, $\lambda = 10$, $\sigma = 0.01$, $x_0 = 1$ (left); $c = 0.01$, $\alpha = 1.6$, $\lambda = 0.1$, $\sigma = 0.02$, $x_0 = 1$ (right). In the density/CP: RK2 with time step $\Delta t = 2 \times 10^{-3}$, 1000 Fourier collocation points on $[-12, 12]$ in space, $\delta = 0.012$, RelTol $= 1 \times 10^{-8}$ for $U(\delta)$, and initial condition $\delta_{20}^D$ (left and right). In the PCM/CP: the same $\delta = 0.012$ as in the density/CP.

integration in the generalized polynomial chaos (gPC), resulting from the fact that only a finite number of polynomial modes is considered, so the error accumulates in time (an error due to random frequencies) [181]. First, we observe that the error grows faster in time for the PCM/CP than for the density/CP in both plots of Figure 5.6. Second, we observe that when $L_t$ has more large jumps ($\lambda = 0.1$, right), the PCM/CP with only $Q_{cp} = 2$ is more accurate than the density/CP with the same $\delta = 0.012$. (Larger values of $Q_{cp}$ maintain the same level of accuracy as $Q_{cp} = 2$ or 5 here, because the error is mainly determined by the choice of $\delta$.)

5.7.3 Simulating density by TFPDEs

As an alternative method to simulate the density of the solution of Equation (5.2), we simulate the density $P_{ts}(x, t)$ by solving the TFPDEs (5.37) for $0 < \alpha < 1$ and (5.42) for $1 < \alpha < 2$. The corresponding finite difference schemes are given in


equations (5.47) and (5.48).

In Figure 5.7, we plot the errors for the second moments versus time both by the

PCM/CP and the TFPDEs. In the TFPDEs, we solve equations (5.37) and (5.42)

by the finite difference schemes given in equations (5.47) and (5.48). The error of

the TFPDEs mainly comes from: 1) approximating the initial condition by delta

sequences; 2) temporal or spatial errors from solving the equations (5.37) and (5.42).

In Figure 5.7 we experiment with $\lambda = 10$ (left, fewer large jumps) and with $\lambda = 1$ (right, more large jumps). First, we observe that with the same spatial resolution for $x(t)$ ($N_x = 2000$ on $[-12, 12]$) and temporal resolution ($\Delta t = 2.5 \times 10^{-5}$), the err$_{2nd}$ errors from the TFPDE method grow more slowly for $\lambda = 1$ than for $\lambda = 10$, because a more refined grid is required to resolve the behavior of the more numerous small jumps (larger $\lambda$) between different values of $x(t)$. Second, we observe that the error from the PCM/CP grows slightly faster than that of the TFPDE method: in PCM/CP, the error from the long-term integration is inevitable with a fixed number of collocation points $d$. Third, without the dimension reduction in the PCM/CP (i.e., computing on $d^{2Q_{cp}}$ points rather than $d(Q_{cp}+1)$ points), the TFPDE consumes much less CPU time than the PCM/CP while achieving higher accuracy.

In Figure 5.8, we plot the density $P_{ts}(x, t)$ obtained from the TFPDEs in equations (5.37) and (5.42) at two different final times $T$, together with the empirical histograms obtained from the MC/CP with the first-order Euler scheme

x_{n+1} - x_n = -\sigma x_n \Delta t + \left( L_{t_{n+1}} - L_{t_n} \right). \quad (5.56)
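A stripped-down MC sketch of the Euler scheme (5.56): sampling the truncated T$\alpha$S jump sizes is involved, so as a stand-in the code below drives the scheme with a compound Poisson process with symmetric exponential jump marks (an assumption, not the thesis noise). Since the mean $E[x(t)] = x_0 e^{-\sigma t}$ is unaffected by any mean-zero jump part, it gives a built-in check:

```python
import math
import random

# Euler scheme (5.56) for dx = -sigma*x dt + dL_t, with a stand-in CP noise:
# symmetric +/- Exp(1) jump marks, at most one jump per small time step
# (a Bernoulli(rate*dt) approximation of the Poisson jump count).
random.seed(1)
sigma, x0, T, dt = 0.5, 1.0, 1.0, 1e-2
rate = 2.0                        # jump intensity (plays the role of U(delta))
paths, nsteps = 50000, int(T / dt)

acc = 0.0
for _ in range(paths):
    x = x0
    for _ in range(nsteps):
        dL = 0.0
        if random.random() < rate * dt:
            dL = random.choice((-1.0, 1.0)) * random.expovariate(1.0)
        x += -sigma * x * dt + dL                  # Euler step (5.56)
    acc += x
print(acc / paths, x0 * math.exp(-sigma * T))      # empirical vs exact mean
```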

Although we do not have the exact formula for the distribution of x(t), we observe

that the density from MC/CP matches the density from TFPDEs, indicated by the


Figure 5.7: TFPDE vs. PCM/CP: error err$_{2nd}$ of the solution of Equation (5.2) versus time, with $\lambda = 10$ (left) and $\lambda = 1$ (right). Problem parameters: $\alpha = 0.5$, $c = 2$, $\sigma = 0.1$, $x_0 = 1$ (left and right). For PCM/CP: RelTol $= 1 \times 10^{-8}$ for $U(\delta)$ (left and right). For the TFPDE: finite difference scheme (5.47) with $\Delta t = 2.5 \times 10^{-5}$, $N_x$ equidistant points on $[-12, 12]$, and initial condition $\delta_{40}^D$ (left and right).

one-sample K-S test defined in Equation (5.17).

5.8 Conclusions

In this chapter we first compared the CP approximation and the series representation for a T$\alpha$S process by matching the empirical histogram of an inverse Gaussian subordinator with its known distribution. The one-sample K-S test values indicated a better fit between the histogram and the distribution as we decreased the jump truncation size $\delta$ in the CP approximation and increased the number of terms $Q_s$ in the series representation. When the computational budget is limited (large $\delta$ in the CP approximation, small $Q_s$ in the series representation), the large jumps are better approximated by the series representation.

Next we simulated the moment statistics for stochastic reaction-diffusion equa-

tions with additive TαS white noises, via four stochastic simulation methods: MC/CP,

MC/S, PCM/CP, and PCM/S. First, in a comparison between the MC/CP and the


Figure 5.8: Zoomed-in plots of the density $P_{ts}(x, T)$ obtained by solving the TFPDE (5.37), together with the empirical histogram by MC/CP, at $T = 0.5$ (left, KS = 0.017559) and $T = 1$ (right, KS = 0.015865): $\alpha = 0.5$, $c = 1$, $\lambda = 1$, $x_0 = 1$, and $\sigma = 0.01$ (left and right). In the MC/CP: sample size $s = 10^5$, 316 bins, $\delta = 0.01$, RelTol $= 1 \times 10^{-8}$ for $U(\delta)$, time step $\Delta t = 1 \times 10^{-3}$ (left and right). In the TFPDE: finite difference scheme (5.47) with $\Delta t = 1 \times 10^{-5}$ in time, $N_x = 2000$ equidistant points on $[-12, 12]$ in space, and initial conditions approximated by $\delta_{40}^D$ (left and right). We perform the one-sample K-S tests to quantify how well the two methods match.

MC/S, we observed that for almost the same accuracy, MC/CP costs less CPU time than MC/S. We also observed that in the MC/CP, decreasing $\delta$ was more effective in reducing the error when the tail of the Levy measure of the T$\alpha$S process was more strongly tempered. Second, in a comparison between the PCM/CP and the PCM/S, we observed that for a smaller sample size the PCM/CP converged faster because it captured the feature of the larger jumps faster than the PCM/S, while for a larger sample size the PCM/S converged faster than the PCM/CP. However, the convergence of both PCM/CP and PCM/S slows down at higher accuracy due to the limitations discussed in Section 3.2. We also introduced a dimension reduction in the PCM/CP and the PCM/S for this problem in Section 3.2. Third, we compared the efficiency between MC/CP and PCM/CP, and between MC/S and PCM/S. With the dimension reduction, the PCM dramatically outperforms the MC in evaluating the moment statistics; without the dimension reduction, the PCM/S still outperforms the MC/S at the same accuracy.

Subsequently, we simulated the stochastic overdamped Langevin equations with


additive T$\alpha$S white noises. We provided two different ways of simulating the generalized FP equations that describe the density of the solution: first, we solved the generalized FP equation as an integral equation by approximating the T$\alpha$S process by a CP process; then, we solved the generalized FP equations as TFPDEs, in different forms for $0 < \alpha < 1$ and $1 < \alpha < 2$. The integral equations served as a good tool to predict the moment statistics in Section 4.2. We observed that the TFPDEs provided more accurate moment statistics than the PCM/CP, at much lower computational cost, when no dimension reduction is used in the PCM/CP. We also observed that the empirical histogram via MC/CP matches the PDF from the TFPDEs.

Finally, we point out that the four stochastic simulation methods (MC/CP, MC/S, PCM/CP, PCM/S) and the simulation of the generalized FP equations are not restricted to SPDEs driven by T$\alpha$S processes; they are applicable to SPDEs driven by any Levy jump process with a known Levy measure.


Chapter Six

Numerical methods for SPDEs

with additive multi-dimensional

Levy jump processes


We develop both probabilistic and deterministic approaches for moment statistics of

parabolic stochastic partial differential equations (SPDEs) driven by multi-dimensional

infinite activity pure jump Levy processes. We consider the dependence structure among the components of the Levy process via LePage's series representation and Levy copulas. We compare the convergence of the moment statistics computed by the probability collocation method (PCM, probabilistic) with respect to the truncation of the series representation, and by the Monte Carlo (MC) method (probabilistic) with respect to the number of samples. In the deterministic method, we derive and simulate

to the number of samples. In the deterministic method, we derive and simulate

the Fokker-Planck (FP) equation for the joint probabilistic density function (PDF)

of the stochastic ordinary differential equation (SODE) system decomposed from

the SPDE. As an example, we simulate a stochastic diffusion equation and choose

the marginal process of the multi-dimensional Levy processes to be tempered α-

stable (TS) processes, where the joint PDF in the deterministic approach satisfies

a tempered fractional (TF) PDE. We compare the joint PDF of the SODE system

simulated from the FP equations with the empirical histograms simulated by MC.

We compare the moment statistics of the solution for the diffusion equation ob-

tained from the joint PDF by the FP equations with that from PCM. In moderate

dimension d = 10 (for 10-dimensional Levy jump processes), we use the analysis

of variance (ANVOA) decomposition to obtain marginal joint PDF of the SODE

system from the 10-dimensional FP equation, as far as moment statistics in lower

orders are concerned.


6.1 Literature review of parameterized dependence

structure in multi-dimensional Gaussian pro-

cesses

A Levy process comprises three parts: a Gaussian process, a pure jump Levy process, and a deterministic drift. Although in this chapter we simulate the UQ for SPDEs driven by multi-dimensional Levy jump processes with correlated components, we first introduce the parameterized dependence structure by copulas for the Gaussian part, which is indeed well established. Bivariate distributions were reviewed by Mardia (1972) [119]. Multivariate distributions were reviewed by Johnson and Kotz (1972) [88] and by Jensen [84]. For random vectors with non-Gaussian marginal distributions, elliptically contoured distributions were constructed by Fang (1997) [47], and multivariate hyperbolic distributions were constructed by Barndorff-Nielsen (1987) [13]. Copulas were introduced to separate the dependence structure of a multivariate random vector from the marginal distributions of its components [3, 126], via Sklar's theorem (1959) [163]. Eight one-parameter families of copulas and several two-parameter families have been constructed to describe the dependence structure of multivariate random vectors (1993) [70, 71, 86, 87].

(1993) [70, 71, 86, 87]. In a multi-dimensional Levy process, the Gaussian copula

for the Gaussian component can be recovered as a limit of the copula for this Levy

process [91].


6.2 Literature review of generalized FP equations

The Fokker-Planck (FP) equations are established in explicit forms for SDEs driven

by Brownian motions [136]. The FP equation of a Levy flight in an external force

field (described by a Langevin equation) is a generalized fractional (in space) Fokker-

Planck (FFP) equation [52]. The FP equation of a continuous time random walk

(CTRW) with decoupled temporal and spatial memories is described as an FFP with

fractional derivatives (both in space and in time) [34]. Such CTRWs can describe the

self-similar dynamics of a particle in the vicinity of a surface with spatial and tempo-

ral invariances [190], however by simply replacing the integer-ordered derivatives by

fractional ones, the underlying stochastic process is not directly a Levy process. Also,

the FP equation for a Langevin equation driven by stochastic 'pulses' with Levy distributions acting at equally spaced times is also shown to be an FFP [188]. Alternatively,

the FFP can be derived from the conservation law and a generalized Fick’s law, where

the particle current is proportional to the fractional derivatives of the particle den-

sity [31]. However, explicit forms of FP equations for SDEs driven by non-Gaussian

Levy processes are only obtained in special cases such as nonlinear SODEs driven by

multiplicative or additive Levy stable noises [158]. In general, FP equations for non-

linear SDEs driven by multiplicative or additive Levy processes in both the Ito form

and the Marcus form are derived in terms of infinite series [168]. Some methods to

derive the generalized Fokker-Planck (FP) equation for Langevin equations driven by

Levy processes require finite moments of distributions [167]. However, the marginal

distributions of Levy flights do not have finite moments. Therefore, the derivation

of FP equations for Langevin equations driven by multi-dimensional additive Levy

flights is reconsidered by the Chapman-Kolmogorov equation for the Markovian pro-

cesses in the momentum space [53], relaxing the finite moments condition. The

generalized FP equations for Langevin equations driven by one-dimensional multi-


plicative Levy noise is derived by Fourier transformations [38]. The generalized FP

equations, as fractional PDEs (in space), for one-dimensional Levy flights subject

to no force, constant force, and linear Hookean force in a Langevin equation (with

additive noise) are solved explicitly [85].

6.3 Notation

$\vec{L}_t$: multi-dimensional Levy process
$(c, \lambda, \alpha)$: coefficients of the tempered α-stable (TαS) distribution
$\delta_{ij}$: Kronecker delta
$I$: indicator function
$E$: expectation
$\nu$: Levy measure
$Q$: number of terms kept in the truncated series representation
$P$: probability density function
$s$: number of samples in a Monte Carlo (MC) simulation
$d$: dimension of the multi-dimensional Levy process
$q$: number of quadrature points in the probability collocation method (PCM)
${}_{-\infty}D_x^{\alpha}$: left Riemann-Liouville fractional derivative
${}_{x}D_{+\infty}^{\alpha}$: right Riemann-Liouville fractional derivative
${}_{-\infty}D_x^{\alpha,\lambda}$: left Riemann-Liouville tempered fractional (TF) derivative
${}_{x}D_{+\infty}^{\alpha,\lambda}$: right Riemann-Liouville tempered fractional derivative
$\kappa$: effective dimension of the analysis of variance (ANOVA) expansion
$F_\tau$: Clayton family of copulas with parameter $\tau$
$F$: Levy copula
$U$: tail integral of the Levy measure
$\Gamma$: gamma function
$(\gamma_1, \gamma_2, \gamma_3)$: parameters in the second-order finite difference scheme for TF derivatives
$S_{ij}$: sensitivity index in the ANOVA expansion


6.4 Diffusion model driven by multi-dimensional Levy jump process

We solve the following parabolic diffusion model driven by a d-dimensional pure

jump Levy white noise ~L(t;ω) by probabilistic simulation methods (MC and PCM)

and a deterministic method (generalized FP equations):

du(t, x; ω) = µ (∂²u/∂x²) dt + Σ_{i=1}^d f_i(x) dL_i(t; ω), x ∈ [0, 1],
u(t, 0) = u(t, 1) = 0 (boundary condition),
u(0, x) = u_0(x) (initial condition), (6.1)

where the components of ~L(t; ω), L_i(t; ω), i = 1, ..., d, are mutually dependent and have infinite activity [35]. The richness of the dependence structures between the components of ~L(t; ω) and of the jump dynamics of each component allows us to study sufficiently nontrivial small-time behavior; therefore a Brownian motion component is not necessary in this infinite-activity model [75]. f_i(x), i = 1, 2, ..., is a set of orthonormal basis functions on [0, 1], such that ∫_0^1 f_i(x) f_j(x) dx = δ_ij.¹ Let us take f_k(x) = √2 sin(πkx), x ∈ [0, 1], k = 1, 2, 3, ... The solution of Equation

(6.1) exists and is unique [2]. Parabolic SPDEs driven by white noises were initially

introduced in a stochastic model of neural response [179]. The weak solutions of

Equation (6.1) were defined, and their existence, uniqueness and regularity were

studied [180]. Malliavin calculus was developed to study the absolute continuity of

the solution for parabolic SPDEs driven by white noises such as Equation (6.1) [12,

140].

We expand the solution of Equation (6.1) in the same basis f_i(x), i = 1, 2, ..., as the noise:

u(x, t; ω) = Σ_{i=1}^{+∞} u_i(t; ω) f_i(x). (6.2)

¹δ_ij is the Dirac delta function.

We define the inner product of two integrable functions f(x) and g(x) on [0, 1] to be

⟨f(x), g(x)⟩ = ∫_0^1 f(x) g(x) dx. (6.3)

Then, by performing a Galerkin projection [111]

⟨u(t, x; ω), f_i(x)⟩ = ∫_0^1 u(t, x; ω) f_i(x) dx = u_i(t; ω) (6.4)

of Equation (6.1) onto f_i(x), i = 1, 2, ..., we obtain a linear system of SODEs:

du_1(t) = µ D_11 u_1(t) dt + dL_1,
du_2(t) = µ D_22 u_2(t) dt + dL_2,
...
du_k(t) = µ D_kk u_k(t) dt + dL_k,
..., (6.5)

where the coefficient D_nm is defined as

D_nm = ⟨d²f_m/dx², f_n⟩ = −(πm)² δ_mn. (6.6)

We briefly write Equation (6.5) in vector form:

d~u = ~C(~u, t) dt + d~L(t), (6.7)

where ~C is a linear functional.
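As a quick numerical sanity check of Equation (6.6), the following sketch (our own illustration; the helper names `trap` and `D_matrix` are not from the thesis) evaluates ⟨d²f_m/dx², f_n⟩ for f_k(x) = √2 sin(πkx) by composite trapezoid quadrature:

```python
import numpy as np

def trap(y, x):
    # composite trapezoid rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def D_matrix(K, n_quad=2001):
    """Numerically check D_nm = <f_m'', f_n> = -(pi m)^2 delta_mn
    for f_k(x) = sqrt(2) sin(pi k x) on [0, 1]."""
    x = np.linspace(0.0, 1.0, n_quad)
    k = np.arange(1, K + 1)[:, None]
    f = np.sqrt(2.0) * np.sin(np.pi * k * x)        # rows: f_1, ..., f_K
    d2f = -(np.pi * k) ** 2 * f                      # exact second derivatives
    # D[n, m] = int_0^1 f_m''(x) f_n(x) dx
    return np.array([[trap(d2f[m] * f[n], x) for m in range(K)] for n in range(K)])

print(np.round(D_matrix(3), 4))
```

The output should be close to diag(−π², −4π², −9π²), with off-diagonal entries near zero.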


6.5 Simulating multi-dimensional Levy pure jump processes

Although one-dimensional jump models are constructed in finance with Levy pro-

cesses [16, 98, 112], many financial models require multi-dimensional Levy jump pro-

cesses with dependent components [35], such as basket option pricing [106], portfolio

optimization [43], and risk scenarios for portfolios [35]. Historically, multi-dimensional Gaussian models have been widely applied in finance because of the simplicity of describing their dependence structures [148]; however, in some applications we must take jumps in price processes into account [29, 30]. We summarize the applications in Figure 6.1.

Figure 6.1: An illustration of the applications of multi-dimensional Levy jump models in mathematical finance.

In general, the increments of a multi-dimensional Levy jump process do not have a closed form. Therefore, there are, in general, three approximation methods to simulate a multi-dimensional Levy jump process ~L(t), as shown in Figure 6.2: 1. a radial decomposition of the Levy measure ν by LePage's series representation [104]; 2. subordinating a multi-dimensional Brownian motion by a one-dimensional subordinator [157]; 3. the Levy copula [91]. In this paper, we experiment with the first and the third methods, since the second method describes only a narrow range of dependence structures [35].

Figure 6.2: Three ways to correlate Levy pure jump processes.

6.5.1 LePage's series representation with radial decomposition of Levy measure

LePage's series representation [35, 104] of a multi-dimensional Levy process allows us to specify the distributions of the size and of the direction of the jumps separately. Let us consider the following Levy measure ν in R^d with a radial decomposition [35]:

ν(A) = ∫_{S^{d−1}} p(d~θ) ∫_0^{+∞} I_A(r~θ) σ(dr, ~θ), for A ⊂ R^d, (6.8)


where p is a probability measure on the unit sphere S^{d−1} in R^d (for the direction of the jumps) and σ(·, ~θ) is a measure on (0, +∞) for each fixed ~θ ∈ S^{d−1} (for the size of the jumps). I_A is the indicator function of a set A. Let us consider a d-dimensional TS process with parameters (c, α, λ) [35, 156] and a Levy measure in the radial decomposition given in Equation (6.8):²

ν_{rθ}(dr, d~θ) = σ(dr, ~θ) p(d~θ) = (c e^{−λr} dr / r^{1+α}) (Γ(d/2) d~θ / (2π^{d/2})), r ∈ (0, +∞), ~θ ∈ S^{d−1}. (6.9)

With LePage's series representation for jump processes with a Levy measure as in Equation (6.8), and the representation of TS distributions by random variables (RVs) [153, 154, 155, 156], a TS jump process in R^d with the Levy measure given in Equation (6.9) can be represented as follows:

~L(t) = Σ_{j=1}^{+∞} ε_j [(αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}] (θ_{j1}, θ_{j2}, ..., θ_{jd}) I_{U_j ≤ t}, for t ∈ [0, T]. (6.10)

In Equation (6.10), ε_j, η_j, U_j, and ξ_j are sequences of i.i.d. RVs such that P(ε_j = 0) = P(ε_j = 1) = 1/2, η_j ∼ Exponential(λ), U_j ∼ Uniform(0, T), and ξ_j ∼ Uniform(0, 1). Let Γ_j be the arrival times in a Poisson process with unit rate. (θ_{j1}, θ_{j2}, ..., θ_{jd}) is a random vector uniformly distributed on the unit sphere S^{d−1}. It can be simulated by generating d independent Gaussian RVs (G_1, G_2, ..., G_d) with N(0, 1) distributions [125]:

(θ_{j1}, θ_{j2}, ..., θ_{jd}) = (G_1, G_2, ..., G_d) / √(G_1² + G_2² + ... + G_d²). (6.11)

²2π^{d/2}/Γ(d/2) is the surface area of the unit sphere S^{d−1} in R^d.
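As an illustration, the truncated series (6.10) can be sampled with a short script (a sketch under our own conventions; the function name, the seed, and the truncation level Q are ours, and ε_j ∈ {0, 1} follows the convention above):

```python
import numpy as np

def tempered_stable_path(T, d, c, alpha, lam, Q, seed=None):
    """Truncated LePage series (6.10) with Q terms for a d-dimensional
    TS pure jump process with Levy measure (6.9); a sketch only."""
    rng = np.random.default_rng(seed)
    Gamma = np.cumsum(rng.exponential(1.0, Q))           # Poisson arrival times
    eps = rng.integers(0, 2, Q).astype(float)            # P(eps = 0) = P(eps = 1) = 1/2
    eta = rng.exponential(1.0 / lam, Q)                  # Exponential(lambda)
    xi = rng.uniform(0.0, 1.0, Q)
    U = rng.uniform(0.0, T, Q)                           # jump instants
    G = rng.standard_normal((Q, d))
    theta = G / np.linalg.norm(G, axis=1, keepdims=True) # uniform on S^{d-1}, Eq. (6.11)
    # jump sizes: (alpha*Gamma_j/(2 c T))^{-1/alpha}  ∧  eta_j * xi_j^{1/alpha}
    size = np.minimum((alpha * Gamma / (2.0 * c * T)) ** (-1.0 / alpha),
                      eta * xi ** (1.0 / alpha))
    def L(t):
        active = U <= t
        return (eps[active, None] * size[active, None] * theta[active]).sum(axis=0)
    return L

L = tempered_stable_path(T=1.0, d=2, c=1.0, alpha=0.5, lam=5.0, Q=20, seed=0)
print(L(0.5), L(1.0))
```

The returned closure evaluates the truncated path at any t ∈ [0, T]; increasing Q refines the approximation.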


6.5.2 Series representation with Levy copula

As an alternative way of describing the dependence structure between components, the Levy measure ν of an R^d-valued Levy jump process ~L(t) is uniquely determined by the marginal tail integrals and the Levy copula [35, 91]. As an example, let us consider a bivariate TS Clayton process; this can be generalized to R^d [75]. The dependence structure between the two components in each corner (++, −+, −−, +−) is described by the following Clayton family of copulas with a parameter τ:³ [91]

F_τ(u, v) = (u^{−τ} + v^{−τ})^{−1/τ}, u, v, τ > 0. (6.12)

We construct the Levy copula including the four corners to be [35]

F(x_1, x_2) = F_{++}(|x_1|/2, |x_2|/2) I_{x_1≥0, x_2≥0} + F_{−−}(|x_1|/2, |x_2|/2) I_{x_1≤0, x_2≤0} − F_{+−}(|x_1|/2, |x_2|/2) I_{x_1≥0, x_2≤0} − F_{−+}(|x_1|/2, |x_2|/2) I_{x_1≤0, x_2≥0}, (6.13)

where F_{++} = F_{−+} = F_{+−} = F_{−−} = F_τ. Let us take the marginal Levy measures of the components L_1 and L_2 to be TαS processes with Levy measure

ν_{L_1^+}(x) = ν_{L_1^−}(x) = ν_{L_2^+}(x) = ν_{L_2^−}(x) = c e^{−λ|x|} / |x|^{1+α}, (6.14)

where L_1^+ denotes the positive jump part of component L_1, L_1 = L_1^+ − L_1^−, and L_2 = L_2^+ − L_2^−. We consider the independent subordinators (L_1^{++}, L_2^{++}), (L_1^{+−}, L_2^{+−}), (L_1^{−+}, L_2^{−+}), and (L_1^{−−}, L_2^{−−}) on each corner (++, +−, −+, −−) separately, where

L_1^+ = L_1^{++} + L_1^{+−}, L_1^− = L_1^{−+} + L_1^{−−}, L_2^+ = L_2^{++} + L_2^{−+}, L_2^− = L_2^{+−} + L_2^{−−}. (6.15)

³When τ → ∞, the two components are completely dependent; when τ → 0, they are independent.


ν_{L_1^+}(x) is the Levy measure of the 1D subordinator L_1^+. Therefore, the two-dimensional Levy measure on the four corners of R² is

ν(x_1, x_2) = ν_{++}(x_1, x_2) + ν_{+−}(x_1, x_2) + ν_{−+}(x_1, x_2) + ν_{−−}(x_1, x_2), (6.16)

where ν_1^{++}(x_1) and ν_2^{++}(x_2) are the marginal Levy measures in the ++ corner:

ν_1^{++}(x_1) = (1/2)(c e^{−λx_1} / x_1^{1+α}) dx_1 I_{x_1≥0}, ν_2^{++}(x_2) = (1/2)(c e^{−λx_2} / x_2^{1+α}) dx_2 I_{x_2≥0}. (6.17)

Therefore, the tail integrals U_1^{++} and U_2^{++} in the ++ corner are

U_1^{++}(x) = ∫_x^{+∞} (1/2)(c e^{−λx_1} / x_1^{1+α}) dx_1, U_2^{++}(x) = ∫_x^{+∞} (1/2)(c e^{−λx_2} / x_2^{1+α}) dx_2. (6.18)

The tail integrals in the four corners are related to the Levy copulas on the four corners by:

U_{++}(x, y) = F_{++}((1/2)U_1^+(x), (1/2)U_2^+(y)), x ≥ 0, y ≥ 0, (6.19)
U_{−−}(x, y) = F_{−−}((1/2)U_1^−(x), (1/2)U_2^−(y)), x ≤ 0, y ≤ 0, (6.20)
U_{+−}(x, y) = −F_{+−}((1/2)U_1^+(x), (1/2)U_2^−(y)), x ≥ 0, y ≤ 0, (6.21)
U_{−+}(x, y) = −F_{−+}((1/2)U_1^−(x), (1/2)U_2^+(y)), x ≤ 0, y ≥ 0. (6.22)

The tail integrals are related to the two-dimensional Levy measure ν of (L_1, L_2) by:

U_{++}(x, y) = ν([x, ∞) × [y, ∞)), x ≥ 0, y ≥ 0, (6.23)
U_{−−}(x, y) = ν((−∞, x] × (−∞, y]), x ≤ 0, y ≤ 0, (6.24)
U_{+−}(x, y) = −ν([x, ∞) × (−∞, y]), x ≥ 0, y ≤ 0, (6.25)
U_{−+}(x, y) = −ν((−∞, x] × [y, ∞)), x ≤ 0, y ≥ 0. (6.26)

The Levy measure in the ++ corner can be calculated by

ν_{++}(x_1, x_2) = [∂²F_{++}(y_1, y_2)/∂y_1∂y_2]|_{y_1 = U_1^{++}(x_1), y_2 = U_2^{++}(x_2)} ν_1^{++}(x_1) ν_2^{++}(x_2). (6.27)

By the symmetry assumption in Equation (6.14), we have

ν_{+−}(x_1, x_2) = ν_{++}(x_1, −x_2), ν_{−+}(x_1, x_2) = ν_{++}(−x_1, x_2), ν_{−−}(x_1, x_2) = ν_{++}(−x_1, −x_2). (6.28)

We can repeat the same procedure from Equation (6.17) to Equation (6.27) to calculate the Levy measure in the other three corners (+−, −−, −+). F_{++} in Equation (6.27) is given by the Clayton copula in Equation (6.12) with correlation length τ; therefore

∂²F_{++}(x_1, x_2)/∂x_1∂x_2 = (1 + τ) x_1^{τ−1} x_2^{τ−1} (x_1^{−τ} + x_2^{−τ})^{−1/τ} / (x_1^τ + x_2^τ)². (6.29)
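The closed form (6.29) is easy to check numerically; the sketch below (our own, with hypothetical helper names) compares it against a central finite-difference approximation of the mixed derivative of F_τ:

```python
import numpy as np

def clayton_F(u, v, tau):
    # Clayton family of copulas, Eq. (6.12)
    return (u ** (-tau) + v ** (-tau)) ** (-1.0 / tau)

def clayton_mixed_deriv(u, v, tau):
    # closed form (6.29)
    return ((1 + tau) * u ** (tau - 1) * v ** (tau - 1)
            * (u ** (-tau) + v ** (-tau)) ** (-1.0 / tau)
            / (u ** tau + v ** tau) ** 2)

# central finite-difference check of the mixed derivative
u, v, tau, h = 0.7, 0.4, 2.0, 1e-5
fd = (clayton_F(u + h, v + h, tau) - clayton_F(u + h, v - h, tau)
      - clayton_F(u - h, v + h, tau) + clayton_F(u - h, v - h, tau)) / (4 * h * h)
print(fd, clayton_mixed_deriv(u, v, tau))
```

The two printed values should agree to several digits.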

Let us visualize the Levy measure (with c = 0.1, α = 0.5, λ = 5, for different values of τ) in Figure 6.3 (on the four corners) and Figure 6.4 (only on the ++ corner). We observe from Figure 6.4 that when τ (the correlation length) in the Clayton copula is larger, the peak of the Levy measure concentrates more and more on a line (therefore the two components are more and more correlated in jumps, as we see in Figure 6.6 below).

Figure 6.3: The Levy measures of bivariate tempered stable Clayton processes with different dependence strength (described by the correlation length τ) between their L_1 and L_2 components.

Figure 6.4: The Levy measures of bivariate tempered stable Clayton processes with different dependence strength (described by the correlation length τ) between their L_1^{++} and L_2^{++} components (only in the ++ corner). It shows how the dependence structure changes with respect to the parameter τ in the Clayton family of copulas.


Notice:

• Although, for a given two-dimensional Levy measure ν, the tail integrals U_{++}, U_{+−}, U_{−+}, U_{−−} are not unique, here we start from the tail integrals, so the Levy measure ν is unique.

• The factor 1/2 in Equation (6.13) results from the constraints a function must satisfy to be a Levy copula, such as F(x, ∞) − F(x, −∞) = x for x > 0; therefore the factors must add up to 1.

• In practice, one only generates the two-dimensional Levy measure on the ++ corner and then changes the signs of the variables, to avoid confusion of signs.

• When deriving equations for the joint PDF, the ++ corner Levy measure is needed.

There are two series representations of this bivariate TS Clayton process in the ++ corner⁴ as a subordinator (L_1^{++}(t), L_2^{++}(t)).

In the first kind, the RVs are not completely independent [35]: for t ∈ [0, T],

L_1^{++}(t) = Σ_{j=1}^{+∞} U_1^{++(−1)}(Γ_j) I_{[0,t]}(V_j),
L_2^{++}(t) = Σ_{j=1}^{+∞} U_2^{++(−1)}(F^{−1}(W_j|Γ_j)) I_{[0,t]}(V_j), (6.30)

where F^{−1} is defined as

F^{−1}(v_2|v_1) = v_1 (v_2^{−τ/(1+τ)} − 1)^{−1/τ}. (6.31)

V_i ∼ Uniform(0, 1) and W_i ∼ Uniform(0, 1). Γ_i is the i-th arrival time of a Poisson process with unit rate. V_i, W_i, and Γ_i are independent. The series converges almost surely and uniformly on t ∈ [0, T] [35].

⁴The process ~L(t) on the other three corners can be treated as subordinators as well. They can be calculated in the same way from the Levy copula as in the ++ corner.

Notice:

1. In L_1^{++}(t) the jumps are ordered by decreasing size (but in L_2^{++}(t) they might not be);

2. This series representation does not include the completely independent case (we cannot take τ to be 0);

3. This representation converges almost surely and uniformly on t ∈ [0, T].

A flaw of this representation: the jump-size RVs U_1^{++(−1)}(Γ_j) in the L_1^{++}(t) component are not independent, and therefore the U_2^{++(−1)}(F^{−1}(W_j|Γ_j)) are not independent either. One therefore cannot reduce the dimensionality of PCM with this representation, although it may be used for MC/S.

In the second kind, we replace L_1^{++}(t) by the series representation in Equation (6.10) with d = 1, substituting the jump-size RVs U_1^{++(−1)}(Γ_j) by [(αΓ_j/(2(c/2)T))^{−1/α} ∧ η_j ξ_j^{1/α}], which we know has a TS distribution [153, 154, 155, 156]: for t ∈ [0, T],

L_1^{++}(t) = Σ_{j=1}^{+∞} ε_{1j} ((αΓ_j/(2(c/2)T))^{−1/α} ∧ η_j ξ_j^{1/α}) I_{[0,t]}(V_j),
L_2^{++}(t) = Σ_{j=1}^{+∞} ε_{2j} U_2^{++(−1)}(H^{−1}(W_j | U_1^{++}((αΓ_j/(2(c/2)T))^{−1/α} ∧ η_j ξ_j^{1/α}))) I_{[0,t]}(V_j), (6.32)

where ε_{1j}, ε_{2j}, η_j, V_j, and ξ_j are sequences of i.i.d. RVs such that P(ε_{1j} = 0) = P(ε_{1j} = 1) = P(ε_{2j} = 0) = P(ε_{2j} = 1) = 1/2, η_j ∼ Exponential(λ), V_j ∼ Uniform(0, T), and ξ_j ∼ Uniform(0, 1). Let Γ_j be the arrival times in a Poisson process with unit rate. The PDF of [(αΓ_j/(2(c/2)T))^{−1/α} ∧ η_j ξ_j^{1/α}] for a fixed j has an explicit form, given in Chapter 5.


In Figure 6.5, we plot the two components of one sample path of a bivariate process (L_1^{++}(t), L_2^{++}(t)) described by the series representation in Equation (6.32). We observe that the jumps of the two components become more and more simultaneous as τ (the correlation length) increases.

Figure 6.5: Trajectories of the components L_1^{++}(t) (in blue) and L_2^{++}(t) (in green), whose dependence is described by the Clayton copula with dependence parameter τ (c = 1, α = 0.5, λ = 5; time step 10⁻² for sample paths; Q = 20 truncations). Observe how the trajectories become more similar as τ increases.

By specifying the size and the arrival time of the jumps separately, both series representations in Equations (6.30) and (6.32) for ~L(t) in the ++ corner can be written as a subordinator:

L_1^{++}(s) ≈ Σ_{j=1}^Q J_{1j}^{++} I_{[0,s]}(V_j), L_2^{++}(s) ≈ Σ_{j=1}^Q J_{2j}^{++} I_{[0,s]}(V_j), s ∈ [0, T], (6.33)

where Q is the number of truncations in the sum. We treat the four corners (++, −+, −−, +−) of ~L(t) = (L_1(t), L_2(t)) separately by the series representations (6.30) or (6.32) for subordinators as in Equation (6.33), and combine them as: for t ∈ [0, T],

L_1(t) ≈ Σ_{j=1}^Q [J_{1j}^{++} I_{[0,t]}(V_j^{++}) − J_{1j}^{−+} I_{[0,t]}(V_j^{−+}) − J_{1j}^{−−} I_{[0,t]}(V_j^{−−}) + J_{1j}^{+−} I_{[0,t]}(V_j^{+−})],
L_2(t) ≈ Σ_{j=1}^Q [J_{2j}^{++} I_{[0,t]}(V_j^{++}) + J_{2j}^{−+} I_{[0,t]}(V_j^{−+}) − J_{2j}^{−−} I_{[0,t]}(V_j^{−−}) − J_{2j}^{+−} I_{[0,t]}(V_j^{+−})]. (6.34)

We show sample paths of a bivariate Clayton Levy jump process (where the dependence between the two components is described by the Clayton family of Levy copulas with correlation length τ), considering all four corners, in Figure 6.6. We observe that when τ is larger (meaning that the subordinators on all four corners have stronger correlation in jumps between the two components), the two components tend to jump together with the same size, either with the same sign or with opposite signs.

We can also visualize the sample paths on the (L_1, L_2) plane, as in Figure 6.7, for different correlation lengths τ in the Clayton copula. We observe that when the dependence is stronger (τ is large), the paths are more likely to trace out a square, because there are equal chances of the same or opposite jumps between the components.

In Figure 6.8, we summarize the procedure of deriving the Levy measure of a multi-dimensional Levy process by constructing the dependence between components with a Levy copula.


Figure 6.6: Sample paths of (L_1, L_2) with marginal Levy measures given by Equation (6.14) and Levy copula given by Equation (6.13), with each corner copula (such as F_{++}) given by the Clayton copula with parameter τ (c = 1, α = 0.5, λ = 5, Q = 20, time step 10⁻²). Observe that when τ is larger, the 'flipping' motion happens more symmetrically, because jumps are equally likely to have the same sign or opposite signs with the same size.


Figure 6.7: Sample paths of bivariate tempered stable Clayton Levy jump processes (L_1, L_2) simulated by the series representation given in Equation (6.30) (c = 0.1, α = 0.5, λ = 5, Q = 2, time step 10⁻², t from 0 to 1). We simulate two sample paths for each value of τ.

Figure 6.8: An illustration of the procedure of deriving the Levy measure of a multi-dimensional Levy process by constructing the dependence between components with a Levy copula.


6.6 Generalized FP equation for SODEs with correlated Levy jump processes and ANOVA decomposition of the joint PDF

• Consider the following SODE system in the Ito sense (meaning that the multiplication in front of dL_t is defined by the Ito integral):

dX_t = f(X_t, t)dt + σ(X_{t−}, t)dL_t, X_0 = x, L_t ∈ R^d. (6.35)

• Notice in Equation (6.35):

– f(X_t, t)dt can be nonlinear;

– σ(X_{t−}, t)dL_t: multiplicative noises are considered here;

– The Levy process L_t has three parts: dL_t = b dt + dB_t + ∫_{||y||<1} y Ñ(dt, dy) + ∫_{||y||≥1} y N(dt, dy), where Ñ is the compensated Poisson random measure (the random measure minus its compensator);

– The triplet characterizing L_t is (b, A, ν): b is the drift, A is the covariance matrix of the Gaussian part of L_t, and ν is the Levy measure.

• The conclusion from that paper is that, when σ(x, t) = 1, the PDF p(x, t) satisfies the following Fokker-Planck equation:

∂p/∂t = −(∂/∂x)(f(x, t)p(x, t)) − b(∂/∂x)(σ(x, t)p(x, t)) + (1/2)A(∂²/∂x²)(σ²(x, t)p(x, t)) + ∫_{R^d\{0}} [p(x − y, t) − p(x, t) + I_{(−1,1)^d}(y) y (∂/∂x)(σ(x, t)p(x, t))] ν(dy). (6.36)


• Our SODE system is:

du_1(t) = µD_11 u_1(t)dt + dL_1,
du_2(t) = µD_22 u_2(t)dt + dL_2,
...
du_k(t) = µD_kk u_k(t)dt + dL_k,
.... (6.37)

• Notice:

– A = 0: there is no Gaussian part in the Levy process (pure jump), therefore (1/2)A(∂²/∂x²)(σ²(x, t)p(x, t)) = 0;

– σ(x, t) = 1: the noise in the SODE system is additive, so Σ_{k=0}^∞ ((−y)^k/k!)(∂^k/∂x^k)(σ^k(x, t)p(x, t)) = p(x − y, t);

– f(x, t) is a linear operator in our SODE system;

– b = 0: we are dealing with pure jump processes without a drift, therefore b(∂/∂x)(σ(x, t)p(x, t)) = 0;

– the Levy measure in our paper has the aforementioned symmetry ν(x) = ν(−x), therefore ∫_{R^d\{0}} I_{(−1,1)^d}(y) y (∂/∂x)(σ(x, t)p(x, t)) ν(dy) = (∂p(x, t)/∂x) ∫_{R^d\{0}} I_{(−1,1)^d}(y) y ν(dy) = 0.

• Therefore it reduces to the same FP equation we had in our paper:

∂p/∂t = −(∂/∂x)(f(x, t)p(x, t)) + ∫_{R^d\{0}} [p(x − y, t) − p(x, t)] ν(dy). (6.38)

Since ν(y) = ν(−y), it can be written as

∂p/∂t = −(∂/∂x)(f(x, t)p(x, t)) + ∫_{R^d\{0}} [p(x + y, t) − p(x, t)] ν(dy) (6.39)

as well.

The generalized FP equation for the joint PDF of the solutions of the SODE system (6.7) is:

∂P(~u, t)/∂t = −∇ · (~C(~u, t)P(~u, t)) + ∫_{R^d} ν(d~z)[P(~u + ~z, t) − P(~u, t)]. (6.40)

When the Levy measure of ~L(t) in Equation (6.7) is given by Equation (6.9), the joint PDF of the solutions ~u(t) ∈ R^d of the SODE system satisfies the following tempered fractional PDE (TFPDE): when 0 < α < 1,

∂P(~u, t)/∂t = −Σ_{i=1}^d [µD_ii(P + u_i ∂P/∂u_i)] − (cΓ(1 − α)/α) ∫_{S^{d−1}} (Γ(d/2) dσ(~θ)/(2π^{d/2})) [rD^{α,λ}_{+∞} P(~u + r~θ, t)]|_{r=0}. (6.41)

Γ(x) is the Gamma function and ~θ is a unit vector on the unit sphere S^{d−1}. xD^{α,λ}_{+∞} is the right Riemann-Liouville tempered fractional derivative [11, 122]:

xD^{α,λ}_{+∞} g(x) = e^{λx} xD^α_{+∞}[e^{−λx} g(x)] − λ^α g(x), for 0 < α < 1; (6.42)
xD^{α,λ}_{+∞} g(x) = e^{λx} xD^α_{+∞}[e^{−λx} g(x)] − λ^α g(x) + αλ^{α−1} g′(x), for 1 < α < 2. (6.43)

xD^α_{+∞} is the right Riemann-Liouville fractional derivative [11, 122]: for α ∈ (n − 1, n) and g(x) (n − 1)-times continuously differentiable on (−∞, +∞),

xD^α_{+∞} g(x) = ((−1)^n/Γ(n − α)) (d^n/dx^n) ∫_x^{+∞} g(ξ)/(ξ − x)^{α−n+1} dξ. (6.44)

Equation (6.40) for the joint PDF P(~u, t) is a PDE on a d-dimensional domain (which can be high-dimensional); however, as far as the first and second moments of Equation (6.1) are concerned, we only need the marginal distributions p_i(u_i, t) and p_ij(u_i, u_j, t) for i, j = 1, ..., d. The equations satisfied by p_i(u_i, t) and p_ij(u_i, u_j, t) can be derived from the unanchored analysis of variance (ANOVA) decomposition [23, 55, 67]:

P(~u, t) ≈ P_0(t) + Σ_{1≤j_1≤d} P_{j_1}(u_{j_1}, t) + Σ_{1≤j_1<j_2≤d} P_{j_1,j_2}(u_{j_1}, u_{j_2}, t) + ... + Σ_{1≤j_1<j_2<...<j_κ≤d} P_{j_1,j_2,...,j_κ}(u_{j_1}, u_{j_2}, ..., u_{j_κ}, t), (6.45)

where⁵ [187]

P_0(t) = ∫_{R^d} P(~u, t) d~u, (6.46)

P_i(u_i, t) = ∫_{R^{d−1}} du_1...du_{i−1}du_{i+1}...du_d P(~u, t) − P_0(t) = p_i(u_i, t) − P_0(t), (6.47)

and

P_ij(u_i, u_j, t) = ∫_{R^{d−2}} du_1...du_{i−1}du_{i+1}...du_{j−1}du_{j+1}...du_d P(~u, t) − P_i(u_i, t) − P_j(u_j, t) − P_0(t) = p_ij(u_i, u_j, t) − p_i(u_i, t) − p_j(u_j, t) + P_0(t). (6.48)

κ is called the truncation or effective dimension [187]. By the linearity of Equation (6.40) and the ANOVA decomposition in Equation (6.45), the marginal distributions p_i(u_i, t) and p_ij(u_i, u_j, t) (when 0 < α < 1) satisfy:

∂p_i(u_i, t)/∂t = −(Σ_{k=1}^d µD_kk) p_i(u_i, t) − µD_ii u_i ∂p_i(u_i, t)/∂u_i − (cΓ(1 − α)/α)(Γ(d/2)/(2π^{d/2}))(2π^{(d−1)/2}/Γ((d − 1)/2)) ∫_0^π dφ sin^{d−2}(φ) [rD^{α,λ}_{+∞} p_i(u_i + r cos(φ), t)]|_{r=0}, (6.49)

⁵We choose the measure in the unanchored ANOVA to be the uniform (Lebesgue) measure.


and

∂p_ij(u_i, u_j, t)/∂t = −(Σ_{k=1}^d µD_kk) p_ij − µD_ii u_i ∂p_ij/∂u_i − µD_jj u_j ∂p_ij/∂u_j − (cΓ(1 − α)/α)(Γ(d/2)/(2π^{d/2}))(2π^{(d−2)/2}/Γ((d − 2)/2)) ∫_0^π dφ_1 ∫_0^π dφ_2 sin^{d−2}(φ_1) sin^{d−3}(φ_2) [rD^{α,λ}_{+∞} p_ij(u_i + r cos φ_1, u_j + r sin φ_1 cos φ_2, t)]|_{r=0}. (6.50)

For 1 < α < 2, replace the coefficient −cΓ(1 − α)/α in Equations (6.41), (6.49) and (6.50) by +cΓ(2 − α)/(α(α − 1)).

Here we discuss how to reduce the (d − 1)-dimensional integration in Equation (6.41) to lower-dimensional integrations as in Equations (6.49) and (6.50). The d-dimensional spherical coordinate system is described by (~x ∈ R^d and ~x = r~θ)

x_1 = r cos(φ_1), x_2 = r sin(φ_1) cos(φ_2), x_3 = r sin(φ_1) sin(φ_2) cos(φ_3), ..., x_{d−1} = r sin(φ_1)...sin(φ_{d−2}) cos(φ_{d−1}), x_d = r sin(φ_1)...sin(φ_{d−2}) sin(φ_{d−1}), (6.51)

where φ_1, ..., φ_{d−2} ∈ [0, π] and φ_{d−1} ∈ [0, 2π].

By plugging the ANOVA decomposition (6.45) into the generalized FP Equation (6.41) and using the d-dimensional spherical coordinate system (6.51), we have, for the marginal distributions p_i(u_i, t) (for 0 < α < 1):

∂p_i(u_i, t)/∂t = −(Σ_{k=1}^d µD_kk) p_i(u_i, t) − µD_ii u_i ∂p_i(u_i, t)/∂u_i + ∫_{S^{d−1}} (Γ(d/2) d~θ/(2π^{d/2})) ∫_0^{+∞} (c e^{−λr}/r^{1+α}) dr [p_i(u_i + r cos(φ_1), t) − p_i(u_i, t)]

= −(Σ_{k=1}^d µD_kk) p_i(u_i, t) − µD_ii u_i ∂p_i(u_i, t)/∂u_i + ∫_0^π dφ_1 ∫_0^π dφ_2 ... ∫_0^π dφ_{d−2} ∫_0^{2π} dφ_{d−1} ∫_0^{+∞} dr [sin^{d−2}(φ_1) sin^{d−3}(φ_2)...sin(φ_{d−2}) (Γ(d/2)/(2π^{d/2})) (c e^{−λr}/r^{1+α}) (p_i(u_i + r cos(φ_1), t) − p_i(u_i, t))]. (6.52)


By integrating out φ2, ..., φd−1, we obtain Equation (6.49).

Similarly for p_ij(u_i, u_j, t), we have from Equation (6.41) that (for 0 < α < 1):

∂p_ij(u_i, u_j, t)/∂t = −(Σ_{k=1}^d µD_kk) p_ij(u_i, u_j, t) − µD_ii u_i ∂p_ij(u_i, u_j, t)/∂u_i − µD_jj u_j ∂p_ij(u_i, u_j, t)/∂u_j + ∫_{S^{d−1}} (Γ(d/2) d~θ/(2π^{d/2})) ∫_0^{+∞} (c e^{−λr}/r^{1+α}) dr [p_ij(u_i + r cos(φ_1), u_j + r sin(φ_1) cos(φ_2), t) − p_ij(u_i, u_j, t)]

= −(Σ_{k=1}^d µD_kk) p_ij(u_i, u_j, t) − µD_ii u_i ∂p_ij(u_i, u_j, t)/∂u_i − µD_jj u_j ∂p_ij(u_i, u_j, t)/∂u_j + ∫_0^π dφ_1 ∫_0^π dφ_2 ... ∫_0^π dφ_{d−2} ∫_0^{2π} dφ_{d−1} ∫_0^{+∞} dr [sin^{d−2}(φ_1) sin^{d−3}(φ_2)...sin(φ_{d−2}) (Γ(d/2)/(2π^{d/2})) (c e^{−λr}/r^{1+α}) (p_ij(u_i + r cos(φ_1), u_j + r sin(φ_1) cos(φ_2), t) − p_ij(u_i, u_j, t))]. (6.53)

By integrating out φ_3, ..., φ_{d−1}, we obtain Equation (6.50).

We use a second-order finite difference (FD) scheme [105] to compute the tempered fractional derivative xD^{α,λ}_{+∞} of a function g(x), with parameters (γ_1, γ_2, γ_3), when 0 < α < 1:

xD^{α,λ}_{+∞} g(x) = (1/h^α){ γ_1 Σ_{k=0}^{[(1−x)/h]+1} w_k^{(α)} e^{−(k−1)hλ} g(x + (k − 1)h) + γ_2 Σ_{k=0}^{[(1−x)/h]} w_k^{(α)} e^{−khλ} g(x + kh) + γ_3 Σ_{k=0}^{[(1−x)/h]−1} w_k^{(α)} e^{−(k+1)hλ} g(x + (k + 1)h) − (γ_1 e^{hλ} + γ_2 + γ_3 e^{−hλ})(1 − e^{−hλ})^α g(x) } + O(h²). (6.54)

[x] is the floor function and h is the grid size of the FD scheme. w_k^{(α)} = (α choose k)(−1)^k = Γ(k − α)/(Γ(−α)Γ(k + 1)) can be computed recursively via w_0^{(α)} = 1, w_1^{(α)} = −α, w_{k+1}^{(α)} = ((k − α)/(k + 1)) w_k^{(α)}. The


parameters (γ_1, γ_2, γ_3) shall satisfy⁶

γ_1 + γ_2 + γ_3 = 1, γ_1 − γ_3 = α/2. (6.55)

If the Levy measure is given by Equations (6.12) to (6.27) (when d = 2), with the Clayton family of copulas describing the dependence structure between the components of ~L(t), we calculate the Levy measure on each corner separately, as in Equation (6.16), to directly compute the joint PDF P(~u, t) from Equation (6.40).

In this paper, we simulate the moment statistics of the solution of Equation (6.1) by three methods, as shown in Figure 6.9: MC (probabilistic), PCM (probabilistic), and the generalized FP equation combined with the unanchored ANOVA decomposition (deterministic).

Figure 6.9: An illustration of the three methods used in this paper to solve the moment statisticsof Equation (6.1).

For a general SPDE driven by a multi-dimensional Levy process, we advocate the UQ procedure presented in Figure 6.10.

⁶The choice of the parameters (γ_1, γ_2, γ_3) affects the accuracy of this FD scheme.


Figure 6.10: An illustration of the advocated UQ procedure for a general SPDE driven by a multi-dimensional Levy process.

6.7 Heat equation driven by bivariate Levy jump process in LePage's representation

In this section, we solve the heat equation (6.1) with a bivariate pure jump process with the Levy measure given by Equation (6.9) and the series representation given in Equation (6.10). Let us take the stochastic forcing in Equation (6.1) to be f_1(x)dL_1(t; ω) + f_2(x)dL_2(t; ω) (d = 2) and the initial condition to be u_0(x) = f_1(x) + f_2(x).

6.7.1 Exact moments

The mean of the solution is

E[u(t, x; ω)] = Σ_{i=1}^2 E[u_i(t; ω)] f_i(x) = e^{µD_11 t} f_1(x) + e^{µD_22 t} f_2(x). (6.56)


By Ito's isometry, the second moment of the solution is

E[u²(t, x; ω)] = E[u_1²] f_1²(x) + E[u_2²] f_2²(x) + 2E[u_1u_2] f_1(x) f_2(x)
= [e^{2µD_11 t} + (e^{2µD_11 t} − 1)(∫_{R\{0}} x² ν_x(dx))/(2µD_11)] f_1²(x) + [e^{2µD_22 t} + (e^{2µD_22 t} − 1)(∫_{R\{0}} y² ν_y(dy))/(2µD_22)] f_2²(x) + 2e^{µ(D_11+D_22)t} f_1(x) f_2(x), (6.57)

where

∫_{R\{0}} x² ν_x(dx) = ∫_{R\{0}} y² ν_y(dy) = 2 ∫_0^{+∞} (c/π) x^{1−α} dx ∫_0^{π/2} dθ e^{−λx/cos(θ)} (cos(θ))^α (6.58)

is integrated numerically by the trapezoid rule or quadrature rules.
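A minimal sketch evaluating the double integral in Equation (6.58) by the composite trapezoid rule (our own code; the function name, the truncation level `r_max`, and the grid size are assumptions):

```python
import numpy as np

def trap(y, x):
    # composite trapezoid rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def second_moment_integral(c, alpha, lam, r_max=40.0, n=2001):
    """Evaluate 2 * int_0^inf (c/pi) x^{1-alpha} dx
    * int_0^{pi/2} e^{-lam x / cos th} cos^alpha th dth,
    truncating the x-integral at r_max (the tail decays like e^{-lam x})."""
    x = np.linspace(1e-8, r_max, n)                   # avoid x = 0
    th = np.linspace(0.0, np.pi / 2.0 - 1e-8, n)      # avoid cos = 0 at pi/2
    X, TH = np.meshgrid(x, th, indexing="ij")
    integrand = X ** (1.0 - alpha) * np.exp(-lam * X / np.cos(TH)) * np.cos(TH) ** alpha
    inner = trap(integrand, th)                       # theta integral for each x
    return 2.0 * (c / np.pi) * trap(inner, x)

print(second_moment_integral(c=1.0, alpha=0.5, lam=5.0))
```

Sharper quadrature rules (e.g., Gauss-Laguerre in x) would converge faster, but the trapezoid rule suffices here.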

In Equations (6.56) and (6.57), u_1 and u_2 solve the linear system of SODEs (6.37) in two dimensions:

du_1(t) = µD_11 u_1(t)dt + dL_1, u_1(0) = 1,
du_2(t) = µD_22 u_2(t)dt + dL_2, u_2(0) = 1. (6.59)

We evaluate the performance of the numerical methods at different noise-to-signal ratios (NSR) of the solution, defined as:

NSR = || √(Var[u(t, x)]) ||_{L∞([0,1])} / || E[u(t, x)] ||_{L∞([0,1])}. (6.60)

We define the L2 error norms of the mean and the second moment of the solution to be

l2u1(t) = ||E[u_ex(x, t; ω)] − E[u_num(x, t; ω)]||_{L2([0,1])} / ||E[u_ex(x, t; ω)]||_{L2([0,1])}, (6.61)

l2u2(t) = ||E[u_ex²(x, t; ω)] − E[u_num²(x, t; ω)]||_{L2([0,1])} / ||E[u_ex²(x, t; ω)]||_{L2([0,1])}, (6.62)

where u_ex and u_num stand for the exact and the numerical solutions.

6.7.2 Simulating the moment statistics by PCM/S

We calculate the second moment of the solution of Equation (6.1), driven by a bivariate pure jump process with the series representation in Equation (6.10), by PCM [184, 192] (PCM/S). PCM is an integration method on the sample space based on Gauss-quadrature rules [59]. If the solution v(Y_1, Y_2, ..., Y_d) is a function of d independent RVs (Y_1, Y_2, ..., Y_d), its m-th moment is approximated by

E[v^m(Y_1, Y_2, ..., Y_d)] ≈ Σ_{i_1=1}^{q_1} ... Σ_{i_d=1}^{q_d} v^m(y_{i_1}^1, y_{i_2}^2, ..., y_{i_d}^d) Ω_{i_1}^1...Ω_{i_d}^d, (6.63)

where Ω_{i_j}^j and y_{i_j}^j are the i_j-th Gauss-quadrature weight and collocation point for Y_j, respectively. The solutions are evaluated at (Π_{i=1}^d q_i) deterministic sample points (y_{i_1}^1, ..., y_{i_d}^d) in the d-dimensional random space. Therefore, with the series representation given in Equation (6.10), the second moment for Equation (6.1) can be written as

E[u²] ≈ Σ_{i=1,2} (e^{2µD_ii t} + Σ_{j=1}^Q (1/(8µD_ii T))(e^{2µD_ii t} − e^{2µD_ii(t−T)}) E[((αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α})²]) f_i²(x) + 2f_1(x)f_2(x)e^{µ(D_11+D_22)t}, t ∈ [0, T], (6.64)
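To illustrate the tensor-product rule (6.63) itself, here is a toy example of ours with independent standard Gaussians (not the TS collocation points of the thesis; the helper name `pcm_moment` is an assumption):

```python
import numpy as np

def pcm_moment(v, m, nodes_weights):
    """Tensor-product PCM estimate (6.63) of E[v^m(Y1,...,Yd)] for
    independent RVs, given per-dimension (nodes, weights) pairs whose
    weights sum to 1 (i.e., probabilists' quadrature)."""
    grids = np.meshgrid(*[nw[0] for nw in nodes_weights], indexing="ij")
    W = np.ones_like(grids[0])
    for axis, (_, w) in enumerate(nodes_weights):
        shape = [1] * len(nodes_weights)
        shape[axis] = -1
        W = W * w.reshape(shape)        # tensor product of the 1-D weights
    return np.sum(v(*grids) ** m * W)

# example: Y1, Y2 ~ N(0,1) independent, v = Y1 + Y2, so E[v^2] = 2
x, w = np.polynomial.hermite_e.hermegauss(5)   # probabilists' Hermite rule
w = w / w.sum()                                # normalize weights to 1
print(pcm_moment(lambda y1, y2: y1 + y2, 2, [(x, w), (x, w)]))
```

With 5 nodes per dimension the rule is exact for this low-degree polynomial, so the printed value is 2 up to round-off.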

where Q is the number of truncations in the series representation. In Chapter 5 we calculated the probability distribution function (PDF) of (αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}. We generate q collocation points for each j ∈ {1, 2, ..., Q} (qQ points in total) by generating quadrature points based on the moments [139, 192]. We also simulate


Figure 6.11: PCM/S (probabilistic) vs. MC/S (probabilistic): error l2u2(t) of the solution of Equation (6.1) with a bivariate pure jump Levy process with the Levy measure in radial decomposition given by Equation (6.9), versus the number of samples s obtained by MC/S and PCM/S (left) and versus the number of collocation points per RV obtained by PCM/S with a fixed number of truncations Q in Equation (6.10) (right). t = 1, c = 1, α = 0.5, λ = 5, µ = 0.01, NSR = 16.0% (left and right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10⁻³ (right).

Equation (6.59) by MC with series representation (MC/S) in Equation (6.10), by the first-order Euler scheme in time:

\vec{u}(t_{n+1}) - \vec{u}(t_n) = \vec{C}(\vec{u}(t_n)) \Delta t + (\vec{L}(t_{n+1}) - \vec{L}(t_n)). (6.65)
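The Euler update above can be sketched in a few lines. This is a minimal illustration, not the thesis code: the drift C(u) = -u and the exponential jump-size law below are placeholder choices, and the Lévy increment over each step is assembled from a finite list of jump times V_j and sizes J_j as in a truncated series representation.

```python
import numpy as np

def euler_jump_path(u0, drift, jump_times, jump_sizes, T=1.0, dt=1e-3):
    """First-order Euler scheme u_{n+1} = u_n + C(u_n)*dt + dL_n, where the
    increment dL_n collects the jumps whose arrival times fall in (t_n, t_{n+1}]."""
    n_steps = int(round(T / dt))
    u = np.array(u0, dtype=float)
    path = [u.copy()]
    for n in range(n_steps):
        t0, t1 = n * dt, (n + 1) * dt
        mask = (jump_times > t0) & (jump_times <= t1)
        dL = jump_sizes[:, mask].sum(axis=1) if mask.any() else 0.0
        u = u + drift(u) * dt + dL
        path.append(u.copy())
    return np.array(path)

# toy usage: linear drift C(u) = -u and Q = 5 jumps per component
rng = np.random.default_rng(0)
Q = 5
V = rng.uniform(0.0, 1.0, size=Q)        # jump arrival times V_j
J = rng.exponential(0.1, size=(2, Q))    # jump sizes J_j (placeholder law)
path = euler_jump_path([1.0, 1.0], lambda u: -u, V, J)
```

With the jumps switched off the scheme reduces to plain Euler for du/dt = -u, which is a convenient sanity check.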

In Figure 6.11, we first investigate the convergence of E[u^2] in PCM/S with respect to the number of quadrature points q per RV, for fixed numbers of truncations Q (left), by computing Equation (6.64). The q-convergence is more effective when Q is larger, and it slows down when q > 2. We then compare, in Figure 6.11 (right), the convergence of E[u^2] with respect to the sample size s between PCM/S and MC/S. In PCM/S, we count the number of samples of RVs as s = qQ in Equation (6.64). When q = 2, to achieve an error of 10^{-4}, MC/S with the first-order Euler scheme costs about 10^4 times more than PCM/S.
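The tensor-product rule of Equation (6.63) is simple to sketch. The snippet below is an illustrative stand-in: it uses Gauss-Hermite rules for Gaussian RVs, whereas the thesis builds rules for the jump-size RVs from their moments; the test function v(Y1, Y2) = Y1 + Y2 is chosen only because E[v^2] = 2 is known.

```python
import itertools
import numpy as np

def pcm_moment(v, rules, m=2):
    """Tensor-product collocation estimate of E[v(Y1,...,Yd)^m] as in Eq. (6.63).
    rules: list of (points, weights) per RV, each weight vector summing to 1."""
    total = 0.0
    for idx in itertools.product(*[range(len(p)) for p, _ in rules]):
        y = [rules[k][0][i] for k, i in enumerate(idx)]
        w = np.prod([rules[k][1][i] for k, i in enumerate(idx)])
        total += w * v(*y) ** m
    return total

# usage: two independent standard normal RVs, v = Y1 + Y2, so E[v^2] = 2
pts, wts = np.polynomial.hermite_e.hermegauss(5)  # probabilists' Hermite rule
wts = wts / wts.sum()                             # normalize to a probability rule
est = pcm_moment(lambda y1, y2: y1 + y2, [(pts, wts), (pts, wts)], m=2)
# est ≈ 2.0 (the 5-point rule is exact for this quadratic integrand)
```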

Figure 6.12 shows the moment statistics from PCM/S versus those from the exact solution.


Figure 6.12: PCM/series representation vs. exact solution at T = 1: mean (left), second moment (middle), and variance (right) of the solution. Parameters: 2D TαS process with α = 0.5, c = 1, λ = 5; diffusion µ = 0.01; noise/signal (variance/mean) ratio ≈ 4% at T = 1. PCM/series: Q = 40; the relative tolerance for E[min(Aj, Bj)] is 1e−8; CPU time ≈ 40 s.


Figure 6.13: PCM/series d-convergence and Q-convergence at T = 1: l2 error of the second moment versus the number of truncations Q in the series representation (left; for d = 1 the CPU time for Q = 30 is 16 s, for d = 2 it is 38 s) and versus the number of quadrature points d per RV (right; Q = 2, 10, 20; d = 2 is enough). Parameters: 2D TαS process with α = 0.5, c = 1, λ = 5; diffusion µ = 0.01; noise/signal (variance/mean) ratio ≈ 4% at t = 1; the relative tolerance for E[min(Aj, Bj)] is 1e−8. The l2u2 error is defined as l2u2(t) = ||E_{ex}[u^2(x,t;\omega)] - E_{num}[u^2(x,t;\omega)]||_{L^2([0,2])} / ||E_{ex}[u^2(x,t;\omega)]||_{L^2([0,2])}.

Figure 6.13 shows the convergence of the moment statistics versus the number of truncations Q in the series representation and versus the number of quadrature points per RV (denoted d in the figure). In Figures 6.14 and 6.15, we plot the moment statistics evaluated from MC/S versus those from the exact solutions.

Figure 6.14: MC vs. exact at T = 1: mean (left) and variance (right) of the solution from the exact solution and from MC (Q = 40, dt = 1e−3, s = 1e+6). Parameters: 2D TαS process with α = 0.5, c = 1, λ = 5; diffusion µ = 0.01. The moment statistics are evaluated numerically with an integration relative tolerance of 10^{-8}; with this set of parameters, the noise/signal (variance/mean) ratio is 4% at T = 1.


Figure 6.15: MC vs. exact at T = 2: mean (left) and variance (right) of the solution from the exact solution and from MC (Q = 40, dt = 1e−3, s = 1e+6). Parameters: 2D TαS process with α = 0.5, c = 1, λ = 5; diffusion µ = 0.01. The moment statistics are evaluated numerically with an integration relative tolerance of 10^{-8}; with this set of parameters, the noise/signal (variance/mean) ratio is 10% at T = 2.

6.7.3 Simulating the joint PDF P (u1, u2, t) by the generalized

FP equation

We solve for the joint PDF P(u1, u2, t) of u1 and u2 in Equation (6.59) from the generalized FP Equation (6.41) (when d = 2) for \vec{L}(t) with a Levy measure given by Equation (6.9). We solve Equation (6.41) (0 < α < 1) by the second-order Runge-Kutta method (RK2) with time step Δt and multi-element Gauss-Lobatto-Legendre (GLL) quadrature points in space. We choose γ1 = 0.5, γ2 = 0.25, γ3 = 0.25 for the second-order FD scheme in Equation (6.54). We constructed a multi-grid (in space) solver where the joint PDF P is solved on a Cartesian tensor-product grid A (we take the domain to be [−0.5, 2.5] in both u1 and u2 and take 20 elements uniformly distributed along each axis^7); at each time step, for each fixed \vec{u}, the term

-\frac{c\alpha}{\Gamma(1-\alpha)} \int_{S^{d-1}} \frac{\Gamma(d/2)\, d\sigma(\vec{\theta})}{2\pi^{d/2}} \left[ {}_{r}D^{\alpha,\lambda}_{+\infty} P(\vec{u} + r\vec{\theta}, t) \right] \Big|_{r=0}

is evaluated on a more refined grid B by interpolating the values of P on grid B from the grid A (here we take grid B to be 50 equidistant points on (0, 0.5] in r and 40 equidistant points on [0, 2π) along the angle θ; the integration along θ uses the trapezoid rule). The initial condition of Equation (6.41) is obtained by interpolating the MC histogram at t0 onto the query grid A.

^7 The domain for (u1, u2) is large enough so that P(u1, u2) < 10^{-6} on the boundary.
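The RK2 time marching used for the FP equation can be sketched generically. Below is a minimal sketch of a Heun-type second-order Runge-Kutta step for dP/dt = rhs(P); the pure-decay right-hand side is only a toy stand-in for the discretized FP operator, chosen so the result can be checked against exp(-t).

```python
import numpy as np

def rk2_step(P, rhs, dt):
    """One Heun (second-order Runge-Kutta) step for dP/dt = rhs(P)."""
    k1 = rhs(P)
    k2 = rhs(P + dt * k1)
    return P + 0.5 * dt * (k1 + k2)

# toy stand-in for the discretized FP operator: pure decay rhs(P) = -P
P = np.ones(4)
dt = 4e-3
for _ in range(int(round(1.0 / dt))):   # integrate to t = 1
    P = rk2_step(P, lambda x: -x, dt)
# every entry of P is now exp(-1) up to an O(dt^2) global error
```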

Figure 6.16: FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.41) (3D contour plot), joint histogram by MC/S (2D contour plot on the x-y plane), and horizontal and vertical slices (subfigures) at the peaks of the density surfaces from the FP equation and MC/S. Final time is t = 1 (left, NSR = 16.0%) and t = 1.5 (right). c = 1, α = 0.5, λ = 5, µ = 0.01. In MC/S: first-order Euler scheme with time step Δt = 1 × 10^{-3}, 200 bins in both the u1 and u2 directions, Q = 40, sample size s = 10^6. In FP: initial condition given by MC data at t0 = 0.5, RK2 scheme with time step Δt = 4 × 10^{-3}.

In Figure 6.16, we compute the joint PDF P(u1, u2, t) at final times t = 1 (left) and t = 1.5 (right) of the SODE system in Equation (6.59) from Equation (6.41), with the initial condition obtained from the MC/S histogram at t0 = 0.5. We also plot the MC/S histogram of P(u1, u2, t). First, comparing the densities at t = 1 and t = 1.5 in Figure 6.16, the peaks of the density surfaces drift towards smaller values of u1 and u2 because of the \vec{C}(\vec{u}, t) term in Equation (6.37), i.e., the diffusion term in Equation (6.1). Second, the density surfaces diffuse over time because of the jump term in Equation (6.1) and Equation (6.37). Third, we show the agreement between the joint PDF computed from the FP Equation (6.41) and that from MC by plotting the horizontal and vertical slices at the peaks of the two density surfaces. They agree well both at t = 1 and t = 1.5. This shows the reliability and accuracy of our computation of the TFPDE in Equation (6.41) over time.

6.7.4 Simulating moment statistics by TFPDE and PCM/S

We compute E[u^2(t, x; \omega)] by the joint PDF P(u1, u2, t) from the TFPDE (6.41) for Equation (6.59):

E[u^2(t,x;\omega)] = \int_{R^2} du_1\, du_2\, P(u_1, u_2, t) \left[ u_1^2 f_1^2(x) + u_2^2 f_2^2(x) + 2 u_1 u_2 f_1(x) f_2(x) \right]. (6.66)
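On a tensor grid, the integral in Equation (6.66) reduces to a weighted sum. The sketch below is illustrative only: it evaluates the same integrand with a simple Riemann sum, and the independent standard-normal toy density with f1 = f2 = 1 is an assumption chosen so that the answer, E[(u1 + u2)^2] = 2, is known.

```python
import numpy as np

def second_moment(u1, u2, P, f1x, f2x):
    """Riemann-sum evaluation of Eq. (6.66) at one spatial point x:
    E[u^2] = ∬ P(u1,u2) [u1^2 f1^2 + u2^2 f2^2 + 2 u1 u2 f1 f2] du1 du2."""
    U1, U2 = np.meshgrid(u1, u2, indexing="ij")
    integrand = P * (U1**2 * f1x**2 + U2**2 * f2x**2 + 2.0 * U1 * U2 * f1x * f2x)
    du1, du2 = u1[1] - u1[0], u2[1] - u2[0]
    return integrand.sum() * du1 * du2

# toy check: independent standard normals, f1 = f2 = 1, so E[(u1+u2)^2] = 2
u = np.linspace(-6.0, 6.0, 401)
g = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
P = np.outer(g, g)
val = second_moment(u, u, P, 1.0, 1.0)
```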

We approximate the initial condition P(u_1, u_2, t = 0) = \delta((u_1, u_2) - (u_1(0), u_2(0))) by a delta sequence [4] with Gaussian functions:

\delta^G_k = \frac{k}{\pi} \exp(-k(u_1 - u_1(0))^2) \exp(-k(u_2 - u_2(0))^2), \qquad \lim_{k \to +\infty} \int_{R^2} \delta^G_k(\vec{x}) g(\vec{x}) d\vec{x} = g(u_1(0), u_2(0)). (6.67)
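The defining property of the delta sequence in Equation (6.67) is easy to check numerically. A minimal sketch, with the kernel centered at the origin and g(u1, u2) = cos(u1)cos(u2) as an assumed smooth test function:

```python
import numpy as np

def delta_gauss(k, u1, u2, c1=0.0, c2=0.0):
    """Gaussian delta sequence of Eq. (6.67): integrates to 1, peaks at (c1, c2)."""
    return (k / np.pi) * np.exp(-k * (u1 - c1)**2) * np.exp(-k * (u2 - c2)**2)

u = np.linspace(-3.0, 3.0, 601)
du = u[1] - u[0]
U1, U2 = np.meshgrid(u, u, indexing="ij")
g = np.cos(U1) * np.cos(U2)          # smooth test function, g(0, 0) = 1

vals = []
for k in (10.0, 100.0, 1000.0):
    integral = np.sum(delta_gauss(k, U1, U2) * g) * du * du
    vals.append(integral)            # approaches g(0, 0) = 1 as k grows
```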

Figure 6.17: TFPDE (deterministic) vs. PCM/S (probabilistic): error l2u2(t) of the solution for Equation (6.1) with a bivariate pure jump Levy process with the Levy measure in radial decomposition given by Equation (6.9), obtained by PCM/S in Equation (6.64) (stochastic approach) and by the TFPDE in Equation (6.41) (deterministic approach), versus time. NSR ≤ 4.8% (left); NSR ≤ 6.4% (right). α = 0.5, λ = 5, µ = 0.001 (left and right); c = 0.1 (left); c = 1 (right). In TFPDE: initial condition given by δ^G_{2000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.

We observe three things from Figure 6.17: 1) comparing the l2u2(t) curves from PCM/S and from the TFPDE, the error accumulates more slowly with the TFPDE than with PCM/S; 2) comparing the l2u2 errors from PCM/S and the TFPDE at t = 0.1, the error of the TFPDE method comes mainly from the approximation of the initial condition (the Gaussian kernel in Equation (6.67) approximating the delta function), not from the solver of Equation (6.41); 3) comparing the left and right plots in Figure 6.17, when the jump intensity c is 10 times stronger, the l2u2 error from PCM is 10^2 times larger, but the l2u2 error from the TFPDE is only 10 times larger.

6.8 Heat equation driven by bivariate TS Clayton

Levy jump process

In this section, we solve the heat equation (6.1) with a bivariate TS Clayton Levy

process with a Levy measure given in Section 1.2.2. The dependence structure

between components of \vec{L}(t) is described by the Clayton Levy copula in Equations (6.12) and (6.13) with the correlation length τ. \vec{L}(t) has two series representations

in Equations (6.30) and (6.32). Let us take the stochastic force in Equation (6.1) to

be f1(x)dL1(t;ω) + f2(x)dL2(t;ω) (d = 2) and the initial condition to be u0(x) =

f1(x) + f2(x).

6.8.1 Exact moments

The mean of the solution is

E[u(t,x;\omega)] = \sum_{i=1}^{2} E[u_i(t;\omega)] f_i(x) = e^{\mu D_{11} t} f_1(x) + e^{\mu D_{22} t} f_2(x). (6.68)


Let us briefly denote the series representation as (taking [0, 1] as the time interval for the series representation of the Levy process, i.e., T = 1)

L_1^{++}(s) \approx \sum_{j=1}^{Q} J^{++}_{1j} I_{[0,s]}(V_j), (6.69)

and

L_2^{++}(s) \approx \sum_{j=1}^{Q} J^{++}_{2j} I_{[0,s]}(V_j), (6.70)

where J^{++}_{1j} and J^{++}_{2j} are jump sizes. Therefore we can write L_1(s) as

L_1(s) \approx \sum_{j=1}^{Q} J^{++}_{1j} I_{[0,s]}(V^{++}_j) - \sum_{j=1}^{Q} J^{-+}_{1j} I_{[0,s]}(V^{-+}_j) - \sum_{j=1}^{Q} J^{--}_{1j} I_{[0,s]}(V^{--}_j) + \sum_{j=1}^{Q} J^{+-}_{1j} I_{[0,s]}(V^{+-}_j). (6.71)

We define (and similarly for the +−, −+, and −− parts)

I_1^{++} = \int_0^t e^{\mu D_{11}(t-\tau)} dL_1^{++}(\tau) \approx \sum_{j=1}^{Q} J^{++}_{1j} e^{\mu D_{11}(t - V^{++}_j)} I_{[0,s]}(V^{++}_j). (6.72)

By the symmetry of the two components of the process (L_1, L_2) and the symmetry of the Levy copula F, we have

E\left[ \left( \int_0^t e^{\mu D_{11}(t-\tau)} dL_1(\tau) \right)^2 \right] = 4 E[(I_1^{++})^2] - 4 (E[I_1^{++}])^2, (6.73)

where

E[(I_1^{++})^2] = \sum_{j=1}^{Q} E[(J^{++}_{1j})^2] E[e^{2\mu D_{11}(t - V^{++}_j)}] = \left( \frac{e^{2\mu D_{11} t} - 1}{2\mu D_{11}} \right) \sum_{j=1}^{Q} E[(J^{++}_{1j})^2] (6.74)


and

E[I_1^{++}] = \sum_{j=1}^{Q} E[J^{++}_{1j}] E[e^{\mu D_{11}(t - V^{++}_j)}] = \left( \frac{e^{\mu D_{11} t} - 1}{\mu D_{11}} \right) \sum_{j=1}^{Q} E[J^{++}_{1j}]. (6.75)

Therefore

E[u_1^2(t)] = u_1^2(0) e^{2\mu D_{11} t} + 2 \left( \frac{e^{2\mu D_{11} t} - 1}{\mu D_{11}} \right) \left( \sum_{j=1}^{Q} E[(J^{++}_{1j})^2] \right) - 4 \left( \frac{e^{\mu D_{11} t} - 1}{\mu D_{11}} \right)^2 \left( \sum_{j=1}^{Q} E[J^{++}_{1j}] \right)^2, (6.76)

where J^{++}_{1j} = (\frac{\alpha \Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}, for which we have the explicit form of the density.

Similarly,

E[u_2^2(t)] = u_2^2(0) e^{2\mu D_{22} t} + 2 \left( \frac{e^{2\mu D_{22} t} - 1}{\mu D_{22}} \right) \left( \sum_{j=1}^{Q} E[(J^{++}_{2j})^2] \right) - 4 \left( \frac{e^{\mu D_{22} t} - 1}{\mu D_{22}} \right)^2 \left( \sum_{j=1}^{Q} E[J^{++}_{2j}] \right)^2, (6.77)

where J^{++}_{2j} = U_2^{(-1)}\left( F^{-1}\left( W_j \,\middle|\, U_1\left( (\frac{\alpha \Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha} \right) \right) \right), which can be computed numerically. (Because of the symmetries, we only deal with the two-dimensional TαS Clayton subordinator in the ++ corner of the R^2 plane.)

We will calculate the quadrature points of J^{++}_{1j} and J^{++}_{2j} for the integration.

Also,

E[u_1(t) u_2(t)] = u_1(0) u_2(0) e^{\mu(D_{11}+D_{22}) t}. (6.78)

Therefore the second moment can be computed by

E[u^2] = E[u_1^2(t)] f_1^2(x) + E[u_2^2(t)] f_2^2(x) + 2 E[u_1(t) u_2(t)] f_1(x) f_2(x). (6.79)


By Ito's isometry, the second moment of the solution is

E[u^2(t,x;\omega)] = E[u_1^2] f_1^2(x) + E[u_2^2] f_2^2(x) + 2 E[u_1 u_2] f_1(x) f_2(x)
= \left[ e^{2\mu D_{11} t} + \frac{c (e^{2\mu D_{11} t} - 1) \int_0^{+\infty} e^{-\lambda z} z^{1-\alpha} dz}{\mu D_{11}} \right] f_1^2(x) + \left[ e^{2\mu D_{22} t} + \frac{c (e^{2\mu D_{22} t} - 1) \int_0^{+\infty} e^{-\lambda z} z^{1-\alpha} dz}{\mu D_{22}} \right] f_2^2(x) + 2 e^{\mu(D_{11}+D_{22}) t} f_1(x) f_2(x). (6.80)

In Figure 6.18, we plot the exact mean and second moment from Equations (6.68) and (6.80).

Figure 6.18: Exact mean, variance, and NSR versus time: evolution of the mean (left), evolution of the variance (middle), and the noise/signal percentage max(variance)/max(mean) (right). Marginal processes are TαS processes with c = 1, α = 0.5, λ = 10; heat diffusion µ = 0.01; the ++, −−, +−, and −+ parts are all dependent through Clayton copulas with the same dependence-structure parameter. The noise/signal ratio is 10% at T = 0.5.


6.8.2 Simulating the moment statistics by PCM/S

We compute the second moment of the solution for the heat equation (6.1) driven

by a bivariate TS Clayton Levy process with Levy measure given in Section 1.2.2.

We use the series representation in Equation (6.32) for PCM/S because the RVs in

the series representation (6.30) are not fully independent. By the assumption of the

symmetry of the Levy measure ν(~z) = ν(|~z|), the second moment for Equation (6.1)

can be written as

E[u^2] \approx \left[ e^{2\mu D_{11} t} + 2 \left( \frac{e^{2\mu D_{11} t} - 1}{\mu D_{11} T} \right) \left( \sum_{j=1}^{Q} E[(J^{++}_{1j})^2] \right) - 4 \left( \frac{e^{\mu D_{11} t} - 1}{\mu D_{11} T} \right)^2 \left( \sum_{j=1}^{Q} E[J^{++}_{1j}] \right)^2 \right] f_1^2(x)
+ \left[ e^{2\mu D_{22} t} + 2 \left( \frac{e^{2\mu D_{22} t} - 1}{\mu D_{22} T} \right) \left( \sum_{j=1}^{Q} E[(J^{++}_{2j})^2] \right) - 4 \left( \frac{e^{\mu D_{22} t} - 1}{\mu D_{22} T} \right)^2 \left( \sum_{j=1}^{Q} E[J^{++}_{2j}] \right)^2 \right] f_2^2(x)
+ 2 e^{\mu(D_{11}+D_{22}) t} f_1(x) f_2(x), \quad t \in [0, T], (6.81)

where J^{++}_{1j} = (\frac{\alpha \Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha} and J^{++}_{2j} = U_2^{++(-1)}\left( F^{-1}\left( W_j \,\middle|\, U_1^{++}\left( (\frac{\alpha \Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha} \right) \right) \right), as in Equation (6.32). In PCM/S, we generate q collocation points for J_{1j}, j = 1, ..., Q, and J_{2j}, j = 1, ..., Q, with s = 2qQ points in total. We also compute Equation (6.59) by MC with series representation (MC/S) with s samples of Equation (6.30), by the first-order Euler scheme given in Equation (6.65).
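Building collocation points for a jump-size RV known only through its moments can follow the classical route: Hankel matrix of moments, Cholesky factorization, recurrence coefficients, and the eigen-decomposition of the Jacobi matrix (Golub-Welsch). The sketch below is an illustrative stand-in for the moment-based construction cited above, checked against the uniform measure on [0, 1], whose Gauss rule is known in closed form.

```python
import numpy as np

def gauss_from_moments(mom, q):
    """q-point Gauss quadrature (nodes, weights) for a measure given by its raw
    moments mom = [m_0, ..., m_{2q}], via Hankel/Cholesky + Golub-Welsch."""
    M = np.array([[mom[i + j] for j in range(q + 1)] for i in range(q + 1)])
    R = np.linalg.cholesky(M).T                       # upper triangular factor
    alpha = np.empty(q)                               # recurrence coefficients
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, q):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
    beta = np.array([R[k, k] / R[k - 1, k - 1] for k in range(1, q)])
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = mom[0] * vecs[0, :] ** 2
    return nodes, weights

# check against the uniform measure on [0, 1]: m_k = 1/(k+1)
q = 3
nodes, weights = gauss_from_moments([1.0 / (k + 1) for k in range(2 * q + 1)], q)
# this reproduces the 3-point Gauss-Legendre rule mapped to [0, 1]
```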

We show the Q-convergence (with various λ) of PCM/S in Equation (6.64) in

Figure 6.20.

We investigate the q-convergence and Q-convergence of E[u^2] by PCM/S by computing Equation (6.81) in Figure 6.19 (left) for different NSR values: the Q-convergence is faster when q is larger; the convergence of E[u^2] slows down when Q ≥ 2, restricted by the convergence rate of the series representation given in Equation (6.32)^8.

Page 186: phd Thesis Mengdi Zheng (Summer) Brown Applied Maths

163

Figure 6.19: PCM/S (probabilistic) vs. MC/S (stochastic): error l2u2(t) of the solution for Equation (6.1) driven by a bivariate TS Clayton Levy process with the Levy measure given in Section 1.2.2, versus the number of truncations Q in the series representation (6.32) by PCM/S (left; NSR = 10.1% for c = 0.1, 7.2% for c = 0.05, 5.1% for c = 0.025), and versus the number of samples s in MC/S with the series representation (6.30) by computing Equation (6.59) (right; the reference line is C s^{-1/2}). t = 1, α = 0.5, λ = 5, µ = 0.01, τ = 1 (left and right). c = 0.1, NSR = 10.1% (right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10^{-2} (right).

Figure 6.20: Q-convergence (with various λ) of PCM/S in Equation (6.64): l2u2 error versus Q for λ = 10 (top left), λ = 5 (top right), λ = 1 (bottom left), and λ = 0.05 (bottom right), each for several jump intensities c with noise levels between about 2% and 10%. α = 0.5, µ = 0.01; the relative tolerance for the integration of the moments of the jump sizes is 1e−8. At 9.5% noise, Q = 2 already achieves 1e−6 accuracy.


The q-convergence of E[u^2] is restricted by the regularity of the PDFs of the J^{++}_{1j} and J^{++}_{2j} in Equation (6.81), as given in Chapter 5; PCM/S is more accurate when the NSR value is smaller. We also plot the s-convergence of MC/S with the series representation in Equation (6.30) with a fixed Q = 2 in Figure 6.19 (right): the s^{-1/2} convergence rate is achieved by the first and the second moments. In PCM/S, s = 2qQ. Now let us compare the error lines for c = 0.1, α = 0.5, λ = 5 in the left and right plots of Figure 6.19: MC/S is less accurate than PCM/S for a smaller sample size (around 100); however, MC/S has a faster convergence rate than PCM/S due to the slow Q-convergence rate of the series representation (6.30).

6.8.3 Simulating the joint PDF P (u1, u2, t) by the generalized

FP equation

We solve for the joint PDF P(u1, u2, t) in Equation (6.59) from the generalized FP Equation (6.40) (0 < α < 1) for \vec{L}(t) with the Levy measure given in Section 1.2.2. We solve Equation (6.40) with the same scheme as described in Section 2.3: RK2 in time with time step Δt and the multi-grid solver in space. We constructed the same multi-grid (in space) solver as in Section 2.3, where the joint PDF P is solved on a Cartesian tensor-product grid A (a domain of [−0.5, 2.5] in both u1 and u2 with 20 elements uniformly distributed along each axis); at each time step, for each fixed \vec{u}, the integral term in Equation (6.40) is evaluated on a refined grid B by interpolating the values of P on grid B from the grid A (here we take grid B to be a tensor product of 21 uniformly distributed points on [−0.1, 0.1] in each direction).

In Figure 6.21, we compute the joint PDF P(u1, u2, t = 1) of the SODE system in Equation (6.59) from the FP Equation (6.40), with the initial condition given by δ^G_{1000}.

^8 Therefore, in the right plot of Figure 6.19 we used Q = 2 for MC/S.


Figure 6.21: FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.40) (3D contour plot), joint histogram by MC/S (2D contour plot on the x-y plane), and horizontal (left, subfigure) and vertical (right, subfigure) slices at the peak of the density surfaces from the FP equation and MC/S. Final time t = 1 (left) and t = 1.5 (right). c = 0.5, α = 0.5, λ = 5, µ = 0.005, τ = 1 (left and right). In MC/S: first-order Euler scheme with time step Δt = 0.02, Q = 2 in the series representation (6.30), sample size s = 10^4; 40 bins in both u1 and u2 directions (left); 20 bins in both directions (right). In FP: initial condition given by δ^G_{1000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.

We also plot the MC/S histogram of P(u1, u2, t = 1). We show the agreement of the deterministic approach (FP equation) and the stochastic approach (MC/S) by computing the joint PDF and plotting the horizontal and vertical slices of the two density surfaces at the peak. Let us compare Figure 6.16, from LePage's representation, with Figure 6.21, from the Levy copula: 1) the MC/S simulation with the Levy copula costs more than 100 times the CPU time per sample of that with LePage's representation; 2) in Figure 6.16, the horizontal and vertical slices at the peak of the densities from MC and from the generalized FP equation match at t = 1 with NSR = 16.0% much better than those in Figure 6.21 at t = 1 with NSR = 11.2%.

6.8.4 Simulating moment statistics by TFPDE and PCM/S

We compute the second moment E[u2(t, x;ω)] by Equation (6.66) after computing

the joint PDF P (u1, u2, t) from Equation (6.40) for solutions of Equation (6.59). The

Page 189: phd Thesis Mengdi Zheng (Summer) Brown Applied Maths

166

Levy measure in Equation (6.59) is given in Section 1.2.2, that the components of

~L(t) are correlated by the Clayton Levy copula. The initial condition of Equation

(6.40) is given by Equation (6.67).
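The relative L2 error l2u2(t) used to compare the two approaches (as defined in Equation (6.62) and in the Figure 6.13 caption) can be sketched with a trapezoid rule over the spatial grid. This is a generic sketch; the sin^2 profile and the 1% perturbation below are assumptions for the check only.

```python
import numpy as np

def l2_rel_error(x, exact, numeric):
    """Relative L2 error ||exact - numeric||_{L2} / ||exact||_{L2} on a uniform
    grid x, using composite trapezoid weights."""
    w = np.full_like(x, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    num = np.sqrt(np.sum(w * (exact - numeric) ** 2))
    den = np.sqrt(np.sum(w * exact ** 2))
    return num / den

x = np.linspace(0.0, 1.0, 101)
exact = np.sin(np.pi * x) ** 2
err = l2_rel_error(x, exact, exact * 1.01)   # uniform 1% perturbation
# the relative error of a uniform 1% perturbation is 0.01 by construction
```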

Figure 6.22: TFPDE (deterministic) vs. PCM/S (stochastic): error l2u2(t) of the solution for Equation (6.1) driven by a bivariate TS Clayton Levy process with the Levy measure given in Section 1.2.2, versus time, obtained by PCM/S in Equation (6.81) (stochastic approach) and by the TFPDE (6.40) (deterministic approach). NSR ≤ 6.4% (left); NSR ≤ 30.1% (right). α = 0.5, λ = 5 (left and right); c = 0.05, µ = 0.001 (left); c = 1, µ = 0.005 (right). In TFPDE: initial condition given by δ^G_{1000} in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^{-3}.

In Figure 6.22, we compute the error l2u2(t), defined in Equation (6.62), versus time by both the deterministic method (TFPDE in Equation (6.40)) and the stochastic method (PCM/S in Equation (6.81)). As the NSR defined in Equation (6.60) grows with time, the errors of both methods grow. In Figure 6.22, PCM/S with Q = 2, q = 2 is ten times faster in CPU time than the TFPDE approach and ten times more accurate than the TFPDE at t = 1. However, PCM/S is not always this fast.


6.9 Heat equation driven by 10-dimensional Levy jump processes in LePage's representation

In this section, we solve the heat equation (6.1) with a 10-dimensional pure jump process with a Levy measure given by Equation (6.9) (d = 10) and a series representation given in Equation (6.10). Let us take the stochastic force in Equation (6.1) to be \sum_{i=1}^{d=10} f_i(x) dL_i(t;\omega) and the initial condition to be u_0(x) = \sum_{i=1}^{d=10} f_i(x).

6.9.1 Heat equation driven by 10-dimensional Levy jump

processes from MC/S

We first simulate the empirical histogram of the solution of the SODE system (6.37) for d = 10 by MC/S with the series representation in Equation (6.10) and the first-order Euler scheme in time, as in Equation (6.65). We then obtain the second moment E[u^2] of the heat equation (6.1) from the MC/S histogram. In Figure 6.23, we ran the MC/S simulation for s = 5 × 10^3, 1 × 10^4, 2 × 10^4, 4 × 10^4, 1 × 10^6 samples. Using the E[u^2] from MC/S with s = 1 × 10^6 samples as a reference, we plotted the difference between E[u^2] computed from the various sample sizes s and that from s = 1 × 10^6 (left), and the L2 norm (over the spatial domain [0, 1]) of these differences (right). Figure 6.23 shows that the s^{-1/2} convergence rate in simulating the second moment by MC/S is achieved, with sufficiently large Q as the number of truncations in the series representation (6.10). We may visualize the two-dimensional marginal distributions from the empirical joint histogram from MC/S as in Figures 6.24 and 6.25.
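The s^{-1/2} Monte Carlo rate can be demonstrated on a toy second-moment estimate. This is an illustrative sketch only: the target E[X^2] = 1 for X ~ N(0, 1) is an assumption standing in for the SPDE second moment.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_second_moment(s):
    """Plain Monte Carlo estimate of E[X^2] = 1 for X ~ N(0, 1) from s samples."""
    x = rng.standard_normal(s)
    return np.mean(x**2)

# the statistical error of the estimator shrinks roughly like s^{-1/2}
errors = {s: abs(mc_second_moment(s) - 1.0) for s in (10**3, 10**4, 10**5, 10**6)}
```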

We show the moment statistics of the solution for the heat equation (6.1) driven


Figure 6.23: s-convergence in MC/S with 10-dimensional Levy jump processes: difference in E[u^2] (left) between different sample sizes s and s = 10^6 (as a reference), and the L2 norm of these relative differences versus s, ||E[u^2_{MC}(s)] - E[u^2_{MC}(s = 10^6)]||_{L2([0,1])} / ||E[u^2_{MC}(s = 10^6)]||_{L2([0,1])}, against the reference line C s^{-1/2} (right). The heat equation (6.1) is driven by a 10-dimensional jump process with the Levy measure (6.9), solved by MC/S with the series representation (6.10). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01, time step Δt = 4 × 10^{-3}, and Q = 10. The NSR at T = 1 is 6.62%.

Figure 6.24: Samples of (u1, u2) (left) and the joint PDF of (u1, u2, ..., u10) on the (u1, u2) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e−3 (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6.


Figure 6.25: Samples of (u9, u10) (left) and the joint PDF of (u1, u2, ..., u10) on the (u9, u10) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e−3 (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6.

by a 10-dimensional jump process with a Levy measure (6.9) obtained by MC/S with

series representation (6.10) in Figure 6.26.

6.9.2 Heat equation driven by 10-dimensional Levy jump

processes from PCM/S

We simulate the second moment E[u^2] of the heat equation (6.1) driven by a 10-dimensional pure jump process with a Levy measure given by Equation (6.9) and the series representation (6.10), by the same PCM/S method described in Section 2.2, except that here d = 10 instead of d = 2.

In Figure 6.27, we ran the PCM/S simulation for the numbers of truncations Q = 1, 2, 4, 8, 16 in the series representation (6.10). Using the E[u^2] from PCM/S with Q = 16 as a reference, we plotted the difference between E[u^2] from the other values of Q and that from Q = 16 (left), and the L2 norm (over the spatial domain


Figure 6.26: First two moments of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with the Levy measure (6.9), obtained by MC/S with the series representation (6.10), at final time T = 0.5 (left) and T = 1 (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e−3 (first-order Euler scheme), Q = 10, and sample size s = 10^6.

Figure 6.27: Q-convergence in PCM/S with 10-dimensional Levy jump processes: difference in E[u^2] (left) between different series truncation orders Q and Q = 16 (as a reference), and the L2 norm of these relative differences versus Q, ||E[u^2_{PCM}(Q)] - E[u^2_{PCM}(16)]||_{L2} / ||E[u^2_{PCM}(16)]||_{L2} (right). The heat equation (6.1) is driven by a 10-dimensional jump process with the Levy measure (6.9) and the series representation (6.10). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01. The NSR at T = 1 is 6.62%.


[0, 1]) of these differences (right). Figure 6.27 shows that, by the PCM/S method, the simulation of E[u^2] converges with respect to Q. In Figure 6.28, we

Figure 6.28: MC/S vs. PCM/S with 10-dimensional Levy jump processes: differences between the first two moments computed from MC/S and from PCM/S at final time T = 0.5 (left, NSR = 4.75%) and T = 1 (right, NSR = 6.62%), and the moments E[u_{PCM}] and E[u^2_{PCM}] at T = 1 (subfigure). The heat equation (6.1) is driven by a 10-dimensional jump process with the Levy measure (6.9) and the series representation (6.10). c = 0.1, α = 0.5, λ = 10, µ = 0.01. In MC/S: time step Δt = 4 × 10^{-3}, Q = 10. In PCM/S: Q = 16.

show that both the MC/S and PCM/S methods converge to the same solution of the heat equation (6.1), by computing the differences in E[u] and E[u^2] between MC/S and PCM/S at the two final times T = 0.5 and T = 1.

6.9.3 Simulating the joint PDF P (u1, u2, ..., u10) by the ANOVA

decomposition of the generalized FP equation

We solve the marginal PDFs p_i(u_i, t) from Equation (6.49) for ANOVA with effective dimension κ = 1 (1D-ANOVA-FP) and the joint PDFs p_{ij}(u_i, u_j, t) from Equation (6.50) for ANOVA with effective dimension κ = 2 (2D-ANOVA-FP). We compute the moment statistics (E[u] and E[u^2]) of the heat equation (6.1) driven by a 10-dimensional pure jump process with a Levy measure given by Equation (6.9) from 1D-ANOVA-FP and 2D-ANOVA-FP. We also compute the moments from PCM/S, discussed in Section 4.2, as a reference.

ANOVA decomposition of the initial condition for 1D-ANOVA-FP and 2D-ANOVA-FP

We first explain why we do not use the tensor product of Gaussian functions (as one of the delta sequences) to approximate the delta function for the density P(u1, u2, ..., u10) at t = 0 as the initial condition for the 1D-ANOVA-FP and 2D-ANOVA-FP solvers. We use the standard ANOVA with the uniform measure here. First we approximate P(\vec{x}, t = 0) = \delta(\vec{x} - (1, 1, ..., 1)) by a product of 10 Gaussian functions (we use the same parameter A to adjust the sharpness of the Gaussian kernel in all dimensions):

P(\vec{x}, t = 0) = \frac{1}{(A\pi)^{d/2}} \prod_{i=1}^{d=10} \exp\left[ -\frac{(x_i - 1)^2}{A} \right]. (6.82)

Then, by setting the 'measure' (µ) in the ANOVA decomposition to be the uniform measure, we have:

P_0(t = 0) = \int_{R^d} P(\vec{x}, t = 0) d\mu(\vec{x}) = 1; (6.83)

for 1 \le i \le d,

P_i(x_i, t = 0) = \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_i - 1)^2}{A} \right] - 1; (6.84)


for 1 \le i, j \le d,

P_{ij}(x_i, x_j, t = 0) = \left( \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_i - 1)^2}{A} \right] \right) \left( \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_j - 1)^2}{A} \right] \right) - \left( \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_i - 1)^2}{A} \right] - 1 \right) - \left( \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_j - 1)^2}{A} \right] - 1 \right) - 1
= \left[ \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_i - 1)^2}{A} \right] - 1 \right] \left[ \frac{1}{(A\pi)^{1/2}} \exp\left[ -\frac{(x_j - 1)^2}{A} \right] - 1 \right]. (6.85)
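The components in Equations (6.83)-(6.85) can be checked numerically: for d = 2 the order-2 ANOVA sum reconstructs the product of Gaussian factors exactly, while for d = 3 the truncation misses exactly the third-order interaction term. A minimal sketch, with the evaluation points x2 and x3 chosen arbitrarily for the check:

```python
import numpy as np

def g(x, A):
    """One-dimensional Gaussian factor appearing in Eq. (6.82)."""
    return np.exp(-(x - 1.0) ** 2 / A) / np.sqrt(A * np.pi)

def anova2(xs, A):
    """Order-2 ANOVA reconstruction P0 + sum_i Pi + sum_{i<j} Pij of the
    product of Gaussian factors, with components as in Eqs. (6.83)-(6.85)."""
    gi = np.array([g(x, A) for x in xs])
    total = 1.0                                 # P0
    total += np.sum(gi - 1.0)                   # first-order terms Pi
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            total += (gi[i] - 1.0) * (gi[j] - 1.0)  # second-order terms Pij
    return total

A = 0.5
x2 = [0.8, 1.2]
exact2 = g(x2[0], A) * g(x2[1], A)
# d = 2: the order-2 ANOVA is exact, anova2(x2, A) == exact2
x3 = [0.8, 1.2, 0.5]
gap = g(x3[0], A) * g(x3[1], A) * g(x3[2], A) - anova2(x3, A)
# d = 3: the gap equals the omitted third-order term (g1-1)(g2-1)(g3-1)
```

A short expansion confirms the d = 3 identity: the order-2 sum equals 1 - (g1 + g2 + g3) + (g1 g2 + g1 g3 + g2 g3), and subtracting it from g1 g2 g3 leaves exactly (g1 - 1)(g2 - 1)(g3 - 1).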

In Figures 6.29, 6.30, and 6.31, we take d = 3 in Equation (6.82) and plot the original function (a product of three Gaussian functions) and its ANOVA approximation with an effective dimension of two. We plot the function on the x1-x2 plane for fixed values of x3. By choosing different values of A (i.e., different sharpness of the original tensor-product function in Equation (6.82)), we observe that the sharper the product function, the more it differs from its ANOVA approximation with effective dimension of two. However, in order to approximate the initial condition P(\vec{x}, t = 0) = \delta(\vec{x} - (1, 1, ..., 1)), we need a very sharp peak; otherwise, we introduce error starting from the initial condition.

Moment statistics of the heat equation with 10-dimensional Levy processes by 1D-ANOVA-FP and 2D-ANOVA-FP

Therefore, we run the MC/S simulation up to time t0 and take the empirical histograms along one or two variables as the initial conditions of Equations (6.49) and (6.50) for the marginal distributions. Both Equations (6.49) and (6.50) are simulated on multi-grid solvers similar to the one described in Section 2.3. For example, in Equation (6.50), we evaluate the first two terms on the right-hand side on a tensor-product grid of two uniformly distributed meshes with M elements in each direction



Figure 6.29: The function in Equation (6.82) with d = 3 (left column) and its ANOVA approximation with effective dimension two (right column), shown on the x1-x2 plane at fixed x3 = 0.53 (top) and x3 = 0.95 (bottom). A = 0.5.


Figure 6.30: The function in Equation (6.82) with d = 3 (left column) and its ANOVA approximation with effective dimension two (right column), shown on the x1-x2 plane at fixed x3 = 0.53 (top) and x3 = 0.95 (bottom). A = 0.1.



Figure 6.31: The function in Equation (6.82) with d = 3 (left column) and its ANOVA approximation with effective dimension two (right column), shown on the x1-x2 plane at fixed x3 = 0.53 (top) and x3 = 0.95 (bottom). A = 0.01.

and q GLL collocation points on each element (grid A). We evaluate the last fractional-derivative term in Equation (6.50) by the FD scheme (6.54) on a more refined equidistant grid (grid B) with grid size h. We take γ1 = 0, γ2 = 1 + α/2, and γ3 = −α/2 in the FD scheme (6.54). At each time step, we obtain the values of the solution on the query grid B by interpolating them from grid A.

In Figure 6.32, we compute E[u] of the heat equation (6.1) driven by a 10-dimensional jump process with Levy measure (6.9) by ANOVA decomposition of the joint PDF P(u1, u2, ..., u10) at effective dimensions κ = 1 and κ = 2 (1D-ANOVA-FP in Equation (6.49) and 2D-ANOVA-FP in Equation (6.50)). We also compute E[u] from PCM/S with truncation Q = 10 in the series representation (6.10) as a reference. First, Figure 6.32 shows that the means E[u] computed from 1D-ANOVA-FP and 2D-ANOVA-FP both differ from that computed from PCM/S at the order of 1 × 10−4 for this 10-dimensional problem. Second, in Figure 6.32, the error from ANOVA grows



Figure 6.32: 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Levy jump processes: the mean (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with Levy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The relative L2 norms of the difference in E[u] between these methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10−4. In 1D-ANOVA-FP: ∆t = 4 × 10−3 in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: ∆t = 4 × 10−3 in RK2, M = 5 elements in each direction, q2 = 16 GLL points on each element. In PCM/S: Q = 10 in the series representation (6.10). Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 104, ∆t = 4 × 10−3. NSR ≈ 18.24% at T = 1.

slowly with respect to time (right panel). At T = 0.6, the error, at the order of 1 × 10−4, comes mainly from the initial condition provided by MC/S.

In Figure 6.33, we compute E[u2] of the heat equation (6.1) driven by a 10-dimensional jump process with Levy measure (6.9) by ANOVA decomposition of the joint PDF P(u1, u2, ..., u10) at effective dimensions κ = 1 and κ = 2 (1D-ANOVA-FP and 2D-ANOVA-FP) as

E[u^2(x, t)] = ∑_{k=1}^{d=10} E[u_k^2(t)] f_k^2(x) + 2 ∑_{i=1}^{d−1=9} ∑_{j=i+1}^{d=10} E[u_i u_j] f_i(x) f_j(x). (6.86)
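Equation (6.86) is a quadratic form in the modal second moments. A short Python sketch (with hypothetical sine modes f_k and a synthetic second-moment matrix standing in for the thesis data) shows the assembly and checks it against the einsum form of the same quadratic form:

```python
import numpy as np

d = 10
x = np.linspace(0.0, 1.0, 201)
# hypothetical spatial modes f_k(x) (stand-ins, not the thesis basis)
f = np.array([np.sin((k + 1)*np.pi*x) for k in range(d)])

rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
Euu = A @ A.T / d                      # stand-in second-moment matrix E[u_i u_j]

# Eq. (6.86) as a quadratic form: E[u^2](x) = sum_{i,j} E[u_i u_j] f_i(x) f_j(x)
second_moment = np.einsum('ix,ij,jx->x', f, Euu, f)

# explicit diagonal + off-diagonal sum, exactly as written in Eq. (6.86)
direct = sum(Euu[k, k]*f[k]**2 for k in range(d))
direct = direct + 2.0*sum(Euu[i, j]*f[i]*f[j]
                          for i in range(d) for j in range(i + 1, d))
```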

In 1D-ANOVA-FP, we approximate E[u_i u_j] by E[u_i]E[u_j] using the marginal distributions p_i(u_i, t) and p_j(u_j, t). In 2D-ANOVA-FP, we compute E[u_i u_j] from the two-dimensional marginal distribution p_ij(u_i, u_j, t). We also compute E[u2] from PCM/S with



Figure 6.33: 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Levy jump processes: the second moment (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with Levy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The relative L2 norms of the difference in E[u2] between these methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10−4. In 1D-ANOVA-FP: ∆t = 4 × 10−3 in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: ∆t = 4 × 10−3 in RK2, M = 5 elements in each direction, q2 = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 104, ∆t = 4 × 10−3. In PCM/S: Q = 10 in the series representation (6.10). NSR ≈ 18.24% at T = 1.

truncation Q = 10 in the series representation (6.10) as a reference. First, Figure 6.33 shows that 1D-ANOVA-FP (κ = 1) does not compute the second moment E[u2] as accurately as 2D-ANOVA-FP (κ = 2), compared to the E[u2] computed from PCM/S (left panel). Second, we observe that the difference between ANOVA and PCM/S grows slowly in time. The errors of 1D-ANOVA-FP and 2D-ANOVA-FP come mainly from the initial condition provided by MC/S.

In Figure 6.34, we show the evolution of the marginal distributions p_i(x_i, t), i = 1, ..., d, computed from the 1D-ANOVA-FP in Equation (6.49). The Levy jump process in the heat equation (6.1) diffuses the marginal distributions.

In Figure 6.35, we show the mean E[u] of the heat equation (6.1) at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. It shows that 1D-ANOVA is sufficient to compute the mean accurately.



Figure 6.34: Evolution of the marginal distributions p_i(x_i, t) at final times t = 0.6, 0.7, 0.8, 0.9, 1. c = 1, α = 0.5, λ = 10, µ = 10−4. Initial condition from MC: t0 = 0.5, s = 104, dt = 4 × 10−3, Q = 10. 1D-ANOVA-FP: RK2 with time step dt = 4 × 10−3, 30 elements with 4 GLL points on each element; the tempered fractional derivative was computed by the second-order FD scheme.



Figure 6.35: The mean E[u] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10−4. Initial condition from MC: s = 104, dt = 4 × 10−3, Q = 10. 1D-ANOVA-FP: RK2 with dt = 4 × 10−3, 30 elements with 4 GLL points on each element. NSR ≈ 18.24%.


In Figure 6.36, we show the second moment E[u2] of the heat equation (6.1) at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. It shows that 1D-ANOVA is not sufficient to compute the second moment accurately.


Figure 6.36: The second moment E[u2] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10−4. Initial condition from MC: s = 104, dt = 4 × 10−3, Q = 10. 1D-ANOVA-FP: RK2 with dt = 4 × 10−3, 30 elements with 4 GLL points on each element. NSR ≈ 18.24%.

In Figure 6.37, we show the second moment E[u2] of the heat equation (6.1) at different final times by PCM (Q = 10) and by solving the 2D-ANOVA-FP equations. It shows that 2D-ANOVA-FP computes the second moment more accurately than 1D-ANOVA-FP.



Figure 6.37: The second moment E[u2] at different final times by PCM (Q = 10) and by solving the 2D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10−4. Initial condition from MC: s = 104, dt = 4 × 10−3, Q = 10. 2D-ANOVA-FP: RK2 with dt = 4 × 10−3, 5 elements with 4 GLL points on each element. NSR ranges between 10.17% and 15.16% across the panels.


Sensitivity index of two-dimensional modes in the ANOVA decomposition of P(u1, u2, ..., u10)

In order to reduce the number of 2D-ANOVA-FP equations (45 of them), we introduce the sensitivity index (criterion one)

S_ij = E[x_i x_j] / ∑_{m=1}^{d} ∑_{n=m+1}^{d} E[x_m x_n]. (6.87)

We compute this sensitivity index for the 45 pairs E[x_i x_j] from the MC data used as the initial condition in Figure 6.38. If some S_ij were dominantly larger than the others, we would only run the 2D-ANOVA-FP equations for the p_ij(u_i, u_j, t) whose sensitivity index S_ij lies above a certain threshold.

We also consider a second definition of the sensitivity index (criterion two)

S_ij = ||E[x_i x_j] f_i(x) f_j(x)||_{L2([0,1])} / ∑_{m=1}^{d} ∑_{n=m+1}^{d} ||E[x_m x_n] f_m(x) f_n(x)||_{L2([0,1])}. (6.88)

We show the sensitivity indices for the two definitions in Equations (6.87) and (6.88) in Figure 6.38. However, we do not observe any pair (i, j) with a significantly larger sensitivity index than the other pairs. This shows that all 45 2D-ANOVA terms p_ij(u_i, u_j, t) must be considered. We introduce this procedure because the sensitivity index depends on the Levy measure of the 10-dimensional Levy jump process. The example computed in Figure 6.38 has a very isotropic Levy measure; therefore, the sensitivity index shows that each pair p_ij(u_i, u_j) is equally important.
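Both sensitivity criteria reduce to simple post-processing of the MC samples. A hedged Python sketch (with synthetic samples and hypothetical sine modes standing in for the thesis data and basis):

```python
import numpy as np

rng = np.random.default_rng(1)
d, s = 10, 10_000
# stand-in MC/S samples of (u_1, ..., u_d) at t0 (not the thesis data)
U = 1.0 + 0.05*rng.standard_normal((s, d))
M2 = U.T @ U / s                      # empirical second moments E[u_i u_j]

iu, ju = np.triu_indices(d, k=1)      # the 45 pairs i < j
S1 = M2[iu, ju] / M2[iu, ju].sum()    # criterion one, Eq. (6.87)

# criterion two, Eq. (6.88): ||E[x_i x_j] f_i f_j|| = |E[x_i x_j]| * ||f_i f_j||
x = np.linspace(0.0, 1.0, 501)
f = np.array([np.sin((k + 1)*np.pi*x) for k in range(d)])   # hypothetical modes
w = np.array([np.sqrt(np.trapz((f[i]*f[j])**2, x)) for i, j in zip(iu, ju)])
S2 = np.abs(M2[iu, ju])*w
S2 = S2 / S2.sum()
```

By construction each criterion sums to one over the 45 pairs, so a dominant pair would stand out directly.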


Figure 6.38: Left: sensitivity index defined in Equation (6.87) for each pair (i, j), j ≥ i. Right: sensitivity index defined in Equation (6.88) for each pair (i, j), j ≥ i. Both are computed from the MC data at t0 = 0.5 with s = 104 samples.

6.9.4 Simulating the moment statistics by 2D-ANOVA-FP with dimension d = 4, 6, 10, 14

Let us examine the error in E[u] and E[u2] simulated by 2D-ANOVA-FP (6.50) versus the dimension d. We simulate Equation (6.1) driven by a d-dimensional jump process with the Levy measure (6.9) by ANOVA decomposition of the joint PDF P(u1, u2, ..., ud) at effective dimension κ = 2 (2D-ANOVA-FP). We set up the parameters so that the NSR defined in Equation (6.60) is almost the same for the different dimensions d = 4, 6, 10, 14. We use E[u] and E[u2] computed from PCM/S with Q = 16 as our reference solution. We define the relative L2 norm of the difference in moments computed from 2D-ANOVA-FP (6.50) and PCM/S as follows:

l2u1diff(t) = ||E[u_{2D−ANOVA−FP}(x, t; ω)] − E[u_{PCM}(x, t; ω)]||_{L2([0,1])} / ||E[u_{PCM}(x, t; ω)]||_{L2([0,1])}, (6.89)


and

l2u2diff(t) = ||E[u^2_{2D−ANOVA−FP}(x, t; ω)] − E[u^2_{PCM}(x, t; ω)]||_{L2([0,1])} / ||E[u^2_{PCM}(x, t; ω)]||_{L2([0,1])}. (6.90)

The initial condition of 2D-ANOVA-FP (6.50) is simulated by MC/S with s = 104 samples up to the initial time t0 = 0.5. From t0, we use 2D-ANOVA-FP to simulate E[u] and E[u2] up to the final time T. Therefore, the initial condition for 2D-ANOVA-FP already contains the sampling error from MC/S. In order to have a fair comparison between different dimensions d, in Figure 6.39, we subtract the difference at t0 = 0.5 from the difference at T to define the error growth of the 2D-ANOVA-FP method as:

l2u1rel(T; t0) = |l2u1diff(T) − l2u1diff(t0)|, (6.91)

and

l2u2rel(T; t0) = |l2u2diff(T) − l2u2diff(t0)|. (6.92)
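The error measures in Equations (6.89)-(6.92) can be sketched in a few lines of Python; the fields below are illustrative stand-ins, not the thesis solutions, and the L2 norms are approximated by trapezoidal quadrature.

```python
import numpy as np

def l2diff(u_approx, u_ref, x):
    # relative L2([0,1]) difference, as in Eqs. (6.89)-(6.90)
    num = np.sqrt(np.trapz((u_approx - u_ref)**2, x))
    return num / np.sqrt(np.trapz(u_ref**2, x))

def l2rel(diff_T, diff_t0):
    # error growth between t0 and T, as in Eqs. (6.91)-(6.92)
    return abs(diff_T - diff_t0)

x = np.linspace(0.0, 1.0, 401)
u_pcm = 2.0 + np.sin(np.pi*x)              # stand-in PCM/S reference moment
u_anova = u_pcm + 1e-4*np.cos(np.pi*x)     # stand-in ANOVA-FP moment
dT = l2diff(u_anova, u_pcm, x)
```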

We compute Equation (6.50) with the same resolution in time and space for all the dimensions considered. In Figure 6.39 (left and middle), the reliability of our 2D-ANOVA-FP method for calculating the first two moments of the solution of the diffusion equation is demonstrated by the fact that the error growths l2u1rel(T; t0) and l2u2rel(T; t0) versus time all stay within one order of magnitude from t0 = 0.5 to T = 1 (with NSR ≈ 20%), except l2u2rel(T; t0) for d = 14. In Figure 6.39 (right), the error growth l2u2rel(T = 1; t0) is 100 times larger for d = 14 than for d = 4, because 91 equations of the form of Equation (6.50) are computed for d = 14, while only 6 are computed for d = 4. At the same time, the CPU time for 2D-ANOVA-FP with d = 14 is 100 times longer than for d = 2. If we were to compute the d-dimensional FP equation (6.40) directly, with M elements and q GLL points in each dimension, the cost ratio for d = 14 over d = 2 would be (Mq)^{12}. In Figure 6.39, where M = 5 and q = 4,



Figure 6.39: Error growth by 2D-ANOVA-FP in different dimensions d: the error growth l2u1rel(T; t0) in E[u] defined in Equation (6.91) versus final time T (left); the error growth l2u2rel(T; t0) in E[u2] defined in Equation (6.92) versus T (middle); l2u1rel(T = 1; t0), l2u2rel(T = 1; t0), and CPU time versus dimension d (right). We consider the diffusion equation (6.1) driven by a d-dimensional jump process with Levy measure (6.9), computed by 2D-ANOVA-FP and PCM/S. c = 1, α = 0.5, µ = 10−4; λ = 8.2, 9, 10, 11 for d = 4, 6, 10, 14, respectively. In Equation (6.49): ∆t = 4 × 10−3 in RK2, M = 30 elements, q = 4 GLL points on each element. In Equation (6.50): ∆t = 4 × 10−3 in RK2, M = 5 elements in each direction, q2 = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 104, ∆t = 4 × 10−3, and Q = 16. In PCM/S: Q = 16 in the series representation (6.10). NSR ≈ 20.5% at T = 1 for all dimensions. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab.

this ratio would be 20^{12}, much larger than 100.
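The cost bookkeeping above is easy to verify directly (a trivial arithmetic check of the counts quoted in this section):

```python
# Cost bookkeeping for 2D-ANOVA-FP versus the full FP solve (Section 6.9.4)
M, q = 5, 4                        # elements and GLL points per direction in Eq. (6.50)
full_fp_ratio = (M*q)**(14 - 2)    # full d-dim FP: cost ratio of d = 14 over d = 2
n_pairs = lambda d: d*(d - 1)//2   # number of 2D-ANOVA-FP equations in dimension d
```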

6.10 Conclusions

In this chapter, we focused on computing the moment statistics of the stochastic parabolic diffusion equation driven by a multi-dimensional, infinite-activity, pure jump Levy white noise with correlated components, as in Equation (6.1). We approached this problem with two probabilistic methods in uncertainty quantification (MC/S and PCM/S) and a deterministic method (the generalized FP equation). We computed the moment statistics with two ways of describing the dependence structure of the components of the Levy process: LePage's series representation in Section 1.2.1 (where the d-dimensional TαS process was taken as an example) and the Levy copula in Section 1.2.2 (where the Clayton family of Levy copulas was taken as an example). See Figure 6.9 for an overview of the scope of this chapter.

In the MC/S method (probabilistic), we generalized the series representation to d dimensions in Equation (6.10) (new) for a Levy process with the Levy measure in Equation (6.9) in LePage's representation. We used the series representation (6.30) to simulate the Levy process when the dependence structure was described by the Clayton family of copulas. The SPDE (6.1) was decomposed into an SODE system (6.37). We simulated the SODE system by the first-order Euler scheme to obtain the moment statistics of the diffusion equation (6.1). For both descriptions of the dependence structure, we achieved the s^{−1/2} convergence in computing the second moments: see Figure 6.11 for LePage's representation and Figure 6.19 for the Clayton copula's description of the dependence structure. Since the series representation of a multi-dimensional Levy jump process often requires a large number of RVs to simulate the sample paths in the MC/S method, MC/S is reliable but computationally costly, although it can be parallelized.

In the PCM/S method (probabilistic), we used the series representation (6.10) for the Levy process described by LePage's representation, and we modified the representation in Equation (6.30) into Equation (6.32) (new) in order to have independent RVs in the series representation when the dependence structure among the components of the Levy process is described by the Clayton copula. The convergence in the second moments of the solution of the diffusion equation (6.1) is more sensitive to the truncation order Q in the series representation than to the number of collocation points q for each RV, as shown in Figure 6.11 (LePage) and Figure 6.19 (Clayton). This means that the convergence of the moment statistics is restricted by the convergence of the series representation. The advantages of PCM/S are that it can be parallelized (as can MC/S) and that, for the same Levy process, it usually converges faster in


moment statistics than MC/S with respect to the number of sample points in the

random space.

In the deterministic method, we derived the generalized FP equation for the joint PDF of the SODE system (6.37), as in Equation (6.40). We computed this equation directly when the dimension is low (d = 2). The joint PDF simulated from the generalized FP equation matched the joint empirical histogram simulated from MC/S, as in Figure 6.16 (LePage, d = 2) and Figure 6.21 (Clayton copula, d = 2); however, MC/S is much slower than the deterministic method. For d = 2, the moment statistics simulated by the generalized FP equations were compared with those simulated by PCM/S, as in Figure 6.17 (LePage, d = 2) and Figure 6.22 (Clayton copula, d = 2). We observed that the error of the deterministic FP equation grew more slowly than that of PCM/S. However, the deterministic FP equation method suffers from the error in the initial condition, whether the initial condition is approximated by a delta sequence (for the delta function at t = 0) or obtained from the empirical histogram of an MC/S simulation up to time t0.

We demonstrated the accuracy of our three methods - MC/S, PCM/S, and the generalized FP equation - by simulating a 10-dimensional problem in Section 4. The s^{−1/2} convergence of the MC/S method was achieved, as shown in Figure 6.23. The Q-convergence of the PCM/S method was obtained in Figure 6.27. For the deterministic method, instead of solving the 10-dimensional PDE in Equation (6.40) for the joint PDF of the SODE system (6.37), we introduced the ANOVA decomposition (6.45) to obtain the marginal distributions from Equations (6.49) (1D-ANOVA-FP) and (6.50) (2D-ANOVA-FP), as far as the lower-order moments of the diffusion equation (6.1) were concerned. Therefore, instead of solving one 10-dimensional equation, we solved 10 one-dimensional PDEs for 1D-ANOVA-FP, and we added 45 two-dimensional PDEs for 2D-ANOVA-FP. In terms

of computing the mean of the diffusion equation, both 1D-ANOVA-FP and 2D-ANOVA-FP differed by only 10−4 from the mean computed from PCM/S, as shown in Figure 6.32. For the second moments of the solution of the diffusion equation, 2D-ANOVA-FP differed from PCM/S much less than 1D-ANOVA-FP, as shown in Figure 6.33. Both Figures 6.32 and 6.33 showed that the error of the ANOVA-FP method grows slowly in time. In the future, this work, especially the combination of ANOVA and the FP equation, can be applied to real applications such as mathematical finance (e.g., simulating a market index by correlated Levy jump processes), and it can also be tested in much higher dimensions than d = 10. We also hope to work on nonlinear SPDEs driven by multi-dimensional Levy noises and on SPDEs driven by multiplicative multi-dimensional Levy noises.


Chapter Seven

Summary and future work


7.1 Summary

We summarize the content of this thesis in Figure 7.1.

Figure 7.1: Summary of thesis

We first developed an adaptive multi-element probabilistic collocation method (ME-PCM) to compute the moment statistics of SPDEs driven by arbitrary discrete random variables (RVs) with finite moments. The orthogonal polynomials in ME-PCM were constructed numerically by five different methods. The adaptivity is based on a local variance criterion. We applied our method to show h-p convergence for a Korteweg-de Vries (KdV) equation subject to noises represented by discrete and continuous RVs.

Secondly, for nonlinear SPDEs driven by stochastic processes that can be represented by discrete RVs with arbitrary measures of finite moments, we proposed an adaptive Wick-Malliavin (WM) expansion in terms of the Malliavin derivative of order Q to simplify the propagator of generalized polynomial chaos (gPC) of order P (a system of deterministic equations for the gPC coefficients) and to control the error growth with respect to time. We applied the WM expansion to the simulation of the moment statistics of a stochastic reaction equation and a Burgers equation, each driven by multiple discrete RVs. By analyzing the computational complexity of WM for the stochastic Burgers equation, we identified a significant speed-up with respect to gPC in high dimensions.

Thirdly, we developed new probabilistic (MC, PCM) and deterministic (generalized Fokker-Planck equation) approaches for the moment statistics of SPDEs driven by pure jump tempered α-stable (TαS) Levy processes, using the compound Poisson approximation and the series representation to represent the TαS process by RVs. We applied our methods to stochastic reaction-diffusion equations driven by one-dimensional additive TαS white noises, where the generalized Fokker-Planck (FP) equation happens to be a tempered fractional PDE (TFPDE).

Fourthly, we extended our probabilistic and deterministic approaches to SPDEs driven by multi-dimensional Levy processes with dependent components, whose dependence structure was described in two ways: LePage's representation and the Levy copula. As an example, we applied our methods to diffusion equations driven by multi-dimensional TαS Levy processes, which can be decomposed into an SODE system by Galerkin projection. In a moderate dimension of 10, we used the analysis-of-variance (ANOVA) decomposition to obtain the marginal distributions of the joint PDF of the SODE system, as far as the lower-order moment statistics are concerned.


7.2 Future work

Lastly, we discuss a few ideas in uncertainty quantification (UQ) of SPDEs driven

by Levy jump processes built upon the work presented in this thesis.

• More dimensions: The first extension of our work can be done by going into

higher dimensions.

– With the concept of P − Q adaptivity developed in our work on WM

approximation for nonlinear SPDEs, we may consider a stochastic Burgers

equation driven by a larger number of RVs (for example, 100). In this case,

since the WM propagator will contain many equations, some adaptivity

criterion over time shall be developed.

– In our last project, we combined ANOVA (in the effective dimension of 2)

with the generalized FP (2D-ANOVA-FP). We may consider either higher

moments with higher effective dimensions in the ANOVA expansion or still

consider the second moments computed from the 2D-ANOVA-FP but with

the multi-dimensional Levy jump process in higher dimensions (such as 50

or 100). We have seen that LePage’s representation costs much less CPU

time than the Levy copula to describe the dependence structure. Therefore, we suggest using LePage's representation in higher-dimensional computations. However, further investigation is needed into how effective LePage's representation is at describing the dependence structure.

– We know that for low and moderate dimensions, PCM is more efficient than MC for SPDEs driven by Gaussian processes. However, such a comparison between PCM and MC across dimensionality has not been investigated, at least for specific equations: it has been done for continuous processes but not for discrete ones.

• Other SPDEs: In this thesis, we considered stochastic KdV equations, stochastic reaction-diffusion equations, and stochastic Burgers equations.

– A natural extension is to simulate the stochastic Euler equations and the stochastic Navier-Stokes equations driven by Levy processes.

– In the last part of this work, we solved a stochastic reaction-diffusion equation driven by an additive multi-dimensional TαS Levy process. A natural extension is to deal with a multiplicative TαS Levy process.

– We solved a linear stochastic reaction-diffusion equation. Another natural extension is to solve a nonlinear SPDE driven by a multiplicative TαS Levy noise.

• Other Levy jump processes: We mostly considered the TαS process as an example of Levy jump processes because we wanted to connect this work to tempered fractional PDEs. However, the range of Levy pure jump processes (with infinite activity) is much larger.

– For one-dimensional TαS Levy processes, the first natural extension is to make the Levy measure asymmetric, that is, to take the Levy measure to be ν(x) = (c_−/|x|^{1+α_−}) e^{−λ_−|x|} I_{x<0} + (c_+/|x|^{1+α_+}) e^{−λ_+|x|} I_{x>0}, with α_− ≠ α_+ and λ_− ≠ λ_+.
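The asymmetric tempered-stable Levy density above is straightforward to evaluate; a minimal sketch (an illustrative helper, not thesis code):

```python
import numpy as np

def asym_ts_density(x, c_m, c_p, a_m, a_p, l_m, l_p):
    """Asymmetric tempered-stable Levy density:
    nu(x) = c_-/|x|^(1+a_-) * exp(-l_-|x|) for x < 0,
            c_+/|x|^(1+a_+) * exp(-l_+|x|) for x > 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    neg, pos = x < 0, x > 0
    out[neg] = c_m * np.exp(-l_m*np.abs(x[neg])) / np.abs(x[neg])**(1.0 + a_m)
    out[pos] = c_p * np.exp(-l_p*x[pos]) / x[pos]**(1.0 + a_p)
    return out
```

Setting c_− = c_+, α_− = α_+, and λ_− = λ_+ recovers the symmetric measure (6.9) used throughout this chapter.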

– For multi-dimensional TαS Levy processes, we have considered LePage's representation of the Levy measure with a TαS distribution for the size of the jumps and a uniform distribution for the direction of the jumps. A natural and more practical extension is therefore to decrease the level of isotropy, in other words, to consider non-uniform distributions for the direction of the jumps. It would be useful to parameterize the level of isotropy and to investigate the convergence rate of the series representation as the isotropy decreases. From our numerical experiments, we learned that the convergence of the series representation (for a multi-dimensional Levy process) is a main constraint on the convergence rate of the moment statistics of the SPDE being solved.

– We know that the Gamma process is the limiting case of a TαS Levy process as α (in the Levy measure) goes to 0. Work can be done to compare the asymptotic behavior of the solution of an SPDE driven by a TαS Levy process as α → 0 with that of an SPDE driven by a Gamma process.

– Another extension of TαS Levy processes is the generalized hyperbolic model, whose marginal distribution is slightly more complicated than that of a TαS subordinator with α = 1/2 (an inverse Gaussian process). The marginal distribution of a TαS subordinator with α = 1/2 is p(x) = c(χ, ξ) x^{−3/2} e^{−(χx + ξ/x)/2} I_{x>0}. The generalized hyperbolic model has a marginal distribution p(x) = c(λ, χ, ξ) x^{λ−1} e^{−(χx + ξ/x)/2} I_{x>0}. This is a process with infinite variance, and it has exponential tails for both the Levy measure and the marginal distribution. When λ → −1/2, it reduces to an inverse Gaussian process.
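The normalizing constant c(λ, χ, ξ) of this GIG-type marginal can be obtained numerically; a minimal sketch (written with the integrable +ξ/x sign convention in the exponent, and with illustrative parameter values, not thesis data):

```python
import numpy as np

def gh_marginal_unnorm(x, lam, chi, xi):
    """Unnormalized GIG-type marginal x^(lam-1) exp(-(chi*x + xi/x)/2), x > 0."""
    return x**(lam - 1.0) * np.exp(-0.5*(chi*x + xi/x))

x = np.linspace(1e-6, 50.0, 200001)
lam, chi, xi = -0.5, 1.0, 1.0          # lam = -1/2: the TalphaS-subordinator case
Z = np.trapz(gh_marginal_unnorm(x, lam, chi, xi), x)   # 1/c(lam, chi, xi)
p = gh_marginal_unnorm(x, lam, chi, xi) / Z
```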

• Other UQ methods: Multi-level MC can be implemented for SPDEs driven by Levy jump processes [39, 40] and compared with PCM and the generalized FP equations.

• Application in climate modeling: SPDEs driven by Levy jump processes can be effective in climate modeling and mathematical finance.

– Problem description

The Chafee-Infante (C-I) equation is a nonlinear reaction-diffusion equation describing equator-to-pole heat transfer, heat absorption, and diffusion, as a prototype of the Energy Balance Model (EBM):

dX_t = [∂²X_t/∂x² − U′(X_t(x))] dt + ε dL_t(x), x ∈ [0, 1], (7.1)

where X_t(0) = X_t(1) = 0 and X_0(x) = f(x). Here U(u) = λ(u^4/4 − u^2/2). Human activities are modeled by multi-dimensional Levy jump processes, whose dependence structure between components is described by Levy copulas. Theoretically, the asymptotic transition time between the two stable states was studied by Peter Imkeller [79].
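A minimal Python sketch of a time step for Equation (7.1), assuming a simple finite-difference Laplacian, an explicit Euler step, and a compound-Poisson stand-in for the jump increments (none of which are the schemes proposed above; they only illustrate the structure of the problem):

```python
import numpy as np

def ci_step(X, dt, dL, lam, dx):
    """One explicit Euler step of Eq. (7.1) with homogeneous Dirichlet BCs."""
    lap = np.zeros_like(X)
    lap[1:-1] = (X[2:] - 2.0*X[1:-1] + X[:-2]) / dx**2
    Uprime = lam*(X**3 - X)            # U'(u) for U(u) = lam*(u^4/4 - u^2/2)
    Xn = X + (lap - Uprime)*dt + dL
    Xn[0] = Xn[-1] = 0.0               # Dirichlet boundary conditions
    return Xn

rng = np.random.default_rng(2)
N, lam, eps = 101, 1.0, 0.1
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 2e-5                              # dt/dx^2 = 0.2 for explicit stability
X = np.sin(np.pi*x)                    # a hypothetical initial condition f(x)
for _ in range(100):
    # compound-Poisson stand-in for eps*dL_t (not the thesis noise model)
    dL = eps * rng.poisson(10.0*dt, N) * rng.standard_normal(N) * 0.01
    X = ci_step(X, dt, dL, lam, dx)
```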

– Goals

∗ Simulating the moment statistics of the C-I equation by gPC or PCM

(as spectral methods): Lt can be represented by independent RVs

from a series representation (similar to Karhunen-Loeve expansion

for Gaussian processes).

∗ We can decompose the C-I equation into a system of SODEs driven by correlated Levy processes. The joint probability density function (PDF) of the SODE system can be simulated through a generalized FP equation. With this joint PDF, moment statistics of the solution can be computed.

∗ Simulating the statistics of the transition time between stable states

by parareal algorithms.
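The series-representation idea in the first goal can be illustrated in its simplest, compound-Poisson case, where the path of L_t on [0, T] is determined by finitely many independent RVs: a Poisson count, uniform jump times, and i.i.d. jump sizes. This is only a sketch of the idea; the general Levy case uses Rosiński-type series with infinitely many terms truncated in practice.

```python
import numpy as np

def compound_poisson_path(T, rate, jump_sampler, t_grid, rng):
    """Sample L_t on a time grid from finitely many independent RVs:
    N ~ Poisson(rate*T), jump times ~ Uniform(0, T), i.i.d. jump sizes."""
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n))
    sizes = jump_sampler(n, rng)
    return np.array([sizes[times <= t].sum() for t in t_grid])

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 2.0, 11)
# Averaging many paths: E[L_T] = rate * T * E[jump size] (here 3 * 2 * 1 = 6)
paths = np.array([
    compound_poisson_path(2.0, 3.0, lambda n, r: r.normal(1.0, 0.5, n),
                          t_grid, rng)
    for _ in range(4000)
])
```

Because each path is a deterministic function of a finite vector of RVs, gPC or PCM can be applied in those variables, exactly the way a truncated Karhunen-Loeve expansion parameterizes a Gaussian field.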

– Difficulties

∗ The SODE system can be highly coupled and nonlinear. I would like to develop a parameterized hierarchical approximation procedure (similar to the WM approximation in this thesis) to linearize the system.

∗ If the stable Levy process has a large number of components, the SODE system can be very large. If, at the same time, the Levy process has little isotropy in its Levy measure, this stochastic system is high-dimensional.

• Application in mathematical finance:

– Problem description

We consider standard European options as a risk-neutral model in an incomplete market, for a stock price driven by a Levy process Lt ∈ R^d:

S_t = S_0 e^(µt + L_t).   (7.2)

I am interested in computing the CGMY model as a pure jump model. The marginal law of the i-th component of the Levy measure of L_t is a tempered α-stable distribution. We may consider the Levy measure of L_t to be isotropic, via LePage's radial decomposition, or anisotropic, via the Clayton family of Levy copulas.
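For reference, the Clayton family of (positive) Levy copulas combines marginal tail integrals u, v into a joint tail integral with a single dependence parameter θ > 0; larger θ means stronger jump dependence, approaching complete dependence min(u, v) as θ → ∞. A minimal sketch:

```python
def clayton_levy_copula(u, v, theta):
    """Clayton Levy copula F(u, v) = (u^-theta + v^-theta)^(-1/theta),
    applied to marginal tail integrals u, v > 0, with theta > 0."""
    return (u ** (-theta) + v ** (-theta)) ** (-1.0 / theta)

# F never exceeds either marginal tail integral, and large theta
# pushes F toward min(u, v) (complete jump dependence).
```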

– Goals

∗ Simulate the option price Ct = C(t, St) from partial integro-differential equations (PIDEs).

· The PIDE in CGMY models will be a tempered fractional PDE (TFPDE).

∗ Simulate the self-financing hedging strategy (φt ∈ Rd) (the portfolio).

· We choose a pricing rule given by a risk-neutral measure Q.

· How does the hedging portfolio φt depend on the dependence structure between components of the d-dimensional Levy measure?

∗ Simulation of the hedging error (risk).

· Compute the moment statistics of the hedging error by FEM and the FP equation for the hedging error.
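As a Monte Carlo baseline for the pricing goal, the sketch below prices a European call under a one-dimensional instance of the exponential Levy model (7.2). For tractability the Levy part is assumed to be compound Poisson with normal jumps rather than CGMY, and the drift µ is corrected so that the discounted price is a martingale under the chosen measure Q; all parameter values are illustrative.

```python
import numpy as np

def mc_call_price(S0, K, T, r, rate, m, s, n_paths, rng):
    """European call under S_T = S0 * exp(mu*T + L_T), where L_T is
    compound Poisson with Normal(m, s^2) jumps; mu is the martingale
    (risk-neutral) drift so that E_Q[exp(-r*T) * S_T] = S0."""
    mu = r - rate * (np.exp(m + 0.5 * s**2) - 1.0)  # drift correction
    n = rng.poisson(rate * T, n_paths)              # jump counts per path
    # sum of n Normal(m, s^2) jumps is Normal(m*n, s^2*n)
    L_T = m * n + s * np.sqrt(n) * rng.standard_normal(n_paths)
    S_T = S0 * np.exp(mu * T + L_T)
    payoff = np.maximum(S_T - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

rng = np.random.default_rng(0)
price = mc_call_price(100.0, 100.0, 1.0, 0.05, 1.0, 0.0, 0.2, 200_000, rng)
```

A PIDE or FP-equation solver for the same model would be validated against this kind of Monte Carlo estimate, and hedging-error statistics can be obtained by simulating the rebalanced portfolio along the same paths.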


Bibliography

[1] R. A. Adams, Sobolev spaces, Boston, MA: Academic press (1975).

[2] S. Albeverio, J.-L. Wu, T.-S. Zhang, Stochastic Processes and their Applications, 74 (1998), pp. 21–36.

[3] P. Arbenz, Bayesian copulae distributions, with application to operational risk management–some comments, Methodology and Computing in Applied Probability, 15 (1) (2013), pp. 105–108.

[4] G. Arfken, Mathematical Methods for Physicists, 3rd ed., Orlando, FL, Academic Press, 1985.

[5] P.L. Artes, J.S. Dehesa, A. Martinez-Finkelshtein, J. Sanchez-Ruiz, Linearization and connection coefficients for hypergeometric-type polynomials, Journal of Computational and Applied Mathematics, 99 (1998), pp. 5–26.

[6] R. Askey, J. Wilson, Some basic hypergeometric polynomials that generalize Jacobi polynomials, Memoirs Amer. Math. Soc., AMS, Providence, RI, (1985), pp. 319.

[7] S. Asmussen, J. Rosiński, Approximations of small jumps of Levy processes with a view towards simulation, J. Appl. Probab., 38 (2001), pp. 482–493.

[8] K. E. Atkinson, An introduction to numerical analysis, John Wiley and Sons, Inc., (1989).

[9] I. Babuska, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Review, 52 (2) (2010), pp. 317–355.

[10] I. Babuska, R. Tempone, and G. E. Zouraris, Galerkin finite element approximations of stochastic elliptic differential equations, SIAM J. Numer. Analysis, 42 (2) (2004), pp. 800–825.

[11] B. Baeumer, M. Meerschaert, Tempered stable Levy motion and transient super-diffusion, J. Comput. Appl. Math., 233 (10) (2010), pp. 2438–2448.


[12] V. Bally, E. Pardoux, Malliavin calculus for white noise driven SPDEs, Potential Anal., 9 (1) (1998), pp. 27–64.

[13] O. E. Barndorff-Nielsen and P. Blaesild, Hyperbolic distributions, Encyclopedia of Statistical Sciences (S. Kotz, C. B. Read, and D. L. Banks, Eds.), Wiley, New York, Vol. 3 (1987), pp. 700–707.

[14] O.E. Barndorff-Nielsen, N. Shephard, Non-Gaussian Ornstein-Uhlenbeck-based models and some of their uses in financial economics, J. R. Statist. Soc. B, 63 (2001), pp. 1–42.

[15] O.E. Barndorff-Nielsen, N. Shephard, Normal modified stable processes, Theory Probab. Math. Statist., 65 (2002), pp. 1–20.

[16] O. Barndorff-Nielsen, Processes of normal inverse Gaussian type, Finance Stoch., 2 (1998), pp. 41–68.

[17] G.K. Batchelor, A.A. Townsend, The nature of turbulent motion at large wave-numbers, Proc. Roy. Soc. Lond. A, 199 (1949), pp. 238–255.

[18] D. Bell, The Malliavin calculus, Dover, (2007).

[19] V.V. Beloshapkin, A.A. Cherinkov, M.Ya. Natenzon, B.A. Petrovichev, R.Z. Sagdeev, G.M. Zaslavsky, Chaotic streamlines in pre-turbulent states, Nature, 337 (1989), pp. 133–137.

[20] F.E. Benth, A. Løkka, Anticipative calculus for Levy processes and stochastic differential equations, Stochastics and Stochastics Reports, 76 (2004), pp. 191–211.

[21] J. Bertoin, Levy Processes, Cambridge University Press, 1996.

[22] M.L. Bianchi, S.T. Rachev, Y.S. Kim, F.J. Fabozzi, Tempered Stable Distributions and Processes in Finance: Numerical Analysis, Mathematical and Statistical Methods for Actuarial Sciences and Finance, Springer, 2010.

[23] M. Bieri, C. Schwab, Sparse high order FEM for elliptic sPDEs, Tech. Report 22, ETH, Switzerland, (2008).

[24] D. Boley, G. H. Golub, A survey of matrix inverse eigenvalue problems, Inverse Problems, 3 (1987), pp. 595–622.

[25] L. Bondesson, On simulation from infinitely divisible distributions, Adv. Appl. Prob., 14 (1982), pp. 855–869.

[26] S. Boyarchenko, S. Levendorskii, Option pricing for truncated Levy processes, Inter. J. Theor. Appl. Fin., 3 (2000), pp. 549–552.

[27] R. Cameron, W. Martin, The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals, Ann. Math., 48 (1947), pp. 385.

[28] O. Cardoso, P. Tabeling, Anomalous diffusion in a linear system of vortices, Eur. J. Mech. B/Fluids, 8 (1989), pp. 459–470.


[29] P. Carr, H. Geman, D.B. Madan, M. Yor, The fine structure of asset returns: An empirical investigation, J. Business, 75 (2002), pp. 303–325.

[30] P. Carr, H. Geman, D.B. Madan, M. Yor, Stochastic volatility for Levy processes, Math. Finance, 13 (2003), pp. 345–382.

[31] A.S. Chaves, A fractional diffusion equation to describe Levy flights, Phys. Letters A, 239 (1998), pp. 13–16.

[32] A.A. Chernikov, R.Z. Sagdeev, D.A. Usikov, G.M. Zaslavsky, The Hamiltonian method for quasicrystal symmetry, Physics Letters A, 125 (1987), pp. 101–106.

[33] T. S. Chihara, An Introduction to Orthogonal Polynomials, Mathematics and its Applications 13, New York: Gordon and Breach Science Publishers (1978).

[34] A. Compte, Stochastic foundations of fractional dynamics, Phys. Rev. E, 53 (1996), pp. 4191–4193.

[35] R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Press, 2004.

[36] D. Coppersmith, S. Winograd, Matrix multiplication via arithmetic progressions, Journal of Symbolic Computation, 9 (3) (1990), pp. 251.

[37] J. Dalibard, Y. Castin, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett., 68 (1992), pp. 580–583.

[38] S.I. Denisov, W. Horsthemke, P. Hanggi, Generalized Fokker-Planck equation: Derivation and exact solutions, Eur. Phys. J. B, 68 (2009), pp. 567–575.

[39] S. Dereich, Multilevel Monte Carlo algorithms for Levy-driven SDEs with Gaussian correction, Ann. Appl. Probab., 21 (1) (2011), pp. 283–311.

[40] S. Dereich, F. Heidenreich, A multilevel Monte Carlo algorithm for Levy-driven stochastic differential equations, Stochastic Processes and their Applications, 121 (7) (2011), pp. 1565–1587.

[41] L. Devroye, Non-Uniform Random Variate Generation, Springer: New York, 1986.

[42] R. L. Dobrushin, R. A. Minlos, Polynomials in linear random functions, Russian Math. Surveys, 32 (1977), pp. 71–127; Uspekhi Mat. Nauk, 32 (1977), pp. 67–122.

[43] S. Emmer, C. Kluppelberg, Optimal portfolios when stock prices follow an exponential Levy process, Finance and Stochastics, 8 (2004), pp. 17–44.

[44] L. C. Evans, Partial differential equations, AMS-Chelsea (1998).

[45] R. Erban, S. J. Chapman, Stochastic modeling of reaction diffusion processes: algorithms for bimolecular reactions, Physical Biology, 6 (4), 046001, (2009).


[46] A. Erdelyi, W. Magnus, F. Oberhettinger, F.G. Tricomi, Higher transcendental functions, Vol. 2, New York: Krieger, (1981), pp. 226.

[47] K.T. Fang, Elliptically contoured distributions, Encyclopedia of Statistical Sciences (S. Kotz, C. B. Read, and D. L. Banks, Eds.), Wiley, New York, Vol. 1 (1997), pp. 212–218.

[48] J. Favard, Sur les polynomes de Tchebicheff, C. R. Acad. Sci., Paris (in French), 200 (1935), pp. 2052–2053.

[49] T.S. Ferguson, M.J. Klass, A representation of independent increment processes without Gaussian components, Ann. Math. Statist., 43 (1972), pp. 1634–1643.

[50] H. J. Fischer, On the condition of orthogonal polynomials via modified moments, Z. Anal. Anwendungen, 15 (1996), pp. 1–18.

[51] H. J. Fischer, On generating orthogonal polynomials for discrete measures, Z. Anal. Anwendungen, 17 (1998), pp. 183–205.

[52] H.C. Fogedby, Langevin equations for continuous time Levy flights, Phys. Rev. E, 50 (1994), pp. 1657–1660.

[53] H.C. Fogedby, Levy flights in quenched random force fields, Phys. Rev. E, 58 (1998), pp. 1690.

[54] J. Foo, X. Wan, G. E. Karniadakis, A multi-element probabilistic collocation method for PDEs with parametric uncertainty: error analysis and applications, Journal of Computational Physics, 227 (2008), pp. 9572–9595.

[55] J.Y. Foo, G.E. Karniadakis, Multi-element probabilistic collocation in high dimensions, J. Comput. Phys., 229 (2009), pp. 1536–1557.

[56] G. E. Forsythe, Generation and use of orthogonal polynomials for data-fitting with a digital computer, J. Soc. Indust. Appl. Math., 5 (1957), pp. 74–88.

[57] P. Frauenfelder, C. Schwab, and R. A. Todor, Finite elements for elliptic problems with stochastic coefficients, Comput. Methods Appl. Mech. Engrg., 194 (2005), pp. 205–228.

[58] C.W. Gardiner, Handbook of Stochastic Methods, Springer-Verlag, 2nd edn., 1990.

[59] W. Gautschi, Construction of Gauss-Christoffel quadrature formulas, Math. Comp., 22 (102) (1968), pp. 251–270.

[60] W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Stat. Comp., 3 (3) (1982), pp. 289–317.

[61] A. Genz, A package for testing multiple integration subroutines, Numerical Integration: Recent Developments, Software and Applications (1987), pp. 337–340.


[62] H.U. Gerber, S.W. Elias, Option pricing by Esscher transforms, Transactions of the Society of Actuaries, 46 (1994), pp. 99–191.

[63] R. Ghanem and R. Kruger, Numerical solution of spectral stochastic finite element systems, Computer Meth. Appl. Mech. Engrg., 129 (1996), pp. 289–303.

[64] G. H. Golub, C. Van Loan, Matrix Computations, Johns Hopkins Univ. Press, (1983).

[65] G. H. Golub, J. H. Welsch, Calculation of Gauss quadrature rules, Mathematics of Computation, 23 (106) (1969), pp. 221–230.

[66] H. Grabert, Projection Operator Techniques in Nonequilibrium Statistical Mechanics, Springer Tracts in Modern Physics 95, Berlin: Springer-Verlag, 1982.

[67] M. Griebel, Sparse grids and related approximation schemes for higher dimensional problems, Foundations of Computational Mathematics (FoCM05), Santander, L. Pardo, A. Pinkus, E. Suli, and M. Todd, eds., Cambridge University Press (2006), pp. 106–161.

[68] G. Lin, L. Grinberg, G.E. Karniadakis, Numerical studies of the stochastic Korteweg-de Vries equation, J. Comput. Phys., 213 (2) (2006), pp. 676–703.

[69] M. Grothaus, Y.G. Kondratiev and G.F. Us, Wick calculus for regular generalized functions, Oper. Stoch. Equ., 7 (1999), pp. 263–290.

[70] E. J. Gumbel, Distributions des valeurs extremes en plusieurs dimensions, Publications de l'Institut de Statistique de l'Universite de Paris, 9 (1960), pp. 171–173.

[71] E. J. Gumbel, Bivariate logistic distributions, J. Am. Statist. Assoc., 56 (1961), pp. 335–349.

[72] F. Hayot, Levy walk in lattice-gas hydrodynamics, Physical Review A, 43 (1991), pp. 806–810.

[73] J.S. Hesthaven, S. Gottlieb, D. Gottlieb, Spectral Methods for Time-dependent Problems, New York: Cambridge University Press, 2007.

[74] T. Hida and N. Ikeda, Analysis on Hilbert space with reproducing kernel arising from multiple Wiener integral, Proc. Fifth Berkeley Symp. on Math. Statist. and Prob., 2 (1967), pp. 117–143.

[75] N. Hilber, O. Reichmann, Ch. Schwab, Ch. Winter, Computational Methods for Quantitative Finance: Finite Element Methods for Derivative Pricing, Springer Finance, 2013.

[76] H. Holden, B. Oksendal, J. Uboe and T. Zhang, Stochastic partial differential equations, second edition, (2010), Springer.

[77] W. Horsthemke, R. Lefever, Noise-induced Transitions: Theory and Applications in Physics, Chemistry, and Biology, 2nd ed., Springer Series in Synergetics, Springer, 2006.


[78] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North Holland Publ. Co., 1981.

[79] P. Imkeller, Energy balance models - viewed from stochastic dynamics, Stochastic Climate Models, Basel: Birkhäuser, Prog. Probab., 49 (2001), pp. 213–240.

[80] K. Ito, Stochastic differential equations in a differentiable manifold, Nagoya Math. J., 1 (1950), pp. 35–47.

[81] K. Ito, Stochastic integral, Proc. Imperial Acad., 20 (1944), pp. 519–524.

[82] K. Ito, Spectral type of the shift transformation of differential processes and stationary increments, Trans. Am. Math. Soc., 81 (1956), pp. 253–263.

[83] S. Janson, Gaussian Hilbert spaces, Cambridge University Press.

[84] D. R. Jensen, Multivariate distributions, Encyclopedia of Statistical Sciences (S. Kotz, C. B. Read, and D. L. Banks, Eds.), Wiley, New York, Vol. 6 (1985), pp. 4355.

[85] S. Jespersen, R. Metzler, H.C. Fogedby, Levy flights in external force fields: Langevin and fractional Fokker–Planck equations, and their solutions, Phys. Rev. E, 59 (1999), pp. 2736–2745.

[86] H. Joe, Parametric families of multivariate distributions with given margins, J. Multivariate Anal., 46 (1993), pp. 262–282.

[87] H. Joe, Multivariate Models and Dependence Concepts, Chapman & Hall, London, 1997.

[88] L. Johnson and S. Kotz, Distributions in Statistics: Continuous Multivariate Distributions, Wiley, New York, 1972.

[89] C. Jordan, Calculus of finite differences, 3rd ed., New York: Chelsea, (1965), pp. 473.

[90] S. Kaligotla and S.V. Lototsky, Wick product in the stochastic Burgers equation: a curse or a cure?, Asymptotic Analysis, 75 (2011), pp. 145–168.

[91] J. Kallsen, P. Tankov, Characterization of dependence of multidimensional Levy processes using Levy copulas, Journal of Multivariate Analysis, 97 (2006), pp. 1551–1572.

[92] K. Karhunen, Uber lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fennicae. Ser. A.I. Math.-Phys., 37 (1947), pp. 1–79.

[93] G. E. Karniadakis, S. Sherwin, Spectral/hp element methods for computational fluid dynamics, Oxford University Press, second edition, (2005), pp. 597–598.

[94] A.Y. Khintchine, Zur Theorie der unbeschrankt teilbaren Verteilungsgesetze, Mat. Sbornik, 2 (1937), pp. 79–119.


[95] D. Kim, D. Stanton and J. Zeng, The combinatorics of the Al-Salam-Chihara q-Charlier polynomials, Seminaire Lotharingien de Combinatoire, 54 (2006), Article B54i.

[96] A.N. Kolmogorov, Dissipation of energy in the locally isotropic turbulence, Akad. Nauk SSSR, 31 (1941), pp. 538–540.

[97] I. Koponen, Analytic approach to the problem of convergence of truncated Levy flights towards the Gaussian stochastic process, Phys. Rev. E, 52 (1995), pp. 1197–1199.

[98] S. Kou, A jump-diffusion model for option pricing, Management Science, 48 (2002), pp. 1086–1101.

[99] S. Kusuoka and D. Stroock, Applications of Malliavin Calculus I, Stochastic Analysis, Proceedings Taniguchi International Symposium Katata and Kyoto 1982, (1981), pp. 271–306.

[100] S. Kusuoka and D. Stroock, Applications of Malliavin Calculus II, J. Faculty Sci. Uni. Tokyo Sect. 1A Math., 32 (1985), pp. 1–76.

[101] S. Kusuoka and D. Stroock, Applications of Malliavin Calculus III, J. Faculty Sci. Uni. Tokyo Sect. 1A Math., 34 (1987), pp. 391–442.

[102] A. Lanconelli and L. Sportelli, A connection between the Poissonian Wick product and the discrete convolution, Communications on Stochastic Analysis, 5 (4), pp. 689–699.

[103] P. Langevin, Sur la théorie du mouvement brownien, C.R. Acad. Sci. Paris, 146 (1908), pp. 530–533.

[104] R. LePage, Multidimensional infinitely divisible variables and processes. II, Lecture Notes Math. 860, Springer-Verlag, (1980), pp. 279–284.

[105] C. Li, W. Deng, High order schemes for the tempered fractional diffusion equations, Advances in Computational Mathematics, submitted.

[106] D. Linders, W. Schoutens, Basket option pricing and implied correlation in a Levy copula model, Research report, AFI-1494, FEB KU Leuven, (2014).

[107] M. Loeve, Probability theory, Vol. II, 4th ed., Graduate Texts in Mathematics 46, Springer-Verlag, (1978).

[108] A. Løkka, B. Øksendal, F. Proske, Stochastic partial differential equations driven by Levy space-time white noise, Annals Appl. Probab., 14 (2004), pp. 1506–1528.

[109] A. Løkka, F. Proske, Infinite dimensional analysis of pure jump Levy processes on the Poisson space, Math. Scand., 98 (2006), pp. 237–261.

[110] S.V. Lototsky, B.L. Rozovskii, and D. Selesi, On generalized Malliavin calculus, Stochastic Processes and their Applications, 122 (3) (2012), pp. 808–843.


[111] D. Lucor, D. Xiu, G. Karniadakis, Spectral representations of uncertainty in simulations: Algorithms and applications, ICOSAHOM-01, Uppsala, Sweden, June 11-15, 2001.

[112] D. Madan, P. Carr, E. Chang, The variance gamma process and option pricing, European Finance Review, 2 (1998), pp. 79–105.

[113] P. Malliavin and A. Thalmaier, Stochastic Calculus of Variations in Mathematical Finance, Springer, (1987).

[114] B.B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Company, San Francisco, 1982.

[115] B.B. Mandelbrot, Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier, J. Fluid Mech., 62 (1974), pp. 331–358.

[116] B. Mandelbrot, The variation of certain speculative prices, The Journal of Business, 36 (4) (1963), pp. 394–419.

[117] P. Manneville, Y. Pomeau, Intermittent transition to turbulence in dissipative dynamical systems, Comm. Math. Phys., 74 (1980), pp. 189–197.

[118] R.N. Mantegna, H.E. Stanley, Stochastic process with ultraslow convergence to a Gaussian: The truncated Levy flight, Phys. Rev. Lett., 73 (1994), pp. 2946–2949.

[119] K. V. Mardia, Families of Bivariate Distributions, Griffin, London, 1970.

[120] F.J. Massey, The Kolmogorov-Smirnov test for goodness of fit, Journal of the American Statistical Association, 46 (253) (1951), pp. 68–78.

[121] M.M. Meerschaert, C. Tadjeran, Finite difference approximations for fractional advection-dispersion flow equations, Journal of Computational and Applied Mathematics, 172 (2004), pp. 65–77.

[122] M.M. Meerschaert, A. Sikorskii, Stochastic Models for Fractional Calculus, De Gruyter Studies in Mathematics Vol. 43, 2012.

[123] R. Mikulevicius and B.L. Rozovskii, On unbiased stochastic Navier-Stokes equations, Probab. Theory Relat. Fields, (2011), pp. 1–48.

[124] A.S. Monin, A.M. Yaglom, Statistical Fluid Mechanics: Mechanics of Turbulence, MIT Press, Cambridge, Vols. 1 and 2, 1971 and 1975.

[125] M. E. Muller, A note on a method for generating points uniformly on N-dimensional spheres, Comm. Assoc. Comput. Mach., 2 (1959), pp. 19–20.

[126] R. Nelsen, An Introduction to Copulas, Springer, New York, 2006.

[127] P. G. Nevai, Orthogonal polynomials, Mem. Amer. Math. Soc., 18 (1979), No. 213.


[128] E. Novak, K. Ritter, High dimensional integration of smooth functions over cubes, Numer. Math., 75 (1996), pp. 79–97.

[129] E. Novak, K. Ritter, Simple cubature formulas with high polynomial exactness, Constructive Approx., 15 (1999), pp. 499–522.

[130] E.A. Novikov, Infinitely divisible distributions in turbulence, Phys. Rev. E, 50 (1994), pp. 3303–3305.

[131] D. Nualart, The Malliavin calculus and related topics, 2nd ed., Springer-Verlag, (2006).

[132] D. Nualart, Malliavin calculus and its applications (CBMS Regional Conference Series in Mathematics), Vol. 110, American Mathematical Society (2009).

[133] G.D. Nunno, B. Øksendal, F. Proske, Malliavin Calculus for Levy Processes with Applications to Finance, Springer, 2009.

[134] G.D. Nunno, B. Øksendal, F. Proske, White noise analysis for Levy processes, J. Funct. Anal., 206 (2004), pp. 109–148.

[135] E.A. Novikov, Infinitely divisible distributions in turbulence, Phys. Rev. E, 50 (1994), pp. R3303–R3305.

[136] B.K. Oksendal, Stochastic Differential Equations: an Introduction with Applications, Springer, 6th Edition, 2003.

[137] B. Øksendal, Stochastic partial differential equations driven by multi-parameter white noise of Levy processes, Quart. Appl. Math., 66 (2008), pp. 521–537.

[138] B. Øksendal, F. Proske, White noise of Poisson random measure, Potential Analysis, 21 (2004), pp. 375–403.

[139] S. Oladyshkin, W. Nowak, Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion, Reliability Engineering & System Safety, 106 (2012), pp. 179–190.

[140] E. Pardoux, T. Zhang, Absolute continuity for the law of the solution of a parabolic SPDE, J. Funct. Anal., 112 (1993), pp. 447–458.

[141] S. B. Park, J.-H. Kim, Integral evaluation of the linearization coefficients of the product of two Legendre polynomials, J. Appl. Math. and Computing, 20 (1–2) (2006), pp. 623–635.

[142] E. Platen, N. Bruti-Liberati, Numerical Solutions of SDEs with Jumps in Finance, Springer, 2010.

[143] I. Podlubny, Fractional Differential Equations, Academic Press, New York, 1999.

[144] J. Poirot, P. Tankov, Monte Carlo option pricing for tempered stable (CGMY) processes, Asia-Pacific Financial Markets, 13 (4) (2006), pp. 327–344.


[145] Y. Pomeau, A. Pumir, W.R. Young, Transient effects in advection-diffusion of impurities, Comptes Rendus de l'Academie des Sciences Serie II, 306 (1988), pp. 741–746.

[146] Y. Pomeau and P. Manneville, Intermittent transition to turbulence in dissipative dynamical systems, Commun. Math. Phys., 74 (1980), pp. 189–197.

[147] A.L. Porta, G.A. Voth, A.M. Crawford, J. Alexander, E. Bodenschatz, Fluid particle accelerations in fully developed turbulence, Nature, 409 (2001), pp. 1017–1019.

[148] K. Prause, The Generalized Hyperbolic Model: Estimation, Financial Derivatives, and Risk Measures, Dissertation, Universitat Freiburg i. Br., 1999.

[149] P.E. Protter, Stochastic Integration and Differential Equations, Springer, New York, Second Edition, 2005.

[150] Q. I. Rahman, G. Schmeisser, Analytic theory of polynomials, London Mathematical Society Monographs, New Series 26, Oxford: Oxford University Press (2002), pp. 15–16.

[151] H. Risken, The Fokker-Planck Equation, Springer-Verlag, Berlin, 2nd edn., 1989.

[152] S. Roman, The Poisson-Charlier polynomials, The Umbral Calculus, New York: Academic Press, (1984), pp. 119–122.

[153] J. Rosiński, Series representations of Levy processes from the perspective of point processes, in: Levy Processes - Theory and Applications, O. E. Barndorff-Nielsen, T. Mikosch and S. I. Resnick (Eds.), Birkhäuser, Boston, (2001), pp. 401–415.

[154] J. Rosiński, On series representations of infinitely divisible random vectors, Ann. Probab., 18 (1990), pp. 405–430.

[155] J. Rosiński, Series representations of infinitely divisible random vectors and a generalized shot noise in Banach spaces, University of North Carolina Center for Stochastic Processes, Technical Report No. 195, (1987).

[156] J. Rosiński, Tempering stable processes, Stoch. Proc. Appl., 117 (2007).

[157] K. Sato, Levy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, 1999.

[158] D. Schertzer, M. Larcheveque, J. Duan, V. Yanovsky, S. Lovejoy, Fractional Fokker–Planck equation for nonlinear stochastic differential equations driven by non-Gaussian Levy stable noises, J. Math. Phys., 42 (2001), pp. 200–212.

[159] M.F. Shlesinger, J. Klafter, B.J. West, Levy walks with applications to turbulence and chaos, Physica, 140A (1986), pp. 212–218.

[160] M.F. Shlesinger, B.J. West, J. Klafter, Levy dynamics of enhanced diffusion: Application to turbulence, Phys. Rev. Lett., 58 (1987), pp. 1100–1103.


[161] M.F. Shlesinger, G.M. Zaslavsky, J. Klafter, Strange kinetics, Nature, 363 (1993), pp. 31–37.

[162] J. Shohat, Sur les polynomes orthogonaux generalises, C.R. Acad. Sci., Paris, 207 (1938), pp. 556–558.

[163] A. Sklar, Fonctions de repartition a n dimensions et leurs marges, Publ. Inst. Statist. Univ. Paris, 8 (1959), pp. 229–231.

[164] S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl., 4 (1963), pp. 240–243.

[165] T. Solomon, E. Weeks, H. Swinney, Chaotic advection in a two-dimensional flow: Levy flights and anomalous diffusion, Physica D, 76 (1994), pp. 70–84.

[166] T. Solomon, E. Weeks, H. Swinney, Observation of anomalous diffusion and Levy flights in a two-dimensional rotating flow, Physical Review Letters, 71 (1993), pp. 3975–3979.

[167] R.L. Stratonovich, Topics in the Theory of Random Noise, Vol. 1, Gordon and Breach, New York, 1963.

[168] X. Sun, J. Duan, Fokker-Planck equations for nonlinear dynamical systems driven by non-Gaussian Levy processes, J. Math. Phys., 53 (2012), 072701.

[169] G. Szego, Orthogonal polynomials, 4th ed., Providence, RI: Amer. Math. Soc., (1975), pp. 34–35.

[170] H. Takayasu, Stable distribution and Levy process in fractal turbulence, Progress of Theoretical Physics, 72 (1984), pp. 471–479.

[171] R. A. Todor and C. Schwab, Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients, IMA J. Numer. Anal., 27 (2) (2007), pp. 232–261.

[172] W. Trench, An algorithm for the inversion of finite Toeplitz matrices, SIAM J. Control Optim., 12 (1964), pp. 512–522.

[173] E. I. Tzenova, I.J.B.F. Adan, V. G. Kulkarni, Fluid models with jumps, Stochastic Models, 21 (1) (2005), pp. 37–55.

[174] W.N. Venables, B.D. Ripley, Modern Applied Statistics with S, Springer, 4th edition, 2002.

[175] P. Vedula, P.K. Yeung, Similarity scaling of acceleration and pressure statistics in numerical simulations of isotropic turbulence, Phys. Fluids, 11 (1999), pp. 1208–1220.

[176] D. Venturi, X. Wan, R. Mikulevicius, B.L. Rozovskii, G.E. Karniadakis, Wick-Malliavin approximation to nonlinear stochastic PDEs: analysis and simulations, Proceedings of the Royal Society, vol. 469, no. 2158, (2013).


[177] J. A. Viecelli, Dynamics of two-dimensional turbulence, Physics of Fluids A: Fluid Dynamics (1989-1993), 2 (1990), pp. 2036–2045.

[178] G.A. Voth, K. Satyanarayan, E. Bodenschatz, Lagrangian acceleration measurements at large Reynolds numbers, Phys. Fluids, 10 (1998), pp. 2268.

[179] J.B. Walsh, A stochastic model for neural response, Adv. Appl. Probab., 13 (1981), pp. 231–281.

[180] J.B. Walsh, An Introduction to Stochastic Partial Differential Equations, Lecture Notes in Mathematics, Vol. 1180, Springer, Berlin, 1986.

[181] X. Wan, G. E. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, J. Comput. Phys., 209 (2) (2005), pp. 617–642.

[182] X. Wan, G.E. Karniadakis, Long-term behavior of polynomial chaos in stochastic flow simulations, Computer Methods in Applied Mechanics and Engineering, 195 (2006), pp. 5582–5596.

[183] G.C. Wick, The evaluation of the collision matrix, Phys. Rev., 80 (2) (1950), pp. 268–272.

[184] D. Xiu, G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24 (2002), pp. 619–644.

[185] D. Xiu, J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM J. Scientific Computing, 27 (3) (2005), pp. 1118–1139.

[186] G. Yan and F. B. Hanson, Option pricing for a stochastic-volatility jump-diffusion model with log-uniform jump-amplitudes, Proceedings of the 2006 American Control Conference (2006), pp. 2989–2994.

[187] X. Yang, M. Choi, G. Lin, G.E. Karniadakis, Adaptive ANOVA decomposition of stochastic incompressible and compressible flows, Journal of Computational Physics, 231 (2012), pp. 1587–1614.

[188] V.V. Yanovsky, A.V. Chechkin, D. Schertzer, A.V. Tur, Levy anomalous diffusion and fractional Fokker–Planck equation, Physica A, 282 (2000), pp. 13–34.

[189] T. J. Ypma, Historical development of the Newton–Raphson method, SIAM Review, 37 (4) (1995), pp. 531–551.

[190] G.M. Zaslavsky, Fractional kinetic equation for Hamiltonian chaos, Physica D, 76 (1994), pp. 110–122.

[191] G.M. Zaslavskii, R.Z. Sagdeev, A.A. Chernikov, Stochastic nature of streamlines in steady-state flows, Zh. Eksp. Teor. Fiz., 94 (1988), pp. 102–115.

[192] M. Zheng, X. Wan, G.E. Karniadakis, Adaptive multi-element polynomial chaos with discrete measure: Algorithms and application to SPDEs, Applied Numerical Mathematics, 90 (2015), pp. 91–110.