NATIONAL TECHNICAL UNIVERSITY OF ATHENS
SCHOOL OF NAVAL ARCHITECTURE & MARINE ENGINEERING
Section of Naval & Marine Hydrodynamics

Stochastic Analysis with Applications to Dynamical Systems

by Themistoklis P. Sapsis

Diploma Thesis

Supervisor: G. A. Athanassoulis, Professor NTUA

Athens, September 2005



Preface

This thesis deals with the analysis of multidimensional nonlinear dynamical systems subject to general external stochastic excitation. The method used to address this problem is based on the Characteristic Functional. As is well known, the characteristic functional encodes the full probability structure of a stochastic process. Hence, knowledge of the characteristic functional yields every statistical quantity related to the stochastic process. For stochastic dynamical systems, the joint characteristic functional of the response and the excitation is governed by linear Functional Differential Equations, i.e. differential equations with Volterra derivatives. Equations of this kind appeared for the first time in the paper "Statistical Hydromechanics and Functional Calculus" by E. Hopf (1952), in the context of the analysis of turbulence. Although such an equation contains the full probabilistic structure of the system response, it does not by itself provide a straightforward method of solution; apparently, no practical methods have been found for the efficient solution of equations of this type.

The present work deals with two very important issues concerning Functional Differential Equations that describe stochastic dynamical systems. First, we show that apparently all known equations partially describing the probability structure of the response under specific assumptions (such as the Fokker-Planck equation, the Liouville equation, moment equations, etc.) can be derived directly from the Functional Differential Equation. Additionally, we derive some new (to the best of our knowledge) partial differential equations governing the characteristic functions of the response for systems under general stochastic excitation. At a second level, a new method is proposed for the solution of Functional Differential Equations. This method is related to the concepts of Kernel Probability Measures and Kernel Characteristic Functionals in infinite-dimensional spaces, which are defined and discussed extensively. In this way we propose a method for treating the Functional Differential Equations at the infinite-dimensional level, without any reduction to PDEs. A simplification of the proposed method is implemented and a numerical example is presented.

This thesis is divided into five chapters. In Chapter 1 we briefly present the basic background needed for our study. We recall some basic elements of set theory as well as the notions of probability measure and probability space. In the last section of the chapter we review the concept of a random variable and summarize some results concerning its characteristic properties. The first part of Chapter 2 presents the concept of a stochastic process, together with an extensive discussion of the Kolmogorov Theorem. This discussion will be very useful for Chapter 3, where we define the stochastic process using the measure-theoretic approach. The second part of Chapter 2 deals with some classifications of stochastic processes: they are categorized with respect to their memory, their regularity and their ergodic properties. Chapter 3 treats stochastic processes through the measure-theoretic approach. In the first two sections we discuss extensively the properties of a probability measure defined on a Banach space, as well as its finite-dimensional produced measures. In Sections 3.3-3.5 the characteristic functional is defined and studied. In particular, we study the connection of the characteristic functional with more conventional statistical quantities describing stochastic processes, such as moments,

characteristic functions, etc. Finally, in the last three sections of the chapter, we study specific cases of probability measures/characteristic functionals, such as the Gaussian and the Poisson. Special attention has been given to the characteristic functional introduced by V. Tatarskii, which yields as special (or asymptotic) cases almost every known characteristic functional. In Chapter 4, before we proceed to the general study of stochastic dynamical systems, we present the topic of mean-square calculus. Although classical, it is very important for a complete description of stochastic differential equations. Among the topics we examine, of special interest is the presentation of the integral of stochastic processes, where we distinguish the case of integration of a stochastic process with respect to another, independent stochastic process and the case of integration over martingales with non-anticipative dependence. At the end of the chapter we present some results and criteria for the sample-path properties of stochastic processes. In the beginning of Chapter 5 the notion of a stochastic differential equation is defined and some general theorems concerning the existence and uniqueness of its solutions are proved. Moreover, necessary conditions for the boundedness of all moments of the probability measure describing the system response are given. Section 5.2 deals with the formulation of functional differential equations for the joint characteristic functional describing (jointly) the probability structure of the excitation and the response. The formulation concerns stochastic systems described by nonlinear ODEs with polynomial nonlinearities. We distinguish between the case of a m.s.-continuous stochastic excitation and that of an orthogonal-increment excitation. Before the general results we present two specific systems, the Duffing oscillator and the van der Pol oscillator.

In Section 5.3 we show how the general functional differential equation describing the probability structure of a system can produce a family of partial differential equations for the characteristic functions of various orders. Special cases are the Fokker-Planck equation, as well as the Liouville equation. Of great importance are the partial differential equations proved in Section 5.3.4, which describe the probability structure of the system for the case of m.s.-continuous excitation. In Section 5.4 we introduce the notion of the kernel characteristic functional, which generalizes the idea of kernel density functions to infinite-dimensional spaces. We prove that the Gaussian characteristic functional has the kernel property, thus being an efficient tool for the study of FDEs. The proof is based on existing theorems for extreme values of stochastic processes. In Section 5.4.4 we discuss how kernel characteristic functionals can be used efficiently for the solution of functional differential equations. The basic idea is localization in the physical phase-space domain (probability measure), expressed in terms of the characteristic functional. It should be noted that Gaussian measures are of great importance in this direction, since most (if not all) of the existing analytical results concerning infinite-dimensional integrals pertain to Gaussian measures. In this way we derive a set of partial differential equations that governs the functional parameters of the kernel characteristic functionals. Finally, in Section 5.4.5, a simplified set of equations is derived on the basis of specific assumptions concerning the probability interchange between probability kernels. The latter equations are applied to the numerical study of a simple nonlinear ODE.

Acknowledgments

Acknowledgments are owed to everyone who helped and assisted me, each in his or her own way, during the elaboration of this thesis. However, let me take this opportunity to thank some people individually.

First, I would like to express my deep gratitude to the supervisor of this work, Prof. Gerassimos A. Athanassoulis, for his guidance, encouragement and support throughout the whole duration of my studies at the School of Naval Architecture and Marine Engineering of NTUA. I would like to thank him for bringing me in touch, through his inspiring teaching, with various mathematical problems, and especially with the one examined here. This thesis could never have been written, or even conceived, without all the stimulating discussions that we had over the last four years.

The valuable contributions of Mr. Panagiotis Gavriliadis, PhD candidate, through critical discussions, and of Mr. Agissilaos Athanassoulis, PhD candidate, through stimulating comments that led to a number of improvements, are greatly appreciated.

Finally, I have to thank my family, Panagiotis, Euaggelia and Pantelis Sapsis, as well as Kiriaki Kofiani, student of NTUA. This thesis could never have taken its current form without their continuous and unconditional love and support. This thesis is dedicated to them, with immense gratitude.

Th. Sapsis September 2005

Contents

Preface
Acknowledgments
Contents
List of Symbols

Chapter 1: BACKGROUND CONCEPTS FROM PROBABILITY AND ANALYSIS
1.1. Elements of Set Theory
1.1.1. Set Operations
1.1.2. Borel Fields
1.2. Axioms of Probability – Definition of Probability Measure
1.3. Random Variables
1.3.1. Distribution Functions and Density Functions
1.3.2. Moments
1.3.3. Characteristic Functions
1.3.4. Gaussian Random Variables and the Central Limit Theorem
1.3.5. Convergence of a Sequence of Random Variables
1.3.5.a. Mean-Square Convergence
1.3.5.b. Convergence in Probability
1.3.5.c. Almost Sure Convergence
1.3.5.d. Convergence in Law or Distribution
1.3.5.e. Relationships Between the Four Modes of Convergence
1.4. References

Chapter 2: STOCHASTIC PROCESSES. DEFINITION AND CLASSIFICATION
2.1. Basic Concepts of Stochastic Processes
2.1.1. Definition of a Continuous-Parameter Stochastic Process
2.1.2. Moments
2.1.3. Characteristic Functions
2.2. Classification of Stochastic Processes
2.2.1. Classification of Stochastic Processes Based upon Memory
2.2.1.a. Purely Stochastic Processes – Generalized Stochastic Processes
2.2.1.b. Markov Processes
2.2.1.c. Independent-Increment Processes
2.2.1.d. Martingales
2.2.2. Classification of Stochastic Processes Based upon Regularity
2.2.2.a. Stationary and Wide-Sense Stationary Processes
2.2.2.b. Spectral Densities and Correlation Functions
2.2.3. Classification of Stochastic Processes Based upon Ergodicity
2.2.4. Gaussian Processes
2.3. References

Chapter 3: PROBABILITY MEASURES AND THEIR CHARACTERISTIC FUNCTIONAL IN HILBERT (FUNCTION) SPACES
3.1. Probability Measures in Hilbert Spaces and Their Finite-Dimensional Relatives
3.1.1. Stochastic Processes as Random Variables Defined on Polish Spaces
3.1.2. Cylindric Sets
3.1.3. Restriction of a Probability Measure over a Polish Space to Finite-Dimensional Subspaces
3.1.4. Conditions for the Existence of Probability Measures over a Polish Space
3.2. Cylinder Functions and Their Integral
3.3. Characteristic Functional of the Probability Measure
3.4. Mean and Covariance Operator for Probability Measures
3.5. Moments from the Characteristic Functional
3.6. Characteristic Functions from the Characteristic Functional
3.7. Gaussian Measure
3.8. Tatarskii Characteristic Functional
3.9. Characteristic Functionals Derived from the Tatarskii Functional
3.10. References

Chapter 4: STOCHASTIC CALCULUS – PRINCIPLES AND RESULTS
4.1. Mean-Square Calculus of 2nd-Order Processes (Smooth and Non-Smooth)
4.1.1. Preliminaries
4.1.2. Mean-Square Convergence
4.1.3. Mean-Square Continuity
4.1.4. Mean-Square Differentiation
4.1.5. Mean-Square Integration I – Integration of a Stochastic Process with Respect to Another (Independent) Process
4.1.6. Mean-Square Integration II – Integration over Martingale Stochastic Processes with Non-Anticipative Dependence
4.1.6.a. A Formula for Changing Variables (Ito Formula)
4.1.6.b. Stochastic Differentials
4.2. Analytical Properties of Sample Functions
4.2.1. Sample Function Integration
4.2.2. Sample Function Continuity
4.2.3. Sample Function Differentiation
4.3. References

Chapter 5: PROBABILISTIC ANALYSIS OF THE RESPONSES OF DYNAMICAL SYSTEMS UNDER STOCHASTIC EXCITATION
5.1. Stochastic Differential Equations – General Theory
5.1.1. General Problems of the Theory of Stochastic Differential Equations
5.1.2. The Existence and Uniqueness Theorems of Stochastic Differential Equations
5.1.3. Bounds on the Moments of Solutions of Stochastic Differential Equations
5.1.4. Continuous Dependence on a Parameter of Solutions of Stochastic Equations
5.2. Hopf-Type Functional Differential Equations (FDEs) for the Characteristic Functional of Dynamical Systems
5.2.1. Duffing Oscillator
5.2.2. Van der Pol Oscillator
5.2.3. Non-Autonomous Dynamical Systems under Stochastic Excitation I – Case of m.s.-Continuous Excitation
5.2.4. Non-Autonomous Dynamical Systems under Stochastic Excitation II – Case of Independent-Increment Excitation
5.3. Reduction of the Hopf-Type FDE to PDEs for (Joint) Characteristic Functions or (Joint) Probability Densities
5.3.1. Systems with Random Initial Conditions
5.3.2. Moment Equations from the FDE
5.3.3. Excitation with Independent Increments – Fokker-Planck Equation
5.3.4. Partial Differential Equation for (N+M)-Dimensional Joint (Excitation-Response) Characteristic Functions in the Case of General m.s.-Continuous Excitation
5.4. Kernel Representations for the Characteristic Functional
5.4.1. Exact Formula for Integrals of Special Functionals
5.4.2. Integrals of Variations and of Derivatives of Functionals
5.4.3. Kernel Characteristic Functionals
5.4.4. Superposition of Kernel Characteristic Functionals for the Numerical Solution of FDEs
5.4.5. Simplification of the Set of FDEs Governing the Kernel Characteristic Functionals
5.5. References

Subjects for Future Research
Appendix I: Volterra Derivatives of the Gaussian Characteristic Functional
Appendix II: Optimization Algorithm
Appendix III: Application on Ship Stability and Capsizing in Random Seas
Index

List of Symbols

$\Omega$ – Space of all possible events of a random experiment
$\mathcal{U}(\Omega)$ – $\sigma$-field of subsets of $\Omega$
$\mathcal{P}$ – Probability measure on $\mathcal{U}(\Omega)$
$\mathcal{P}_X$ – Induced probability measure of the random variable/stochastic process $X$
$F_X$ – Probability distribution function of the random variable $X(\beta)$
$f_X$ – Probability density function of the random variable $X(\beta)$
$E^{\beta}(\cdot)$ – Mean value operator
$\phi_X$ – Characteristic function of the random variable $X(\beta)$
$N[m,\sigma]$ – Gaussian random variable with mean $m$ and variance $\sigma$
$I \subseteq \mathbb{R}$ – Time interval
$\Pi_H(\cdot)$ – Linear projection operator onto the subspace $H$
$D(I)$ – Space of real-valued functions on $I \subseteq \mathbb{R}$
$\mathfrak{m}$ – Space of random variables/stochastic processes with finite second moment
$C_c^{\infty}(I) \subset D(I)$ – Space of infinitely differentiable functions with compact support
$\mathcal{U}_t(\Omega)$ – Current (filtration) of $\sigma$-algebras
$S_{\rho}$ – Sphere with radius $\rho$
$\mathcal{Y}_X$ – Characteristic functional of the probability measure $\mathcal{P}_X$
$\langle \cdot , \cdot \rangle$ – Dual product
$S$ – Space of nuclear operators
$m_{\mathcal{P}}$ – Mean value element of the probability measure $\mathcal{P}$
$C_{\mathcal{P}}$ – Correlation operator of the probability measure $\mathcal{P}$
$V_2$ – Space of twice-differentiable functionals
$\delta \mathcal{Y}(x)[z]$ – Fréchet derivative of the functional $\mathcal{Y}$ at $x$ in the direction $z$
$\dfrac{\delta \mathcal{Y}}{\delta x(t)}$ – Volterra derivative of the functional $\mathcal{Y}(\cdot)$ at the point $t$

Chapter 1 Background Concepts from Probability

The mathematical theory of probability and the basic concepts of random variables form the theoretical background of the present work. The reader is assumed to be familiar with these concepts, so only some basic definitions and results are summarized in this chapter, mainly for establishing a consistent system of notation and for later reference. In Section 1.1 we recall some basic elements of set theory and give the definition of a Borel field, which will be used extensively in what follows. Section 1.2 deals with the notion of the probability measure: we present the three axioms of probability and discuss the completeness of the mathematical description of a random experiment. Finally, in the last section we give the definition of a general random variable, together with the associated distribution functions, density functions and characteristic functions. All analytical means used for describing its probability structure are defined and briefly discussed. For more detailed discussions the reader is referred to the literature, e.g. ATHANASSOULIS, G.A., Stochastic Modeling and Forecasting of Ship Systems, and FRISTEDT, B. & GRAY, L., A Modern Approach to Probability Theory.

1.1. Elements of Set Theory

Events and combinations of events occupy a central place in probability theory. The mathematics of events is closely tied to the theory of sets, and this section constitutes a summary of some elements of this theory.

A set is a collection of arbitrary objects. These objects are called elements of the set, and they can be of any kind with any specified properties. We may consider, for example, a set of numbers, a set of points, or a set of functions. A set containing no elements is called an empty set and is denoted by $\emptyset$. We distinguish between sets containing a finite number of elements and those having an infinite number; they are called, respectively, finite sets and infinite sets. An infinite set is called enumerable or countable if all its elements can be arranged in such a way that there is a one-to-one correspondence between them and all positive integers; thus, the set containing all positive integers $1, 2, \dots$ is a simple example of an enumerable set. A nonenumerable or noncountable set is one where the above-mentioned one-to-one correspondence cannot be established. A simple example of a nonenumerable set is the set of all points on a straight line segment.

If every element of a set $A$ is also an element of a set $B$, the set $A$ is called a subset of $B$, and this is represented symbolically by

$$A \subset B \tag{1}$$

It is clear that an empty set is a subset of any set. In the case where both $A \subset B$ and $B \subset A$ hold, the set $A$ is then equal to $B$, and we write

$$A = B \tag{2}$$

We now give meaning to a particular set we shall call space. In our development, we shall consider only sets which are subsets of a fixed (nonempty) set. This "largest" set, containing all elements of all the sets under consideration, is called the space, and it is denoted by the symbol $\Omega$. The class of all the sets in $\Omega$ is called the space of sets in $\Omega$.

Consider a subset $A$ in $\Omega$. The set of all elements in $\Omega$ which are not elements of $A$ is called the complement of $A$, and we denote it by $A'$. The following relations clearly hold:

$$\Omega' = \emptyset, \quad \emptyset' = \Omega, \quad (A')' = A$$

1.1.1. Set Operations

Let us now consider operations on sets $A, B, C, \dots$ which are subsets of the space $\Omega$. We are primarily concerned with addition, subtraction, and multiplication of these sets. The union or sum of $A$ and $B$, denoted by $A \cup B$, is the set of all elements belonging to $A$ or $B$ or both. The intersection or product of $A$ and $B$, written as $A \cap B$, is the set of all elements which are common to $A$ and $B$.

If $A \cap B = \emptyset$, the sets $A$ and $B$ contain no common elements, and we call $A$ and $B$ mutually exclusive or disjoint. The symbol "+" shall be reserved to denote the union of two disjoint sets.

The definitions of the union and the intersection can be directly generalized to those involving an arbitrary number (finite or countably infinite) of sets. Thus, the set

$$\bigcup_{j=1}^{n} A_j = A_1 \cup A_2 \cup \dots \cup A_n$$

stands for the set of all elements belonging to one or more of the sets $A_j$, $j = 1, 2, \dots, n$. The intersection

$$\bigcap_{j=1}^{n} A_j = A_1 \cap A_2 \cap \dots \cap A_n$$

is the set of all elements common to all $A_j$, $j = 1, 2, \dots, n$. The sets $A_j$, $j = 1, 2, \dots, n$, are called mutually exclusive if

$$A_i \cap A_j = \emptyset \quad \text{for every } i \neq j, \; i, j = 1, 2, \dots, n \tag{3}$$

It is easy to verify that the union and the intersection operations of sets are associative, commutative, and distributive, that is

$$(A \cup B) \cup C = A \cup (B \cup C) = A \cup B \cup C \tag{4}$$

$$A \cup B = B \cup A \tag{5}$$

$$(A \cap B) \cap C = A \cap (B \cap C) = A \cap B \cap C \tag{6}$$

$$A \cap B = B \cap A \tag{7}$$

$$A \cap (B \cup C) = (A \cap B) \cup (A \cap C) \tag{8}$$

Clearly, we also have

$$A \cup A = A, \quad A \cup \emptyset = A, \quad A \cap \emptyset = \emptyset$$

$$A \cup \Omega = \Omega, \quad A \cap \Omega = A, \quad A \cap A' = \emptyset$$

Moreover, the following useful relations hold

$$A \cup (B \cap C) = (A \cup B) \cap (A \cup C) \tag{9}$$

$$(A \cap B)' = A' \cup B' \tag{10}$$

$$A \cup B = A \cup (A' \cap B) \tag{11}$$

Relation (10), together with its dual $(A \cup B)' = A' \cap B'$, is referred to as De Morgan's law.
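These identities are easy to check computationally on small finite sets. The following sketch verifies the distributive laws (8)-(9), De Morgan's law (10), and the difference relation $A - B = A \cap B'$ using Python's built-in set type; the universe $\Omega$ and the sets $A$, $B$, $C$ are arbitrary choices for illustration, not taken from the text:

```python
# Verify the set identities on an arbitrary finite universe.
Omega = set(range(10))          # the space (universe) Ω
A, B, C = {1, 2, 3}, {3, 4, 5}, {5, 6, 7}

def complement(S):
    """Complement S' of S relative to the space Ω."""
    return Omega - S

# Distributive laws, eqs. (8) and (9)
assert A & (B | C) == (A & B) | (A & C)
assert A | (B & C) == (A | B) & (A | C)

# De Morgan's law, eq. (10), and its dual
assert complement(A & B) == complement(A) | complement(B)
assert complement(A | B) == complement(A) & complement(B)

# Eq. (11) and the difference relation A - B = A ∩ B'
assert A | B == A | (complement(A) & B)
assert A - B == A & complement(B)

print("all set identities hold")
```

Because the identities hold for arbitrary subsets of any universe, the particular choice of $\Omega$, $A$, $B$, $C$ above is immaterial.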

Finally, we define the difference between two sets. The difference $A - B$ is the set of all elements belonging to $A$ but not to $B$. From the definition, we have the simple relations

$$A - \emptyset = A, \quad \Omega - A = A', \quad A - B = A \cap B'$$

1.1.2. Borel Fields

Definition 1.1: Given a space $\Omega$, a Borel field, or $\sigma$-field, $\mathcal{U}$ of subsets of $\Omega$ is a class of, in general noncountable, subsets $A_j$, $j = 1, 2, \dots$, having the following properties:

1. $\Omega \in \mathcal{U}$
2. If $A \in \mathcal{U}$, then $A' \in \mathcal{U}$
3. If $A_j \in \mathcal{U}$, $j = 1, 2, \dots$, then $\bigcup_{j=1}^{\infty} A_j \in \mathcal{U}$.

The first two properties imply that

$$\emptyset = \Omega' \in \mathcal{U} \tag{12}$$

and, with the aid of De Morgan's law, the second and the third properties lead to

$$\bigcap_{j=1}^{\infty} A_j = \left( \bigcup_{j=1}^{\infty} A_j' \right)' \in \mathcal{U} \tag{13}$$

A Borel field is thus a class of sets, including the empty set $\emptyset$ and the space $\Omega$, which is closed under all countable unions and intersections of its sets. It is clear, of course, that the class of all subsets of $\Omega$ is a Borel field. However, in the development of the basic concepts of probability, this particular Borel field is too large and impractical. We in general consider the smallest class of subsets of $\Omega$ which is a Borel field and contains all sets and elements under consideration.
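For a finite space the notion of "the smallest class which is a Borel field and contains the sets under consideration" can be made concrete: one closes the given collection under complements and unions until nothing new appears. The sketch below is our own illustration (the helper name `generate_sigma_field` is not from the text, and brute-force closure is practical only for small $\Omega$):

```python
def generate_sigma_field(omega, generators):
    """Smallest σ-field of subsets of the finite space `omega` containing
    every set in `generators` (cf. Definition 1.1). Sets are represented
    as frozensets so they can be collected inside a Python set."""
    omega = frozenset(omega)
    sigma = {frozenset(), omega} | {frozenset(g) for g in generators}
    changed = True
    while changed:                       # iterate to closure
        changed = False
        for a in list(sigma):
            if omega - a not in sigma:   # property 2: closed under complement
                sigma.add(omega - a)
                changed = True
            for b in list(sigma):
                if a | b not in sigma:   # property 3: closed under union
                    sigma.add(a | b)
                    changed = True
    return sigma

# The σ-field generated by a single set A is {∅, A, A', Ω}
field = generate_sigma_field({1, 2, 3, 4}, [{1, 2}])
assert field == {frozenset(), frozenset({1, 2}),
                 frozenset({3, 4}), frozenset({1, 2, 3, 4})}
```

Closure under intersection then follows automatically via De Morgan's law, exactly as in (13).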

1.2. Axioms of Probability – Definition of Probability Measure

The basic concepts of probability theory are centered around the idea of a random experiment $E$, whose outcomes are events. Given a random experiment $E$, the collection of all possible events is called a sample space, whose elements are the simple events. Observable events enter the sample space as its subsets. The definitions of events and sample space provide a framework within which the analysis of events can be performed, and all definitions and relations between events in probability theory can be described by sets and set operations in the theory of sets. Consider the space $\Omega$ of elements $\omega$, with subsets $A, B, \dots$

In our subsequent discussion of probability theory, we shall assume that both $\Omega$ and $\emptyset$ are observable events. We also require that the collection of all observable events associated with a random experiment $E$ constitutes a Borel field $\mathcal{U}$, which implies that all events formed through countable unions and intersections of observable events are observable, and they are contained in $\mathcal{U}$.

We now introduce the notion of a probability function. Given a random experiment $E$, a finite number $\mathcal{P}(A)$ is assigned to every event $A$ in the Borel field $\mathcal{U}$ of all observable events. The number $\mathcal{P}(A)$ is a set function and is assumed to be defined for all sets in $\mathcal{U}$. We thus have the following

Definition 2.1: Let $\Omega$ be the space of elements $\omega$ and $\mathcal{U}(\Omega)$ the Borel field generated by the open subsets of $\Omega$. The set function $\mathcal{P} : \mathcal{U}(\Omega) \to [0,1]$ will be a probability function or probability measure iff the following axioms hold:

1. $\mathcal{P}(A) \geq 0$ for every $A \in \mathcal{U}(\Omega)$
2. $\mathcal{P}(\Omega) = 1$
3. For any countable collection of mutually disjoint sets $A_1, A_2, \dots$ in $\mathcal{U}(\Omega)$:

$$\mathcal{P}\left( \bigcup_j A_j \right) = \sum_j \mathcal{P}(A_j)$$

Hence, Axioms 1-3 define a countably additive and nonnegative set function $\mathcal{P}(A)$, $A \in \mathcal{U}(\Omega)$. The following properties associated with the probability function can be easily deduced from the axioms stated above:

$$\mathcal{P}(\emptyset) = 0 \tag{1}$$

If $A \subset B$, then

$$\mathcal{P}(A) \leq \mathcal{P}(B) \tag{2}$$

We also see that, in general,

$$\mathcal{P}(A \cup B) = \mathcal{P}(A) + \mathcal{P}(B) - \mathcal{P}(A \cap B) \tag{3}$$

A mathematical description of a random experiment is now complete. As we have seen above, it consists of three fundamental constituents: a sample space $\Omega$, a Borel field $\mathcal{U}(\Omega)$ of observable events, and the probability function $\mathcal{P} : \mathcal{U}(\Omega) \to [0,1]$. These three quantities constitute a probability space associated with a random experiment, and it is denoted by $(\Omega, \mathcal{U}, \mathcal{P})$.
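On a finite sample space a probability measure is fixed by the probabilities of the simple events, and the axioms together with properties (1)-(3) can be verified directly. A minimal sketch, using a fair six-sided die as our own illustrative example (on a finite $\Omega$ we may take $\mathcal{U}$ to be the class of all subsets):

```python
from fractions import Fraction

# A probability space (Ω, U, P) for a fair six-sided die.
Omega = frozenset(range(1, 7))
weights = {w: Fraction(1, 6) for w in Omega}   # measure of each simple event

def P(A):
    """Probability measure: additive over the simple events of A."""
    return sum(weights[w] for w in A)

A = {2, 4, 6}   # "even outcome"
B = {4, 5, 6}   # "outcome at least 4"

assert all(P({w}) >= 0 for w in Omega)    # Axiom 1
assert P(Omega) == 1                      # Axiom 2
assert P(A) == P({2}) + P({4}) + P({6})   # Axiom 3 on disjoint sets
assert P(set()) == 0                      # property (1)
assert P(A & B) <= P(A)                   # property (2), since A ∩ B ⊂ A
# property (3): P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)
```

Exact rational arithmetic (`Fraction`) is used so the equalities hold without floating-point tolerance.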

1.3. Random Variables

Consider a random experiment $E$ whose outcomes $\beta$ are elements of $\Omega$ in the underlying probability space $(\Omega, \mathcal{U}, \mathcal{P})$. These outcomes are in general represented by real numbers, functions, or objects of various kinds. In order to construct a model for a random variable, we assume that for all experiments it is possible to assign a real number, function, or other object $X(\beta)$ to each outcome $\beta$, following a certain set of rules. The "number" $X(\beta)$ is really a point function defined over the domain of the basic probability space. We have the following

Definition 3.1: Let $(\Omega, \mathcal{U}, \mathcal{P})$ be a probability space, $X : \Omega \to \mathcal{X}$ a point function, and $\mathcal{U}_{\mathcal{X}}$ a Borel field over $\mathcal{X}$. The mapping $X$ is a random variable iff it is $(\mathcal{U}, \mathcal{U}_{\mathcal{X}})$-measurable.

In the above definition the probability space $(\Omega, \mathcal{U}, \mathcal{P})$ models a stochastic experiment, and the set $\mathcal{X}$ models the quantities on which we focus our attention. The mapping $X : \Omega \to \mathcal{X}$ models the procedure of obtaining the quantity $X(\beta)$; this value is a realization of the random variable.

Definition 3.2: Let $(\Omega, \mathcal{U}, \mathcal{P})$ be a probability space, $(\mathcal{X}, \mathcal{U}_{\mathcal{X}})$ a measurable space and $X : \Omega \to \mathcal{X}$ a random variable. On the Borel field $\mathcal{U}_{\mathcal{X}}$ of the measurable space $(\mathcal{X}, \mathcal{U}_{\mathcal{X}})$ we define the induced probability measure

$$\mathcal{P}_X(B) = \mathcal{P}\left( X^{-1}(B) \right), \quad \text{for every } B \in \mathcal{U}_{\mathcal{X}} \tag{1}$$

The produced probability space $(\mathcal{X}, \mathcal{U}_{\mathcal{X}}, \mathcal{P}_X)$ will be called the $X$-induced probability space.

1.3.1. Distribution Functions and Density Functions

Given a random experiment with its associated r.v. $X(\beta)$ and given a real number $x$, let us consider the probability of the event $\{\beta : X(\beta) \leq x\}$, or simply $\mathcal{P}_X(X(\beta) \leq x)$. This probability clearly depends upon $x$. The function

$$F_X(x) = \mathcal{P}_X(X(\beta) \leq x) \tag{2}$$

is defined as the distribution function of $X(\beta)$. In the above, the subscript $X$ identifies the r.v. The distribution function always exists. By definition, it is a nonnegative, continuous to the right, and nondecreasing function of the real variable $x$. Moreover, we have

$$F_X(-\infty) = 0 \quad \text{and} \quad F_X(\infty) = 1 \tag{3}$$
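These properties are easy to observe for a concrete r.v. The exponential distribution function $F_X(x) = 1 - e^{-x}$ for $x \geq 0$ used below is our own choice of example, not one from the text:

```python
import math

def F_X(x):
    """Distribution function of an exponential r.v. with unit rate:
    F_X(x) = P(X ≤ x) = 1 - exp(-x) for x ≥ 0, and 0 otherwise."""
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

xs = [-2.0, -0.5, 0.0, 0.5, 1.0, 3.0, 10.0]

# Nondecreasing in x (monotonicity)
assert all(F_X(a) <= F_X(b) for a, b in zip(xs, xs[1:]))

# Nonnegative, bounded by 1
assert all(0.0 <= F_X(x) <= 1.0 for x in xs)

# Limits (3): F_X(-∞) = 0 and F_X(∞) = 1
assert F_X(-1e9) == 0.0
assert abs(F_X(1e3) - 1.0) < 1e-12
```

Right-continuity also holds here trivially, since this $F_X$ is continuous everywhere; a staircase distribution function (see the discrete case below, eq. (8)) is where right-continuity becomes the operative condition.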

A r.v. $X(\beta)$ is called a continuous r.v. if its associated distribution function is continuous and differentiable almost everywhere. It is a discrete r.v. when the distribution function assumes the form of a staircase with a finite or countably infinite number of jumps. For a continuous r.v. $X(\beta)$, the derivative

$$f_X(x) = \frac{dF_X(x)}{dx} \tag{4}$$

exists and is called the density function of the r.v. $X$. It has the properties

$$f_X(x) \geq 0 \tag{5}$$

$$\int_a^b f_X(x)\, dx = F_X(b) - F_X(a) \tag{6}$$

$$\int_{-\infty}^{\infty} f_X(x)\, dx = 1 \tag{7}$$

On the other hand, the density function of a discrete r.v. or of a r.v. of the mixed type does not exist in the ordinary sense. However, it can be constructed with the aid of the Dirac delta function. Consider the case where a r.v. $X$ takes on only the discrete values $x_1, x_2, \dots, x_n$. A consistent definition of its density function is

$$f_X(x) = \sum_{j=1}^{n} p_j\, \delta(x - x_j) \tag{8}$$

where

$$p_j = \mathcal{P}_X(X(\beta) = x_j) \tag{9}$$

This definition of $f_X(x)$ is consistent in the sense that it has all the properties indicated by equations (5)-(7), and the distribution function $F_X(x)$ is recoverable from equation (4) by integration.

The definition of the distribution function and the density function can be readily extended to the case of many random variables. Thus, the joint distribution function of a sequence of n r.v.'s, {X(β)}_n, is defined by

F_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) = P(X_1 ≤ x_1 ∩ X_2 ≤ x_2 ∩ … ∩ X_n ≤ x_n)    (10)

The corresponding joint density function is

f_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) = ∂^n F_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) / (∂x_1 ∂x_2 … ∂x_n)    (11)

if the indicated partial derivative exists. The finite sequence {X(β)}_n may be regarded as the components of an n-dimensional random vector X(β). The distribution function and the density function of X(β) are identical with (10) and (11), but they may be written in the more compact forms F_X(x) and f_X(x), where x is a vector with components x_1, x_2, …, x_n. In the sequel, the existence of the density functions shall always be assumed.

1.3.2. Moments

Some of the most useful information concerning a random variable is revealed by its moments, particularly those of the first and second order. The nth moment of a r.v. X(β) is defined by

E{X^n(β)} = Σ_i x_i^n p_i,  for a discrete r.v.
          = ∫_{−∞}^{+∞} x^n f_X(x) dx,  for a continuous r.v.    (12)

if ∫_{−∞}^{+∞} |x|^n f_X(x) dx is finite. The first moment E{X(β)} gives the statistical average of the r.v. X(β). We will usually denote it by m_X.

The central moments of a r.v. X(β) are the moments of X(β) with respect to its mean. Hence, the nth central moment of X(β) is defined as

E{(X(β) − m_X)^n} = Σ_i (x_i − m_X)^n p_i,  for a discrete r.v.
                  = ∫_{−∞}^{+∞} (x − m_X)^n f_X(x) dx,  for a continuous r.v.    (13)

The moments of two or more random variables are defined in a similar fashion. The joint moments of the r.v.'s X(β) and Y(β) are defined by

E{X^n(β) · Y^m(β)} = Σ_i Σ_j x_i^n y_j^m p_ij,  for discrete r.v.'s
                   = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x^n y^m f_XY(x, y) dx dy,  for continuous r.v.'s    (14)

The central joint moments of the r.v.'s X(β) and Y(β) are defined by

E{(X(β) − m_X)^n (Y(β) − m_Y)^m} = Σ_i Σ_j (x_i − m_X)^n (y_j − m_Y)^m p_ij,  for discrete r.v.'s
                                 = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (x − m_X)^n (y − m_Y)^m f_XY(x, y) dx dy,  for continuous r.v.'s    (15)

when they exist.

For n = m = 1 we have the correlation

R_XY = E{X(β) · Y(β)}    (16)

and the covariance

C_XY = E{(X(β) − m_X) · (Y(β) − m_Y)}    (17)

Obviously, we have

C_XY = R_XY − m_X m_Y    (18)

The r.v.'s X(β) and Y(β) are said to be uncorrelated if

C_XY = 0    (19)

which implies

E{X(β) · Y(β)} = E{X(β)} · E{Y(β)}    (20)

In the particular case when

E{X(β) · Y(β)} = 0    (21)

the r.v.'s are called orthogonal.

It is clearly seen from Eq. (18) that two independent r.v.'s with finite second moments are uncorrelated. We point out that the converse of this statement is not necessarily true. In closing this subsection, we state two inequalities involving the moments of random variables which will be used extensively in the later chapters.
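Before stating them, here is a quick numerical illustration of the preceding remark that uncorrelated does not imply independent. The pair X ~ N(0, 1), Y = X² is an assumed example, not drawn from the text:

```python
import numpy as np

# Uncorrelated but dependent: X ~ N(0,1), Y = X^2.
# C_XY = E{XY} - E{X}E{Y} = E{X^3} = 0, yet Y is a deterministic function of X.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2

c_xy = np.mean(x * y) - np.mean(x) * np.mean(y)   # sample version of eq. (18)
print(abs(c_xy) < 0.05)    # uncorrelated, up to sampling error

# Dependence is obvious: conditioning on |X| changes the mean of Y drastically
print(np.mean(y[np.abs(x) > 2]), np.mean(y))
```

The first print confirms (19) within Monte Carlo error, while the conditional mean in the second print is far from E{Y}, exhibiting the dependence.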

Tchebycheff Inequality.

P(|X(β)| ≥ k) ≤ E{|X(β)|^n} / k^n    (22)

Hölder Inequality.

E{|X(β) · Y(β)|} ≤ (E{|X(β)|^n})^{1/n} · (E{|Y(β)|^m})^{1/m}    (23)

where n, m > 1, with 1/n + 1/m = 1.
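A numerical sanity check of (22) and (23) can be sketched as follows; the exponential samples, k = 3 and n = m = 2 are assumed illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 500_000)   # X ~ Exp(1), so E{X^2} = 2
y = rng.exponential(2.0, 500_000)

k, n, m = 3.0, 2, 2

# Tchebycheff, eq. (22): P(X >= k) <= E{X^n} / k^n
lhs = np.mean(x >= k)               # estimate of P(X >= 3); exact value is e^{-3}
rhs = np.mean(x**n) / k**n          # approximately 2/9
print(lhs <= rhs)

# Hölder with n = m = 2 (the Cauchy-Schwarz case), eq. (23)
lhs2 = np.mean(np.abs(x * y))
rhs2 = np.mean(x**n) ** (1 / n) * np.mean(y**m) ** (1 / m)
print(lhs2 <= rhs2)
```

Both inequalities hold with a wide margin here; note that the Cauchy-Schwarz case even holds exactly for any finite sample, since the sample averages define a probability measure themselves.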

The proof of the above inequalities can be found in KOLMOGOROV, A.N. & FOMIN, S.V. (Introductory Real Analysis).

1.3.3. Characteristic Functions

Definition 3.3 : Let X(β) be a r.v. with density function f_X(x). The characteristic function of X(β), φ_X(u), is defined by

φ_X(u) = E{e^{iuX(β)}} = Σ_j e^{iux_j} p_j,  for a discrete r.v.
                       = ∫_{−∞}^{+∞} e^{iux} f_X(x) dx,  for a continuous r.v.    (24)

It is seen that the characteristic function φ_X(u) and the density function f_X(x) form a Fourier transform pair. Since a density function is absolutely integrable, its associated characteristic function always exists. Furthermore, it follows from the theory of Fourier transforms that the density function is uniquely determined in terms of the characteristic function by

f_X(x) = (1/2π) ∫_{−∞}^{+∞} e^{−iux} φ_X(u) du    (25)

Eq. (25) points out one of the many uses of a characteristic function. In many physical problems, it is often more convenient to determine the density function of a random variable by first determining its characteristic function and then performing the Fourier transform indicated by equation (25). Another important property of the characteristic function is its simple relation with the moments. The MacLaurin series of φ_X(u) gives

φ_X(u) = φ_X(0) + φ′_X(0) u + φ″_X(0) u²/2 + …    (26)

Now, we see that

φ_X(0) = ∫_{−∞}^{+∞} f_X(x) dx = 1

φ′_X(0) = i ∫_{−∞}^{+∞} x f_X(x) dx = i E{X(β)}

φ_X^{(n)}(0) = i^n ∫_{−∞}^{+∞} x^n f_X(x) dx = i^n E{X^n(β)}    (27)

The joint characteristic function of many r.v.'s {X_i(β)} is defined by

φ_{X_1 X_2 … X_n}(u_1, u_2, …, u_n) = E{exp(i Σ_{j=1}^{n} u_j X_j(β))} =
= ∫_{−∞}^{+∞} … ∫_{−∞}^{+∞} exp(i Σ_{j=1}^{n} u_j x_j) f_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) dx_1 … dx_n    (28)

Analogous to the one-random-variable case, the joint density function f_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) is uniquely determined in terms of φ_{X_1 X_2 … X_n}(u_1, u_2, …, u_n) by the n-dimensional Fourier transform

f_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) = (1/(2π)^n) ∫_{−∞}^{+∞} … ∫_{−∞}^{+∞} exp(−i Σ_{j=1}^{n} u_j x_j) φ_{X_1 X_2 … X_n}(u_1, u_2, …, u_n) du_1 du_2 … du_n    (29)

The moments, if they exist, are related to φ_{X_1 X_2 … X_n}(u_1, u_2, …, u_n) as

E{X_1^{m_1}(β) X_2^{m_2}(β) … X_n^{m_n}(β)} = (1 / i^{m_1 + m_2 + … + m_n}) · ∂^{m_1 + … + m_n} φ_{X_1 X_2 … X_n}(u_1, u_2, …, u_n) / (∂u_1^{m_1} … ∂u_n^{m_n}) |_{u_1 = u_2 = … = u_n = 0}    (30)

Here are some of the basic properties of the characteristic function:

1) φ(0, 0, …, 0) = 1

2) φ(−u) = φ(u)*, where * denotes complex conjugation

3) φ(u) is uniformly continuous for u ∈ ℝ^n

4) φ(u) is a positive-definite function

Concerning the characterization of characteristic functions, there is the following

Theorem 3.4 [Bochner] : The function g : ℝ^n → ℂ satisfies the above conditions if and only if there exists an n-dimensional random vector X(β) whose characteristic function is g(u).

Proof : The "if" assertion is immediate since, if g is a characteristic function with distribution function G, then, letting u and υ range over an arbitrary but finite set in ℝ,

Σ_{u,υ} g(u − υ) h(u) h(υ)* = ∫ Σ_{u,υ} e^{i(u−υ)x} h(u) h(υ)* dG(x) = ∫ |Σ_u e^{iux} h(u)|² dG(x) ≥ 0

Conversely, let g on ℝ be nonnegative-definite and continuous. It coincides on ℝ with a characteristic function if it does so on the set S_r (dense in ℝ) of all rationals of the form k/2^n, k = 0, ±1, ±2, …, n = 1, 2, …. For every integer n, let S_n be the corresponding subset of all rationals of the form k/2^n, so that S_n ↑ S_r. Since g is nonnegative-definite on ℝ, it is nonnegative-definite on every S_n. Therefore, by the Herglotz lemma (see LOÈVE, Probability Theory) there exist characteristic functions f_n such that g(k/2^n) = f_n(k/2^n), whatever be k and n. Since S_n ↑ S_r, it follows that f_n → g on S_r. Let 0 ≤ θ_n ≤ 1, so that, by the Herglotz lemma,

1 − Re f_n(θ_n/2^n) = ∫_{−2^n π}^{2^n π} (1 − cos(θ_n x/2^n)) dF_n(x) ≤ ∫_{−2^n π}^{2^n π} (1 − cos(x/2^n)) dF_n(x) = 1 − Re f_n(1/2^n)

Therefore, by the elementary inequality |a + b|² ≤ 2|a|² + 2|b|² and the increments inequality, for fixed h_n = (k + θ_n)/2^n,

|1 − f_n(h_n)|² ≤ 2|1 − f_n(k/2^n)|² + 4(1 − Re f_n(θ_n/2^n)) ≤ 2|1 − g(k/2^n)|² + 4(1 − Re g(1/2^n))

Since g is continuous at the origin, it follows that the sequence f_n is equicontinuous. Hence, by Ascoli's theorem, it contains a subsequence converging to a continuous function f, so that g = f on S_r and hence on ℝ. Since, by the continuity theorem, f is a characteristic function, the proof is complete.

The "only if" assertion can be proved directly, and this direct proof extends to a more general case: for every T > 0 and x ∈ ℝ,

p_T(x) = (1/T) ∫_0^T ∫_0^T g(u − υ) e^{−i(u−υ)x} du dυ ≥ 0

since, g on ℝ being nonnegative-definite and continuous, the integral can be written as a limit of nonnegative Riemann sums. Let u = t + υ, integrate first with respect to υ, and set

g_T(t) = (1 − |t|/T) g(t) or 0, according as |t| ≤ T or |t| ≥ T.

The above relation becomes

p_T(x) = ∫_{−T}^{T} g_T(t) e^{−itx} dt ≥ 0

Now multiply both sides by (1/2π)(1 − |x|/X) e^{iux} and integrate with respect to x on (−X, X). The relation becomes

(1/2π) ∫_{−X}^{X} (1 − |x|/X) p_T(x) e^{iux} dx = ∫_{−T}^{T} g_T(t) · sin²(X(t − u)/2) / (2π X ((t − u)/2)²) dt

The l.h.s. is a characteristic function (since its integrand is a product of e^{iux} by a nonnegative function) and the r.h.s. converges to g_T(u) as X → ∞. Therefore g_T is the limit of a sequence of characteristic functions. Since it is continuous at the origin, the continuity theorem applies and g_T is a characteristic function. Since g_T → g as T → ∞, the same theorem applies, and the assertion is proved.

An extensive discussion of characteristic functions and their properties can be found in ATHANASSOULIS, G.A. (Stochastic Modeling and Forecasting of Ship Systems) and LOÈVE (Probability Theory, pp. 198-213).

1.3.4. Gaussian Random Variables and the Central Limit Theorem

The most important distribution of random variables which we encounter in physical problems is the Gaussian or normal distribution.


Definition 3.5 : A r.v. X(β) is Gaussian or normal if its density function f_X(x) has the form

f_X(x) = (1/(σ√(2π))) exp{−(x − m)²/(2σ²)}    (31)

Its distribution function is the integral

F_X(x) = ∫_{−∞}^{x} (1/(σ√(2π))) exp{−(t − m)²/(2σ²)} dt    (32)

It is seen that a Gaussian r.v. is completely characterized by the two parameters m and σ; they are, as can be easily shown, the mean and the standard deviation of X(β), respectively. Usually we denote it as N[m, σ].

Alternatively, a Gaussian r.v. can be defined in terms of its characteristic function, which is of the form

φ_X(u) = exp{ium − σ²u²/2}    (33)

As indicated by equation (30), the moments of X(β) can be easily derived from equation (33) by successively differentiating φ_X(u) with respect to u and setting u = 0. The results, expressed in terms of its central moments, are

E{(X(β) − m_X)^n} = 0,  for n odd
                  = 1 · 3 · … · (n − 1) · σ^n,  for n even    (34)

In addition to having these simple properties, another reason for the popularity of the Gaussian r.v. in physical applications stems from the central limit theorem stated below. Rather than giving the theorem in general terms, it serves our purpose quite well to state a more restricted version due to Lindeberg.

Theorem 3.6 [Central Limit Theorem] : Let {X_n(β)} be a sequence of mutually independent and identically distributed r.v.'s with means m and variances σ². Let

X(β) = Σ_{j=1}^{n} X_j(β)    (35)

and let the normalized r.v. Y(β) be defined as

Y(β) = (X(β) − nm) / (σ n^{1/2})    (36)

Then the distribution function of Y(β), F_Y(y), converges to the zero-mean, unit-variance Gaussian distribution as n → ∞ for every fixed y.

Proof : We first remark that the r.v. Y(β) is simply the "normed" r.v. X(β), with mean zero and variance one. In terms of the characteristic function φ_X(u) of the r.v.'s X_j(β), the characteristic function φ_Y(u) of Y(β) has the form

φ_Y(u) = E{e^{iuY(β)}} = exp(−ium n^{1/2}/σ) · φ_X^n(u/(σ n^{1/2}))    (37)

In view of the expansion (26), we can write

φ_Y(u) = exp(−ium n^{1/2}/σ) · [1 + ium/(σ n^{1/2}) − ((σ² + m²)/σ²)(u²/2n) + o(1/n)]^n =
= [1 − u²/2n + o(u²/n)]^n → exp(−u²/2)    (38)

as n → ∞. In the last step we have used the elementary identity of calculus,

lim_{n→∞} (1 + c/n)^n = e^c

for any real c. Equation (38) shows that φ_Y(u) approaches the characteristic function of a zero-mean, unit-variance Gaussian distribution in the limit. The proof is thus complete.
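A simulation sketch of Theorem 3.6, assuming uniform summands (which have m = 1/2, σ² = 1/12): the standardized sums of (35)-(36) behave like N[0, 1] already for moderate n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 200, 50_000
m, sigma = 0.5, (1 / 12) ** 0.5      # mean and std of U(0,1)

# eq. (35)-(36): sum n i.i.d. uniforms, then center and scale
s = rng.uniform(0.0, 1.0, (reps, n)).sum(axis=1)
y = (s - n * m) / (sigma * n**0.5)

print(y.mean())            # ~ 0
print(y.var())             # ~ 1
print(np.mean(y <= 0.0))   # ~ 0.5    = Phi(0)
print(np.mean(y <= 1.0))   # ~ 0.8413 = Phi(1)
```

The empirical distribution function of Y evaluated at 0 and 1 agrees with the standard Gaussian values, as the theorem asserts.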

The central limit theorem thus indicates that a physical phenomenon often has a Gaussian distribution when it represents the sum of a large number of small independent random effects. This indeed is a model, or at least an idealization, of many physical phenomena we encounter. Consider a sequence {X(β)}_n of n r.v.'s. They are said to be jointly Gaussian if the associated joint density function has the form

f_X(x) = f_{X_1 X_2 … X_n}(x_1, x_2, …, x_n) = (2π)^{−n/2} |Λ|^{−1/2} exp{−(1/2)(x − m)^T Λ^{−1} (x − m)}    (39)

where

m = [m_1 m_2 … m_n]^T = [E{X_1(β)} E{X_2(β)} … E{X_n(β)}]^T    (40)

and Λ = [μ_ij] is the n × n covariance matrix of X(β), with

μ_ij = E{(X_i(β) − m_i)(X_j(β) − m_j)}    (41)

Again, we see that a joint Gaussian distribution is completely characterized by the first- and second-order joint moments.

Parallel to our discussion of a single Gaussian r.v., an alternative definition of a sequence of jointly Gaussian distributed random variables is that it has the joint characteristic function

φ_X(u) = exp{i m^T u − (1/2) u^T Λ u}    (42)

This definition is sometimes preferable since it avoids the difficulties arising when the covariance matrix Λ becomes singular. The joint moments of X(β) can be obtained by differentiating the joint characteristic function φ_X(u) and setting u = 0.


It is clear that, since the joint moments of first and second order completely specify the joint Gaussian distribution, these moments also determine the joint moments of orders higher than two. We can show that, if the r.v.'s {X(β)}_n have zero means, all odd-order moments of these r.v.'s vanish and, for n even,

E{X_{m_1}(β) X_{m_2}(β) … X_{m_n}(β)} = Σ E{X_{m_1} X_{m_2}} E{X_{m_3} X_{m_4}} … E{X_{m_{n−1}} X_{m_n}}    (43)

The sum above is taken over all possible combinations of n/2 pairs of the n r.v.'s. The number of terms in the summation is 1 · 3 · 5 · … · (n − 3) · (n − 1).
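For n = 4, formula (43) reads E{X_1X_2X_3X_4} = E{X_1X_2}E{X_3X_4} + E{X_1X_3}E{X_2X_4} + E{X_1X_4}E{X_2X_3}, with 1·3 = 3 terms. A Monte Carlo sketch, with the assumed special case X_1 = X_3 = X, X_2 = X_4 = Y for a zero-mean pair with correlation ρ, where the pairings give E{X²Y²} = 1 + 2ρ²:

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]        # zero-mean, unit-variance Gaussian pair

x, y = rng.multivariate_normal([0.0, 0.0], cov, 1_000_000).T

# eq. (43) with (X1, X2, X3, X4) = (X, Y, X, Y):
# E{X^2 Y^2} = E{XY}E{XY} + E{XX}E{YY} + E{XY}E{XY} = 1 + 2 rho^2
lhs = np.mean(x**2 * y**2)
rhs = 1 + 2 * rho**2
print(lhs, rhs)    # both ~ 1.72
```

The sample fourth moment agrees with the pairing formula to within Monte Carlo error.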

1.3.5. Convergence of a Sequence of Random Variables

In this section we recall the different types of convergence of a sequence of random variables. These results will be used in the following chapters.

1.3.5.a. Mean-Square Convergence

Let X_n, n ∈ ℕ, be a sequence of second-order random variables. The sequence X_n, n ∈ ℕ, converges in mean-square to a second-order random variable X as n → ∞ iff

lim_{n→∞} E{|X_n − X|²} = 0    (44)

Proposition 3.7 : The sequence X_n, n ∈ ℕ, of second-order random variables converges in mean-square to a second-order random variable X such that E{X²} = x, x > 0, if and only if

lim_{n,n′→∞} E{X_n · X_{n′}} = x    (45)

Proof : Assume that eq. (45) is true. Then

E{(X_n − X_{n′})²} = E{X_n · X_n} + E{X_{n′} · X_{n′}} − 2 E{X_n · X_{n′}} → 0

as n, n′ → ∞. Therefore X_n, n ∈ ℕ, is a Cauchy sequence. But since the space of second-order random variables is complete (see LOÈVE, Probability Theory I), any Cauchy sequence of second-order random variables converges to a second-order random variable. Hence the sequence X_n, n ∈ ℕ, converges, and we have lim_{n,n′→∞} E{X_n · X_{n′}} = x.

Conversely, if the sequence X_n, n ∈ ℕ, converges to a second-order random variable, then we have eq. (45), because any convergent sequence is a Cauchy sequence and therefore

E{(X_n − X_{n′})²} → 0 as n, n′ → ∞.

1.3.5.b. Convergence in Probability

A sequence X_n, n ∈ ℕ, of random variables defined on a probability space (Ω, U, P) converges in probability to a random variable X as n → ∞ iff, for every ε > 0,

lim_{n→∞} P(|X_n(β) − X(β)| ≥ ε) = 0    (46)

Convergence in probability is also called stochastic convergence or convergence in measure. Note that {|X_n − X| ≥ ε} denotes the element of U given by {β ∈ Ω : |X_n(β) − X(β)| ≥ ε}.

1.3.5.c. Almost Sure Convergence

A sequence X_n, n ∈ ℕ, of random variables defined on a probability space (Ω, U, P) converges almost surely (a.s.) to a random variable X as n → ∞ iff

P(lim_{n→∞} X_n(β) ≠ X(β)) = 0    (47)

This means that the subset Ω_0 = {β ∈ Ω : lim_{n→∞} X_n(β) ≠ X(β)} is P-negligible: P(Ω_0) = 0. We can also write eq. (47) as

P(lim_{n→∞} X_n(β) = X(β)) = 1    (48)

Convergence P-a.s. is also called convergence almost everywhere, almost certain convergence, or convergence with probability one.

1.3.5.d. Convergence in Law or Distribution

A sequence X_n, n ∈ ℕ, of random variables with probability distributions F_{X_n} converges in law (or in probability distribution) to a random variable X with probability distribution F_X if the sequence of bounded positive measures F_{X_n} converges weakly to F_X, i.e. if for all real continuous functions f on ℝ such that f(x) → 0 as |x| → ∞ we have

lim_{n→∞} ∫ f(x) F_{X_n}(dx) = ∫ f(x) F_X(dx)    (49)

1.3.5.e. Relationships Between the Four Modes of Convergence

Almost Sure Convergence ⟹ Convergence in Probability ⟹ Convergence in Law
Mean-Square Convergence ⟹ Convergence in Probability
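Definition (44) can be sketched for the sample mean of i.i.d. variables, an assumed example for which E{(X̄_n − m)²} = σ²/n → 0, so the sample mean converges to m in mean-square (and hence, by the diagram above, in probability and in law):

```python
import numpy as np

rng = np.random.default_rng(11)
m, sigma, reps = 0.0, 1.0, 5000

def mse_of_sample_mean(n):
    # Empirical E{(Xbar_n - m)^2} over many replications; theory gives sigma^2 / n
    xbar = rng.normal(m, sigma, (reps, n)).mean(axis=1)
    return np.mean((xbar - m) ** 2)

mses = {n: mse_of_sample_mean(n) for n in (10, 100, 1000)}
for n, v in mses.items():
    print(n, v)    # ~ 0.1, 0.01, 0.001: the mean-square error decays like 1/n
```

The printed mean-square errors decay like 1/n, exhibiting convergence in the sense of (44).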


1.4. References

ASH, R.B., 1972, Measure, Integration and Functional Analysis. Academic Press.
ASH, R.B., 2000, Probability and Measure Theory. Academic Press.
ATHANASSOULIS, G.A., 2002, Stochastic Modeling and Forecasting of Ship Systems. Lecture Notes, NTUA.
FRISTEDT, B. & GRAY, L., 1997, A Modern Approach to Probability Theory. Birkhäuser.
KLIMOV, G., 1986, Probability Theory and Mathematical Statistics. MIR Publishers.
KOLMOGOROV, A.N. & FOMIN, S.V., 1975, Introductory Real Analysis. Dover Publications.
LOÈVE, M., 1977, Probability Theory I. Springer.
PROHOROV, YU.V. & ROZANOV, YU.A., 1969, Probability Theory. Springer.
SOBCZYK, K., 1991, Stochastic Differential Equations. Kluwer Academic Publishers.
SOIZE, C., 1994, The Fokker-Planck Equation and its Explicit Steady State Solutions. World Scientific.
SOONG, T.T., 1973, Random Differential Equations. Academic Press.
VULIKH, B.Z., 1976, A Brief Course in the Theory of Functions of a Real Variable. MIR Publishers.


Chapter 2

Stochastic Processes – Definition and Classification

Random variables or random vectors are adequate for describing the results of random experiments which assume scalar or vector values in a given trial. In many physical applications, however, the outcomes of a random experiment are represented by functions depending upon a parameter. This parameter can be either time or a spatial variable, depending on the application. Typical examples of the first case are ODEs with inherent randomness or random excitation, and these cases will mainly concern us in this thesis. In the present chapter we present the basic definitions and theorems for the description of the above mathematical objects, as well as the fundamental theorems that ensure their existence. Moreover, we make a basic classification of stochastic processes in relation to fundamental properties such as memory and ergodicity.

More specifically, in Section 2.2.1 stochastic processes are classified according to the manner in which their present state depends upon their past history. We thus have the purely random processes, without any memory. In order to obtain a mathematically satisfactory theory of these processes we introduce the concept of generalized stochastic processes. The next step in this kind of classification is the Markov processes, which can be described completely by second-order quantities. Moreover, we emphasize the case of independent-increment processes and martingale stochastic processes, which play an important role in the theory of stochastic dynamical systems.

In Section 2.2.2 the concept of stationarity is presented. We formulate the basic definitions as well as the results needed in the direction of stochastic dynamical systems. Stationary stochastic processes are very useful for the description of linear systems since, under suitable assumptions, we can formulate the whole problem in terms of spectral densities.

Section 2.2.3 deals with the important issue of ergodicity, a fundamental property for dynamical systems that relates statistical or ensemble averages of a stationary stochastic process to time averages of its individual sample functions. The interchangeability of ensemble and time averages has considerable appeal in practice, since we can compute statistical averages of a stochastic process from a single long observation of one sample function. Finally, in Section 2.2.4 we present the Gaussian stochastic process with its basic properties.


2.1. Basic Concepts of Stochastic Processes

The foregoing description of a stochastic process is based upon the axiomatic approach to probability theory, originally due to Kolmogorov. Let us call each realization of a given random experiment a sample function or a member function. This description suggests that a s.p. X(t; β), t ∈ I ⊆ ℝ, is a family of sample functions of the variable t, all defined on the same underlying probability space (Ω, U, P). Intuitively, this sample-theoretic approach to the concept of a stochastic process is physically plausible, since a stochastic process describing a physical phenomenon generally represents a family of individual realizations. To each realization there corresponds a definite deterministic function of t. However, a difficult point arises. A stochastic process defined this way is specified by the probability of realizations of various sample functions. This leads to the construction of probability measures on a functional space, and it generally requires advanced mathematics in measure theory and functional analysis. We will consider this approach extensively in Chapter 3. On the other hand, Definition 1.1 presents a simpler approach to the concept of a stochastic process which is sufficient for the subsequent development.

2.1.1. Definition of a Continuous-parameter Stochastic Process

At a fixed t, a s.p. X(t; β), t ∈ I ⊆ ℝ, is a random variable. Hence, another characterization of a stochastic process is to regard it as a family of random variables, say X(t_1; β), X(t_2; β), …, depending upon the parameter t ∈ I ⊆ ℝ. The totality of all these random variables defines the s.p. X(t; β). For discrete-parameter stochastic processes, this set of random variables is finite or countably infinite. For continuous-parameter processes, the number is uncountably infinite. Thus, we see that the mathematical description of a stochastic process is considerably more complicated than that of a random variable. It is, in fact, equivalent to the mathematical description of an infinite and generally uncountable number of random variables.

We recall from Section 1.3 of Chapter 1 that a finite sequence of random variables is completely specified by its joint distribution or joint density functions. It thus follows from the remarks above that a s.p. X(t; β), t ∈ I ⊆ ℝ, is defined by a family of joint distribution or density functions, taking into account that we are now dealing with infinitely many random variables.

Let us now make this definition more precise. The index set I is usually the real line or an interval of the real line, but the definition below can be extended to the case of an index set with n dimensions.

Definition 1.1 : Let a family of r.v.'s X(t_i; β), t_i ∈ I, be such that for every finite set t_1, t_2, …, t_n the corresponding set of r.v.'s X_1(β) = X(t_1; β), X_2(β) = X(t_2; β), …, X_n(β) = X(t_n; β) has a well-defined joint probability distribution function

F_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) = P(X(t_1; β) < x_1 ∩ … ∩ X(t_n; β) < x_n)    (1)

that satisfies the following conditions (Kolmogorov compatibility conditions):

a) the symmetry condition: if i_1, i_2, …, i_n is a permutation of the numbers 1, 2, …, n, then for arbitrary n ≥ 1

F_{t_{i_1}, t_{i_2}, …, t_{i_n}}(x_{i_1}, x_{i_2}, …, x_{i_n}) = F_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n)    (2)

b) the consistency condition: for m < n and arbitrary t_{m+1}, …, t_n ∈ I

F_{t_1, …, t_m, t_{m+1}, …, t_n}(x_1, x_2, …, x_m, ∞, …, ∞) = F_{t_1, t_2, …, t_m}(x_1, x_2, …, x_m)    (3)

Then this family of joint distribution functions defines a s.p. X(t; β), t ∈ I.

In the special case where F_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) is differentiable, we can alternatively use the corresponding probability density function, defined by

f_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) = ∂^n F_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) / (∂x_1 ∂x_2 … ∂x_n)    (4)

This family of probability density functions satisfies the following conditions:

a) the positiveness condition

f_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) ≥ 0    (5)

b) the normalization property

∫_{−∞}^{+∞} … ∫_{−∞}^{+∞} f_{t_1, t_2, …, t_n}(x_1, x_2, …, x_n) dx_1 dx_2 … dx_n = 1    (6)
Based upon the definition above, another convenient and often useful method of specifying a stochastic process is to characterize it by an analytic formula, as a function of t containing random variables as parameters. Let A_1(β), A_2(β), …, A_n(β) be a set of random variables. A s.p. X(t; β) characterized this way has the general form

X(t; β) = g(t; A_1, A_2, …, A_n)    (7)

where the functional form of g is given.

Example 1.2 [Pierson – Longuet-Higgins Model] : Representation (7) can be applied to the description of random, wind-generated water waves at a specific point. In this case we can determine experimentally the spectral distribution, i.e., the amplitude that corresponds to every frequency. Hence, the stochasticity is introduced through the phase of every harmonic component. More specifically, we use the following representation for the random field at a specific point:

X(t; β) = Σ_{i=1}^{N} A_i cos(ω_i t + φ_i(β))    (8)

where (A_i, ω_i), i = 1, …, N, are given and φ_i(β), i = 1, …, N, are independent random variables, uniformly distributed on [0, 2π]. We must note that representation (8) is consistent with the theory of linear water waves. Moreover, it can easily be proved that the above stochastic process is stationary and ergodic. Additionally, for a large number of harmonic components N, its probability distribution functions follow the Gaussian law.

As we mentioned at the beginning, a stochastic process can also be specified by the probability of realizations of various sample functions. For the following definitions we assume that 𝒳 is a measurable metric space.

Definition 1.3 : Let (Ω, U(Ω), P) be a probability space, X : Ω → 𝒳 a point function from the space of elementary events Ω to a metric space 𝒳, and U(𝒳) a Borel field over 𝒳. The mapping X is a stochastic process iff it is (U(Ω), U(𝒳))-measurable.
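Returning to Example 1.2, representation (8) is easy to simulate. The sketch below (with illustrative amplitudes A_i and frequencies ω_i, which are assumptions rather than data from the text) checks that the ensemble variance is time-independent and equals Σ A_i²/2, as stationarity requires:

```python
import numpy as np

rng = np.random.default_rng(5)
N, reps = 50, 20_000
omega = np.linspace(0.5, 3.0, N)                  # assumed frequencies
amp = 0.2 * np.exp(-0.5 * (omega - 1.5) ** 2)     # assumed spectral amplitudes A_i

def ensemble(t):
    # eq. (8): X(t;b) = sum_i A_i cos(w_i t + phi_i(b)), phi_i ~ U(0, 2pi) i.i.d.
    phi = rng.uniform(0.0, 2.0 * np.pi, (reps, N))
    return (amp * np.cos(omega * t + phi)).sum(axis=1)

var_theory = 0.5 * np.sum(amp**2)                 # E{X^2(t)} = sum A_i^2 / 2 for every t
v0, v7 = ensemble(0.0).var(), ensemble(7.0).var()
print(v0, v7, var_theory)                         # all approximately equal
```

The variances at two distinct times agree with each other and with the spectral sum, illustrating the (second-order) stationarity claimed in the example.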

Together with the definition of the stochastic process we have, as in the case of random variables, the induced probability measure.

Definition 1.4 : Let (Ω, U(Ω), P) be a probability space, (𝒳, U(𝒳)) a measurable space and X : Ω → 𝒳 a stochastic process. On the Borel field U(𝒳) of the measurable space (𝒳, U(𝒳)) we define the induced probability measure

P_𝒳(B) = P(X⁻¹(B)), for every B ∈ U(𝒳)    (9)

The produced probability space (𝒳, U(𝒳), P_𝒳) will be called the X-induced probability space.

To connect the above probability measure, defined on the infinite-dimensional space (𝒳, U(𝒳)), with the joint probability distribution functions, we have the following definition.

Definition 1.5 : Let (Ω, U(Ω), P) be a probability space, X : Ω → 𝒳 a stochastic process and (𝒳, U(𝒳), P_𝒳) the induced probability space. Let Π_N be a projection of 𝒳 with dim Π_N[𝒳] = N. For every Π_N we introduce the following probability measure, defined on (ℝ^N, U(ℝ^N)),

P_{Π_N}(B) = P_𝒳(Π_N⁻¹[B]) = P_𝒳({x ∈ 𝒳 : Π_N[x] ∈ B}), B ∈ U(ℝ^N)    (10)

Example 1.6 : For the case of 𝒳 = C(I), the space of real, continuous functions of a real variable t ∈ I, a very important measure arises from the projection Π_J[x(·)] = (x(t_1), x(t_2), …, x(t_N)) for every J = {t_1, t_2, …, t_N} ⊂ I. For this projection operator we have

P^X_{Π_J}(B) = P_{C(I)}({x(·) ∈ C(I) : Π_J[x(·)] ∈ B}), B ∈ U(ℝ^N)    (11)

Now, let us restrict the family of B ∈ U(ℝ^N) to those of the form B = (−∞, x_1) × (−∞, x_2) × … × (−∞, x_N). It is easy to see that we get the joint probability distributions

F_{t_1, t_2, …, t_N}(x_1, x_2, …, x_N) = P^X_{Π_J}(B) = P_{C(I)}({x(·) ∈ C(I) : Π_J[x(·)] ∈ B})    (12)

where B = (−∞, x_1) × (−∞, x_2) × … × (−∞, x_N).

The above approach will be discussed comprehensively in Chapter 3. To connect Definition 1.3 with the first Definition 1.1 we can think of $\mathcal{X}$ as the space $D(I)$ of real functions of a real variable. We then have the following

Theorem 1.7 [Kolmogorov Theorem] : For every given family of finite-dimensional distributions satisfying conditions a), b), there exists a probability space $(\Omega, \mathcal{U}, \mathcal{P})$ and a stochastic process $X(t;\beta)$, $t\in I$, defined on it that possesses the given distributions as finite-dimensional distributions.

Proof : For the proof we refer to KLIMOV, G., (Probability Theory and Mathematical Statistics, p. 109) and ASH, R.B., (Probability and Measure Theory, p. 191).

In what follows we assume the existence of a probability space $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$. The stochastic process $X:\Omega\rightarrow C(I)$ is defined on it and the induced probability space is $(C(I), \mathcal{U}(C(I)), \mathcal{P}_{C(I)})$.

2.1.2. Moments

As in the case of random variables, some of the most important properties of a s.p. are characterized by its moments, particularly those of the first and the second order. In the sequel, the existence of density functions shall be assumed. The moments of the probability density function $f_t(x)$ are defined as the expectation values $E^\beta\left\{\left(X(t;\beta)\right)^n\right\}$. We have

$$R^n_X(t) \equiv E^\beta\left\{\left(X(t;\beta)\right)^n\right\} = \int_{-\infty}^{+\infty} x^n f_t(x)\,dx \qquad (13)$$

As special cases we have the first order moment

$$R^1_X(t) = m_X(t) = E^\beta\left\{X(t;\beta)\right\} = \int_{-\infty}^{+\infty} x\, f_t(x)\,dx$$

which is the mean value of $X(t;\beta)$ at time $t$. Additionally the second order moment

$$R^2_X(t) = m^2_X(t) = E^\beta\left\{\left(X(t;\beta)\right)^2\right\} = \int_{-\infty}^{+\infty} x^2 f_t(x)\,dx \qquad (14)$$


is the mean square value of $X(t;\beta)$ at time $t$ (it coincides with the variance when the mean is zero). Analogously we can define $n$-point moments. We have

$$R^{n_1 n_2 \ldots n_m}_{XX\cdots X}(t_1, t_2, \ldots, t_m) \equiv E^\beta\left\{X^{n_1}(t_1;\beta)\, X^{n_2}(t_2;\beta)\cdots X^{n_m}(t_m;\beta)\right\} = \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} x_1^{n_1} x_2^{n_2}\cdots x_m^{n_m}\, f_{t_1,t_2,\ldots,t_m}(x_1, x_2, \ldots, x_m)\, dx_1\, dx_2 \cdots dx_m \qquad (15)$$

A special case is the autocorrelation function,

$$R_{XX}(t_1,t_2) \equiv E^\beta\left\{X(t_1;\beta)\cdot X(t_2;\beta)\right\} \qquad (16)$$

One very important moment of a stochastic process $X(t;\beta)$ is the autocovariance function defined by

$$C_{XX}(t_1,t_2) \equiv E^\beta\left\{\left[X(t_1;\beta)-m_X(t_1)\right]\cdot\left[X(t_2;\beta)-m_X(t_2)\right]\right\} \qquad (17)$$

The importance of the correlation functions or, equivalently, the covariance functions rests in part on the fact that their properties define the mean square properties of the stochastic processes associated with them. This will become clear in Chapter 4. In what follows, we note some of the important properties associated with these functions.

1. $R_{XX}(t_1,t_2) = R_{XX}(t_2,t_1)$ for every $t_1,t_2\in I$ \qquad (18)

2. $\left|R_{XX}(t_1,t_2)\right|^2 \le R_{XX}(t_1,t_1)\cdot R_{XX}(t_2,t_2)$ for every $t_1,t_2\in I$ \qquad (19)

3. $R_{XX}(t_1,t_2)$ is positive-definite on $I\times I$. \qquad (20)
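The moment functions (13)-(17) can be estimated by Monte Carlo averaging over realizations. The following is a minimal sketch (Python with NumPy, not part of the thesis) for the toy process $X(t;\beta)=A(\beta)\cos t$ with $A\sim N(0,1)$, whose exact moments are $m_X(t)=0$ and $R_{XX}(t_1,t_2)=\cos t_1\cos t_2$.

```python
import numpy as np

# Toy process X(t; b) = A(b) * cos(t), A ~ N(0,1): one random amplitude
# per realization. Exact moments: m_X(t) = 0, R_XX(t1,t2) = cos t1 cos t2.
rng = np.random.default_rng(0)
n_paths = 200_000
A = rng.standard_normal(n_paths)

t1, t2 = 0.3, 1.1
X_t1, X_t2 = A * np.cos(t1), A * np.cos(t2)

m_t1 = X_t1.mean()                        # first order moment, eq. (13), n = 1
R_t1t2 = np.mean(X_t1 * X_t2)             # autocorrelation, eq. (16)
C_t1t2 = np.mean((X_t1 - m_t1) * (X_t2 - X_t2.mean()))  # autocovariance, (17)

assert abs(m_t1) < 0.01
assert abs(R_t1t2 - np.cos(t1) * np.cos(t2)) < 0.01
# For a centred process the correlation and covariance functions coincide.
assert abs(R_t1t2 - C_t1t2) < 1e-3
```

The symmetry property (18) holds trivially for the empirical estimate, since the product $X(t_1;\beta)X(t_2;\beta)$ is symmetric in its arguments.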

2.1.3. Characteristic Functions

Following the definition of a characteristic function associated with a sequence of random variables, the characteristic functions of a s.p. $X(t;\beta)$, $t\in I$, are defined in an analogous way. There is, of course, a family of characteristic functions, since $X(t;\beta)$ is characterized by a family of density functions. Again, we assume the existence of the density functions in what follows. The characteristic function $\phi_{t_1}(u)$ is defined in the following manner for the one dimensional case

$$\phi_{t_1}(u) \equiv E^\beta\left\{e^{iuX(t_1;\beta)}\right\} = \int_{-\infty}^{+\infty} e^{iux} f_{t_1}(x)\,dx \qquad (21)$$

That is, $\phi_{t_1}(u)$ is the Fourier transform of $f_{t_1}(x)$. By setting $u=0$ we note immediately that

$$\phi_{t_1}(0) = \int_{-\infty}^{+\infty} f_{t_1}(x)\,dx = 1 \qquad (22)$$


The characteristic function may be defined for $n$ variables $u_1, u_2,\ldots,u_n$ analogously. Here

$$\phi_{t_1,t_2,\ldots,t_n}(u_1,u_2,\ldots,u_n) \equiv E^\beta\left\{\exp\left(i\sum_{k=1}^{n} X(t_k;\beta)\,u_k\right)\right\} = \int_{-\infty}^{+\infty}\!\cdots\int_{-\infty}^{+\infty} \exp\left(i\sum_{k=1}^{n} x_k u_k\right) f_{t_1,t_2,\ldots,t_n}(x_1,\ldots,x_n)\, dx_1\cdots dx_n \qquad (23)$$

Here are some of the basic properties of the characteristic function of a stochastic process:

1) $\left|\phi_{t_1,t_2,\ldots,t_n}(u_1,u_2,\ldots,u_n)\right| \le \phi_{t_1,t_2,\ldots,t_n}(0,0,\ldots,0) = 1$

2) $\overline{\phi_{t_1,t_2,\ldots,t_n}(\mathbf{u})} = \phi_{t_1,t_2,\ldots,t_n}(-\mathbf{u})$

3) $\phi_{t_1,t_2,\ldots,t_n}(u_1,\ldots,u_{n-1},0) = \phi_{t_1,t_2,\ldots,t_{n-1}}(u_1,\ldots,u_{n-1})$

4) $\phi_{t_1,t_2,\ldots,t_n}(\mathbf{u})$ is uniformly continuous for each $\mathbf{u}\in\mathbb{R}^n$

5) $\phi_{t_1,t_2,\ldots,t_n}(\mathbf{u})$ is a positive-definite function

The extension of $\phi$ to infinitely many variables involves the concept of a functional and will be considered in Chapter 3. There is an intimate connection between the moments of a s.p. and the derivatives of $\phi$ with respect to $\mathbf{u}$. Following the analysis made for random variables in Section 1.3 of Chapter 1 we have the similar result

$$R^{n_1 n_2\ldots n_m}_{XX\cdots X}(t_1,t_2,\ldots,t_m) = \frac{1}{i^{\,n_1+n_2+\cdots+n_m}}\cdot\left.\frac{\partial^{\,n_1+n_2+\cdots+n_m}\,\phi_{t_1,t_2,\ldots,t_m}}{\partial u_1^{n_1}\,\partial u_2^{n_2}\cdots\partial u_m^{n_m}}\right|_{\mathbf{u}=\mathbf{0}} \qquad (24)$$
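The moment relation (24) can be checked numerically. A minimal sketch (Python with NumPy, not part of the thesis), assuming $X(t_1;\beta)\sim N(0,1)$: the empirical characteristic function (21) is formed by Monte Carlo, and the second moment is recovered from a finite-difference second derivative at $u=0$.

```python
import numpy as np

# Empirical characteristic function of X(t1; b) ~ N(0,1), and the moment
# relation (24): R^2_X(t1) = (1/i^2) * d^2 phi / du^2 at u = 0.
rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)          # samples of X(t1; b)

def phi(u):
    """Empirical characteristic function, eq. (21)."""
    return np.mean(np.exp(1j * u * x))

assert abs(phi(0.0) - 1.0) < 1e-12        # property (22)
assert abs(phi(0.7) - np.exp(-0.7**2 / 2)) < 0.01   # exact N(0,1) value

h = 1e-2                                  # central finite difference step
second_deriv = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2
second_moment = (second_deriv / 1j**2).real          # eq. (24), n1 = 2
assert abs(second_moment - 1.0) < 0.02               # E{X^2} = 1
```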


2.2. Classification of Stochastic Processes

In what follows we assume the existence of a probability space $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$. The stochastic process $X:\Omega\rightarrow C(I)$ is defined on it and the induced probability space is $(C(I), \mathcal{U}(C(I)), \mathcal{P}_{C(I)})$.

We have seen that a s.p. $X(t;\beta)$, $t\in I$, is defined by a system of probability distribution functions. This system consists of

$$F_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = \mathcal{P}\left(\left\{X(t_1;\beta)<x_1\right\}\cap\cdots\cap\left\{X(t_n;\beta)<x_n\right\}\right)$$

for every $n$ and for every finite set $t_1, t_2, \ldots, t_n$ in the fixed index set $I$. In general, all probability distribution functions are needed to specify a stochastic process completely. It is not uncommon, however, that stochastic processes modelling physical phenomena are statistically simpler processes, simple in the sense that all the statistical information about a given process is contained in a relatively small number of probability distribution functions. The statistical complexity of a stochastic process is determined by the properties of its distribution functions and, in the analysis of stochastic processes, it is important to consider them in terms of these properties.

We consider in the remainder of this chapter the classification of stochastic processes according to the properties of their distribution functions. Presented below are two types of classification that are important in the analysis to follow, namely, classification based upon the memory of a process and classification based upon its statistical regularity over the index set $I$. We note that these two classifications are not mutually exclusive and, in fact, many physically important processes possess some properties of each.

2.2.1. Classification Based Upon Memory

In this classification, a s.p. $X(t;\beta)$, $t\in I$, is classified according to the manner in which its present state depends upon its past history. This classification is centred around one of the most important classes of stochastic processes, namely the Markov processes.

2.2.1.a. Purely Stochastic Process – Generalized Stochastic Process

According to memory properties, the simplest s.p. is one without memory. Such processes are characterized by the following

Definition 2.1 : A s.p. $X(t;\beta)$, $t\in I$ is purely stochastic when the random variables $X(t_1;\beta)$ and $X(t_2;\beta)$ are independent for every $t_1\neq t_2$.

From the above definition it is clear that the $n$th order distribution function can be written as a product of first order distributions.


$$F_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = \prod_{i=1}^{n} F_{t_i}(x_i) \qquad (1)$$
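The factorization (1) can be illustrated numerically. A minimal sketch (Python with NumPy, not part of the thesis): the values of a memoryless process at two distinct instants are approximated by independent samples, and the empirical joint distribution function is compared with the product of the one-point distribution functions.

```python
import numpy as np

# A purely stochastic process sampled at two instants: X(t1;b) and X(t2;b)
# are independent, so F_{t1,t2}(a1,a2) = F_{t1}(a1) * F_{t2}(a2), eq. (1).
rng = np.random.default_rng(7)
n = 400_000
x_t1 = rng.standard_normal(n)             # X(t1; b)
x_t2 = rng.standard_normal(n)             # X(t2; b), independent of X(t1; b)

a1, a2 = 0.5, -0.3
F_joint = np.mean((x_t1 < a1) & (x_t2 < a2))       # empirical F_{t1,t2}(a1,a2)
F_prod = np.mean(x_t1 < a1) * np.mean(x_t2 < a2)   # F_{t1}(a1) * F_{t2}(a2)
assert abs(F_joint - F_prod) < 0.005
```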

The probability structure of the purely stochastic process is thus characterized by the first order distribution only. Although mathematically simple, a continuous-parameter purely stochastic process is not physically realizable, because it implies absolute independence between the present and the past no matter how closely they are spaced. In physical situations we certainly expect $X(t_1;\beta)$ to be dependent on $X(t_2;\beta)$ when $t_1$ is sufficiently close to $t_2$. In practical applications, therefore, purely stochastic processes with a continuous parameter are considered to be limiting processes. In order to obtain a mathematically satisfactory theory of the process just described one has to introduce the concept of a generalized s.p. Let $C_c^\infty(I)$ be the space of all infinitely differentiable functions $\varphi(t): I\rightarrow\mathbb{R}$ vanishing identically outside a finite closed interval; it is a space of test functions. Let $\mathcal{M}$ be a Hilbert space of random variables defined on $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$ having finite second moment.

Definition 2.2 : A generalized s.p. on $I$ is a continuous linear mapping $\Phi$ from $C_c^\infty(I)$ to $\mathcal{M}$; the value of a generalized stochastic process $\Phi$ at $\varphi$ will be denoted by $\langle\varphi,\Phi\rangle$ or $\Phi(\varphi)$.

Example 2.3 : Let $X(t;\beta)$, $t\in I$ be a s.p. with finite second moment (a mapping from $I$ to $\mathcal{M}$). As we shall see in Chapter 4, for a very wide class of processes and for $\varphi(t)\in C_c^\infty(I)$ the following integral exists

$$\Phi_X(\varphi) = \langle\varphi,\Phi_X\rangle = \int_S \varphi(t)\,X(t;\beta)\,dt \qquad (2)$$

where $S$ is an arbitrary compact set in $I$ ($X(t;\beta)$ is a locally integrable stochastic process on $I$). The above mapping is linear and continuous. Therefore, every locally integrable s.p. on $I$ determines a generalized s.p. of the form presented above.

One of the important advantages of a generalized stochastic process is the fact that its derivative always exists and is itself a generalized s.p.

Definition 2.4 : The derivative $\Phi'$ with respect to $t$ of a generalized s.p. $\Phi$ on $C_c^\infty(I)$ is defined by means of the formula

$$\langle\varphi,\Phi'\rangle = -\left\langle\frac{d\varphi}{dt},\Phi\right\rangle \quad \text{for all } \varphi(t)\in C_c^\infty(I) \qquad (3)$$

Remark 2.5 : It can be verified that the mapping $\varphi\rightarrow\left\langle\frac{d\varphi}{dt},\Phi\right\rangle$ is a continuous linear mapping from $C_c^\infty(I)$ to $\mathcal{M}$. The derivatives $\Phi^{(\alpha)}$ for any given $\alpha\ge1$ are given by

$$\left\langle\varphi,\Phi^{(\alpha)}\right\rangle = (-1)^\alpha\left\langle\frac{d^\alpha\varphi}{dt^\alpha},\Phi\right\rangle \qquad (4)$$

Remark 2.6 : It is clear from Example 2.3 that the appropriate definition of $\langle\varphi,\Phi'_X\rangle$ is

$$\langle\varphi,\Phi'_X\rangle = -\int_S \varphi'(t)\,X(t;\beta)\,dt \qquad (5)$$

From the above it follows that an ordinary process $X(t;\beta)$ may also be regarded as always differentiable, provided its derivative is understood as a generalized process instead of an ordinary process. From the standpoint of correlation theory, a generalized s.p. $\Phi$ or $\Phi(\varphi)$ is characterized by its mean

$$m_\Phi(\varphi) = E^\beta\left\{\Phi(\varphi)\right\} \qquad (6)$$

and correlation operator

$$C(\varphi,\psi) = E^\beta\left\{\Phi(\varphi)\,\Phi(\psi)\right\}, \quad \varphi,\psi\in C_c^\infty(I) \qquad (7)$$

which is assumed to be linear and continuous with respect to $\varphi$ and $\psi$.

A typical example of a generalized stochastic process is Gaussian white noise $\xi$.

Definition 2.7 : A Gaussian white noise $\xi$ or $\xi(\varphi)$ is a generalized s.p. such that there exists a Wiener process $W(t;\beta)$, $t\in I$, satisfying $\xi(t;\beta) = W'(t;\beta)$, i.e.

$$\langle\varphi,\xi\rangle = \langle\varphi,W'\rangle = -\left\langle\frac{d\varphi}{dt},W\right\rangle \quad \text{for all } \varphi(t)\in C_c^\infty(I) \qquad (8)$$

Note that for every $\varphi(t)\in C_c^\infty(I)$, $\langle\varphi,\xi\rangle$ is a Gaussian r.v., and that for every finite number of functions $\varphi_1,\varphi_2,\ldots,\varphi_n$ in $C_c^\infty(I)$, the random variables $\langle\varphi_i,\xi\rangle$, $1\le i\le n$, are jointly Gaussian. This is the reason why $\xi$ is called a Gaussian white noise.

Theorem 2.8 : If $\xi$ is a Gaussian white noise, then

$$E^\beta\left\{\xi(\varphi)\right\} = 0 \quad \text{for all } \varphi(t)\in C_c^\infty(I) \qquad (9)$$

and

$$C_\xi(\varphi,\psi) = \int_T \varphi(t)\,\psi(t)\,dt, \quad \varphi,\psi\in C_c^\infty(I) \qquad (10)$$

Proof :

$$E^\beta\left\{\xi(\varphi)\right\} = E^\beta\left\{\langle\varphi,\xi\rangle\right\} = -E^\beta\left\{\int_T \frac{d\varphi}{dt}\,W(t)\,dt\right\} = -\int_T \frac{d\varphi}{dt}\,E^\beta\left\{W(t)\right\}dt = 0$$

since $E^\beta\{W(t)\} = 0$ for all $t\in I$ and $\varphi$ is arbitrary. To show the second equality, let $\varphi,\psi\in C_c^\infty(I)$. Then

$$C_\xi(\varphi,\psi) = E^\beta\left\{\langle\varphi,\xi\rangle\cdot\langle\psi,\xi\rangle\right\} = E^\beta\left\{\left(-\int_T \frac{d\varphi}{dt_1}\,W(t_1)\,dt_1\right)\cdot\left(-\int_T \frac{d\psi}{dt_2}\,W(t_2)\,dt_2\right)\right\} =$$

$$= \int_T\int_T \frac{d\varphi}{dt_1}\,\frac{d\psi}{dt_2}\,E^\beta\left\{W(t_1)\,W(t_2)\right\}dt_1\,dt_2 = \int_T\int_T \frac{d\varphi}{dt_1}\,\frac{d\psi}{dt_2}\,\min(t_1,t_2)\,dt_1\,dt_2$$

After integrating by parts we obtain the desired equality.


Remark 2.9 : It is worth noting that formula (10) can be put in the form

$$C_\xi(\varphi,\psi) = \int_T\int_T \delta(t-s)\,\varphi(t)\,\psi(s)\,ds\,dt, \quad \varphi,\psi\in C_c^\infty(I) \qquad (11)$$

which implies that the correlation function of the Gaussian white noise is the generalized function $\delta(t-s)$.
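Theorem 2.8 can be checked numerically through the defining pairing $\langle\varphi,\xi\rangle = -\langle\varphi',W\rangle$. A minimal sketch (Python with NumPy, not part of the thesis), taking $\varphi(t)=\sin(\pi t)$ on $[0,1]$ (which vanishes at both endpoints), so that the exact variance is $\int_0^1 \sin^2(\pi t)\,dt = 1/2$:

```python
import numpy as np

# Pair white noise with a test function via the Wiener process, eq. (8):
# <phi, xi> = -int phi'(t) W(t) dt. By eq. (9)-(10) the pairing has zero
# mean and variance int phi(t)^2 dt = 1/2 for phi(t) = sin(pi t) on [0,1].
rng = np.random.default_rng(2)
n_paths, n_steps = 5_000, 500
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

# Wiener paths: cumulative sums of independent N(0, dt) increments.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

dphi = np.pi * np.cos(np.pi * t)          # derivative of the test function
integrand = dphi * W
# Trapezoidal rule for -int phi'(t) W(t) dt on each path.
pairing = -0.5 * dt * (integrand[:, :-1] + integrand[:, 1:]).sum(axis=1)

assert abs(pairing.mean()) < 0.03         # eq. (9): zero mean
assert abs(pairing.var() - 0.5) < 0.04    # eq. (10): int sin^2(pi t) dt = 1/2
```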

2.2.1.b. Markov Stochastic Process

The Markov s.p. is a member of the wide class of s.p.'s which are characterized by the second order distribution function only. More specifically, for a Markov s.p., the random variables defined for all values of $t>\tau$ are independent of the random variables defined for $t<\tau$, if we know the value of the stochastic process at time $\tau$. The above property is expressed in the following

Definition 2.10 : A s.p. $X(t;\beta)$, $t\in I$ is Markov if, for every $n$ and every $t_1<t_2<\cdots<t_n$ in $I$, we have

$$F_{t_n}\left(x_n \mid x_{n-1},t_{n-1};\, x_{n-2},t_{n-2};\,\ldots;\, x_1,t_1\right) = F_{t_n}\left(x_n \mid x_{n-1},t_{n-1}\right) \qquad (12)$$

or equivalently,

$$\mathcal{P}^X_{\Pi(t_n)}\left(B \mid x_{n-1},t_{n-1};\,\ldots;\, x_1,t_1\right) = \mathcal{P}^X_{\Pi(t_n)}\left(B \mid x_{n-1},t_{n-1}\right), \quad B\in\mathcal{U}(\mathbb{R}) \qquad (13)$$

where $\Pi(t_n)$ is the projection operator $\Pi(t_n)[x(\cdot)] = x(t_n)$ for every $t_n$ (see Example 1.6 of Section 2.1).

As we can easily see, the conditional distribution of ( );nX t β for given values of

( ) ( )1 2; , ; ,X t X tβ β ( )1, ;nX t β−… depends only at the most recent known value ( )1 ;nX t β− .

The above definition implies that the basic characteristics of a Markov Process are described by a function

( ) ( ), ; , | ,M tF x t y s F x y s= (14)

The function ( ), ; ,MF x t y s is called a transition probability. It gives the probability of a

transition of a process from a state y at time s to one of the states for which ( ) ;X t xβ < at

time t . We can easily see that for fixed , ,t s y the transition probability ( ), ; ,MF x t y s is a

probability distribution function.

Proposition 2.12 : The probability structure of a Markov s.p. is completely characterized by the second order probability distribution functions.


Proof : To see that a Markov process is completely specified by its first and second distribution functions, we start with the general relation

$$F_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = F_{t_n}\left(x_n \mid x_{n-1},t_{n-1};\ldots; x_1,t_1\right)\cdot F_{t_{n-1}}\left(x_{n-1} \mid x_{n-2},t_{n-2};\ldots; x_1,t_1\right)\cdots F_{t_2}\left(x_2 \mid x_1,t_1\right)\cdot F_{t_1}(x_1) \qquad (15)$$

Using the definition of the Markov s.p. we immediately see that for $n\ge3$ and $t_1<t_2<\cdots<t_n$

$$F_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = F_{t_1}(x_1)\cdot\prod_{i=1}^{n-1} F_{t_{i+1}}\left(x_{i+1} \mid x_i,t_i\right) \qquad (16)$$

But,

$$F_{t_{i+1}}\left(x_{i+1} \mid x_i,t_i\right) = \frac{F_{t_i,t_{i+1}}(x_i,x_{i+1})}{F_{t_i}(x_i)} \qquad (17)$$

Consequently,

$$F_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = \frac{\prod_{i=1}^{n-1} F_{t_i,t_{i+1}}(x_i,x_{i+1})}{\prod_{i=2}^{n-1} F_{t_i}(x_i)} \qquad (18)$$

Proposition 2.13 : Let $t_1<t_2<t_3$. For the transition probability density function the following Smoluchowski–Chapman–Kolmogorov (SCK) equation holds,

$$f_M(x_3,t_3;x_1,t_1) = \int_{-\infty}^{+\infty} f_M(x_3,t_3;x_2,t_2)\, f_M(x_2,t_2;x_1,t_1)\,dx_2 \qquad (19)$$

or equivalently,

$$F_M(x_3,t_3;x_1,t_1) = \int_{-\infty}^{+\infty} F_M(x_3,t_3;x_2,t_2)\,dF_M(x_2,t_2;x_1,t_1) \qquad (20)$$

Proof : Consider the general relation

$$f_{t_1,t_3}(x_1,x_3) = \int_{-\infty}^{+\infty} f_{t_1,t_2,t_3}(x_1,x_2,x_3)\,dx_2 \qquad (21)$$

Let $t_1<t_2<t_3$. Then using the relation

$$f_{t_1,t_2,t_3}(x_1,x_2,x_3) = f_{t_1}\left(x_1 \mid x_2,t_2;\, x_3,t_3\right)\cdot f_{t_2}\left(x_2 \mid x_3,t_3\right)\cdot f_{t_3}(x_3)$$

and equation (15) we clearly have

$$f_{t_1,t_2,t_3}(x_1,x_2,x_3) = f_{t_1}\left(x_1 \mid x_2,t_2\right)\cdot f_{t_2}\left(x_2 \mid x_3,t_3\right)\cdot f_{t_3}(x_3)$$

Then, making use of equation (21), we obtain the desired equation.

In the analysis of Markov processes and in the modelling of real random phenomena we frequently deal with so-called stationary transition probabilities, characterizing a homogeneous Markov s.p.
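The SCK equation (19) can be checked numerically for a concrete Markov process. A minimal sketch (Python with NumPy, not part of the thesis), using the Gaussian transition density of the standard Wiener process, $f_M(x,t;y,s) = \frac{1}{\sqrt{2\pi(t-s)}}e^{-(x-y)^2/2(t-s)}$:

```python
import numpy as np

# SCK equation (19) on a grid: integrating the product of the two partial
# transition densities over the intermediate state x2 must reproduce the
# direct transition density from (x1, t1) to (x3, t3).
def f_M(x, t, y, s):
    """Gaussian transition density of the standard Wiener process."""
    return np.exp(-(x - y) ** 2 / (2 * (t - s))) / np.sqrt(2 * np.pi * (t - s))

t1, t2, t3 = 0.0, 0.4, 1.0
x1, x3 = 0.2, -0.5
x2 = np.linspace(-10, 10, 4001)           # integration grid for the middle state

lhs = f_M(x3, t3, x1, t1)
rhs = np.sum(f_M(x3, t3, x2, t2) * f_M(x2, t2, x1, t1)) * (x2[1] - x2[0])
assert abs(lhs - rhs) < 1e-6
```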


Definition 2.14 : A Markov s.p. $X(t;\beta)$, $t\in I$ is said to be homogeneous (with respect to time) if for arbitrary $s\in I$ and $t\in I$, $s<t$, the transition probability depends only on the difference $\tau = t-s$, that is

$$F_M(x,t;y,s) = F_{HM}(x,y;\tau) \qquad (22)$$

The function $F_{HM}(x,y;\tau)$ determines the probability of transition from a state $y$ at time $s$ to one of the states for which $X(t;\beta)<x$ at time $t$. This probability does not depend on the position of $s$, but only on the difference $t-s$. In this case the Smoluchowski–Chapman–Kolmogorov equation takes the form

$$F_{HM}(x_1,x_3;\tau_1+\tau_2) = \int_{-\infty}^{+\infty} F_{HM}(x_1,x_2;\tau_1)\,dF_{HM}(x_2,x_3;\tau_2) \qquad (23)$$

Preposition 2.15 : Let the homogeneous Markov s.p. ( );X t β , t I∈ with transition probability

( ), ;HMF x yτ . For every bounded, measurable, function f defined on the state space R ⊂ (the set of possible values of the s.p.) of the process we define the operator tT acting on the functions f as

( ) ( ) ( ), ;t HMR

T f f x dF x t y⎡ ⎤ =⎣ ⎦ ∫i (24)

then, a) Operators tT are linear, positive and continuous. b) Operators tT form a semi–group, i.e.

, 0t s t s s tT T T T T s t+ = = > . (25)

Proof: For the proof of a) we refer to (SOBCZYK, K., Stochastic Differential Equations. p. 31). Part b) can be proved using the form (23) of Smoluchowski-Chapman-Kolmongorov equation. 2.1.3.c. Processes with Independent Increments Mathematical models of numerous real phenomena can be described by stochastic processes whose increments are independent random variables.

Definition 2.16 : A s.p. $X(t;\beta)$, $t\in I$ is said to be an independent-increment process if, for every $n$ and every $t_1<t_2<\cdots<t_n$ in $I$, the random variables

$$X(t_1;\beta),\quad X(t_2;\beta)-X(t_1;\beta),\quad \ldots,\quad X(t_n;\beta)-X(t_{n-1};\beta)$$

are independent.


The definition above implies that all finite-dimensional distributions of the process $X(t;\beta)$ are completely determined by the distribution of the random variable $X(t;\beta)$ for each $t$ and the distributions of the increments $X(t_2;\beta)-X(t_1;\beta)$ for all possible values of $t_1,t_2$ such that $t_1<t_2$. However, as we shall show in the next proposition, the distributions of the increments $X(t_2;\beta)-X(t_1;\beta)$ can themselves be expressed in terms of the distributions of $X(t;\beta)$. So we have the following.

Remark 2.17 : For the special case of an independent-increment process where $X(0;\beta)=0$ almost surely, we will have

$$X(t;\beta) = \left(X(t;\beta)-X(s;\beta)\right) + \left(X(s;\beta)-X(0;\beta)\right)$$

where $X(t;\beta)-X(s;\beta)$ and $X(s;\beta)-X(0;\beta)$ are independent random variables. Consequently the stochastic process $X(t;\beta)$ verifies the Markov property.

Proposition 2.18 : Let $X(t;\beta)$, $t\in I$ be an independent-increment s.p. Then all the finite-dimensional distributions of the process $X(t;\beta)$ can be expressed in terms of the one dimensional distributions of the random variables $X(t;\beta)$, $t$ fixed.

Proof : We will prove the above assertion in two steps. First we have, for the general finite dimensional characteristic function,

$$\phi_{t_1,t_2,\ldots,t_n}(u_1,u_2,\ldots,u_n) \equiv E^\beta\left\{\exp\left(i\sum_{k=1}^{n} X(t_k;\beta)\,u_k\right)\right\} =$$

$$= E^\beta\left\{\exp\left(i\left(\sum_{k=1}^{n}u_k\right)X(t_1;\beta) + i\left(\sum_{k=2}^{n}u_k\right)\left[X(t_2;\beta)-X(t_1;\beta)\right] + \cdots + i\,u_n\left[X(t_n;\beta)-X(t_{n-1};\beta)\right]\right)\right\} =$$

$$= \phi_{t_1}(u_1+u_2+\cdots+u_n)\cdot\phi_{t_1,t_2}(u_2+\cdots+u_n)\cdots\phi_{t_{n-1},t_n}(u_n) \qquad (26)$$

where $\phi_{t_k,t_{k+1}}$ denotes the characteristic function of the increment $X(t_{k+1};\beta)-X(t_k;\beta)$.

Let us now take two arbitrary time instants $t_1<t_2$. It is clear that $X(t_2;\beta) = X(t_1;\beta) + \left[X(t_2;\beta)-X(t_1;\beta)\right]$, i.e. $X(t_2;\beta)$ is a sum of two independent random variables. Hence the following relation holds for the characteristic functions,

$$\phi_{t_1,t_2}(u) = \frac{\phi_{t_2}(u)}{\phi_{t_1}(u)} \qquad (27)$$

for each $t_1<t_2$. Using eq. (26), we have

$$\phi_{t_1,t_2,\ldots,t_n}(u_1,u_2,\ldots,u_n) = \phi_{t_1}(u_1+u_2+\cdots+u_n)\cdot\frac{\phi_{t_2}(u_2+\cdots+u_n)}{\phi_{t_1}(u_2+\cdots+u_n)}\cdots\frac{\phi_{t_n}(u_n)}{\phi_{t_{n-1}}(u_n)} \qquad (28)$$

Remark 2.19 : It should be noticed that the one dimensional distributions of an independent-increment s.p. cannot be given arbitrarily. The one dimensional characteristic functions have to be such that $\phi_{t_1,t_2}(u)$, given by (27), is also a characteristic function.
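Relation (27) can be verified for a concrete independent-increment process. A minimal sketch (Python with NumPy, not part of the thesis), using the standard Wiener process, whose one-point characteristic function is $\phi_t(u)=e^{-tu^2/2}$; the ratio $\phi_{t_2}(u)/\phi_{t_1}(u)$ is compared against a Monte Carlo estimate of the characteristic function of the increment $W(t_2;\beta)-W(t_1;\beta)$:

```python
import numpy as np

# Eq. (27) for the standard Wiener process: phi_t(u) = exp(-t u^2 / 2),
# so phi_{t2}(u) / phi_{t1}(u) must equal the characteristic function of
# the increment W(t2) - W(t1) ~ N(0, t2 - t1).
rng = np.random.default_rng(3)
t1, t2, u = 0.4, 1.0, 0.9

ratio = np.exp(-t2 * u**2 / 2) / np.exp(-t1 * u**2 / 2)   # rhs of eq. (27)

increments = rng.normal(0.0, np.sqrt(t2 - t1), size=500_000)
phi_inc = np.mean(np.exp(1j * u * increments))            # empirical char. fn.
assert abs(phi_inc - ratio) < 0.01
```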

An important class of independent-increment s.p.'s is formed by the processes with orthogonal increments. The Riemann–Stieltjes integrals with respect to processes with orthogonal increments are particularly important in the study of stationary stochastic processes. Moreover, as we shall prove in the sequel, a stochastic process has orthogonal increments if and only if it is a martingale process (Proposition 2.39).

Definition 2.20 : A s.p. $X(t;\beta)$, $t\in I$ is said to be an orthogonal increment process if, for every $t<s\le u<v$ in $I$,

$$E^\beta\left\{\left[X(t;\beta)-X(s;\beta)\right]^2\right\} < \infty \qquad (29)$$

and

$$E^\beta\left\{\left[X(t;\beta)-X(s;\beta)\right]\cdot\left[X(u;\beta)-X(v;\beta)\right]\right\} = 0 \qquad (30)$$

Remark 2.21 : From the above definition we can easily see that an independent-increment s.p. that fulfils eq. (29) and has constant mean value, i.e. $m_X(t) = \text{const.}$, is also an orthogonal increment process.

We now define the function

$$F(t;t_0) = \begin{cases} E^\beta\left\{\left[X(t;\beta)-X(t_0;\beta)\right]^2\right\}, & t\ge t_0 \\[4pt] -E^\beta\left\{\left[X(t;\beta)-X(t_0;\beta)\right]^2\right\}, & t< t_0 \end{cases} \qquad (31)$$

Clearly, we have $F(t;t_0) = -F(t_0;t)$ for every $t,t_0\in I$.

Theorem 2.22 : Let $X(t;\beta)$, $t\in I$ be an orthogonal increment s.p. Then for the variance of the increment $X(t_2;\beta)-X(t_1;\beta)$, for every $t_1<t_2$ with $t_1,t_2\in I$, the following relation holds

$$F(t_2;t) = F(t_1;t) + E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\}, \quad t\in I \qquad (32)$$

Proof : Without any loss of generality we can assume $t_1<t_2$. There are then three cases for the position of the reference point $t$: (I) $t<t_1<t_2$, (II) $t_1<t_2<t$ and (III) $t_1<t<t_2$. In the sequel we denote $F(t_i;t)$ by $F(t_i)$.

Case I ($t<t_1<t_2$) : Writing $X(t_2;\beta)-X(t;\beta) = \left[X(t_2;\beta)-X(t_1;\beta)\right] + \left[X(t_1;\beta)-X(t;\beta)\right]$ and expanding the square,

$$F(t_2) = E^\beta\left\{\left[X(t_2;\beta)-X(t;\beta)\right]^2\right\} = E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\} + E^\beta\left\{\left[X(t_1;\beta)-X(t;\beta)\right]^2\right\} + 2\,E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]\cdot\left[X(t_1;\beta)-X(t;\beta)\right]\right\}$$

The cross term vanishes by the orthogonality of the increments (30), so that $F(t_2) = F(t_1) + E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\}$.

Case II ($t_1<t_2<t$) : Here $F(t_1) = -E^\beta\left\{\left[X(t_1;\beta)-X(t;\beta)\right]^2\right\}$. Writing $X(t_1;\beta)-X(t;\beta) = \left[X(t_1;\beta)-X(t_2;\beta)\right] + \left[X(t_2;\beta)-X(t;\beta)\right]$ and using again the orthogonality of the increments,

$$F(t_1) = -E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\} - E^\beta\left\{\left[X(t_2;\beta)-X(t;\beta)\right]^2\right\} = F(t_2) - E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\}$$

Case III ($t_1<t<t_2$) : Here

$$F(t_2) - F(t_1) = E^\beta\left\{\left[X(t_2;\beta)-X(t;\beta)\right]^2\right\} + E^\beta\left\{\left[X(t;\beta)-X(t_1;\beta)\right]^2\right\}$$

and, since the cross term $E^\beta\left\{\left[X(t_2;\beta)-X(t;\beta)\right]\cdot\left[X(t;\beta)-X(t_1;\beta)\right]\right\}$ vanishes by (30), this sum equals

$$E^\beta\left\{\left[\left(X(t_2;\beta)-X(t;\beta)\right) + \left(X(t;\beta)-X(t_1;\beta)\right)\right]^2\right\} = E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\}$$

We have thus completed the proof.

Hence, we see that a change of the parameter $t_0$ changes the function $F(t;t_0)$ only by an additive constant. In the sequel we will denote $F(t;t_0)$ by $F(t)$ and we will call it the characteristic of $X(t;\beta)$. From the above theorem we can easily see that the variance of the increment $X(t_2;\beta)-X(t_1;\beta)$ can be described by

$$E^\beta\left\{\left[X(t_2;\beta)-X(t_1;\beta)\right]^2\right\} = F(t_2) - F(t_1), \quad t_1<t_2,\ t_1,t_2\in I \qquad (33)$$

Thus, for every orthogonal increment process we have the following.


Proposition 2.23 [Levy property] : Let $X(t;\beta)$, $t\in I$ be a s.p. with orthogonal increments, $a=t_1<t_2<\cdots<t_n=b$ an arbitrary partition of the interval $[a,b]$ and $\Delta = \max_i\left(t_{i+1}-t_i\right)$. Then,

$$\underset{\substack{\Delta\to0 \\ n\to\infty}}{\mathrm{l.i.m.}}\ \sum_{i=1}^{n-1}\left[X(t_{i+1};\beta)-X(t_i;\beta)\right]^2 = F(b) - F(a) \qquad (34)$$
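The Levy property (34) can be illustrated numerically. A minimal sketch (Python with NumPy, not part of the thesis), using the standard Wiener process on $[a,b]=[0,1]$; its characteristic is $F(t)=t$, since $E^\beta\{[W(t_2;\beta)-W(t_1;\beta)]^2\}=t_2-t_1$, so the sum of squared increments over a fine partition should concentrate around $F(b)-F(a)=1$:

```python
import numpy as np

# Quadratic variation of the standard Wiener process on [0, 1]: the sum of
# squared increments over a fine uniform partition has mean F(1) - F(0) = 1
# and a variance that shrinks with the mesh (mean square convergence).
rng = np.random.default_rng(4)
n_paths, n_steps = 1_000, 5_000
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))

quad_var = np.sum(dW**2, axis=1)          # sum_i [W(t_{i+1}) - W(t_i)]^2
assert abs(quad_var.mean() - 1.0) < 0.005
assert quad_var.var() < 1e-3              # concentrates as the mesh shrinks
```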

Now we are going to describe some specific cases of the s.p.'s described above. The most important examples of independent-increment s.p.'s are the Wiener process and the Poisson process. For a complete study of processes with independent increments we refer to PUGACHEV, V.S. & SINITSYN, (Stochastic Differential Systems).

a) Wiener Process

Definition 2.24 : A s.p. $W(t;\beta)$, $t\in I$ is called a Wiener process or Brownian motion process if:

a) $\mathcal{P}^X_{\Pi(0)}\left(\{0\}\right) = \mathcal{P}_{C(I)}\left(\left\{x(\cdot)\in C(I) : x(0)=0\right\}\right) = 1$

b) for every $n$ and every $t_1<t_2<\cdots<t_n$ in $I$, the increments $W(t_2;\beta)-W(t_1;\beta),\ldots, W(t_n;\beta)-W(t_{n-1};\beta)$ are independent

c) for arbitrary $t$ and $h>0$ the increment $W(t+h;\beta)-W(t;\beta)$ has a Gaussian distribution with

$$E^\beta\left\{W(t+h;\beta)-W(t;\beta)\right\} = \mu h$$

$$E^\beta\left\{\left[W(t+h;\beta)-W(t;\beta)\right]^2\right\} = \sigma^2 h$$

where $\mu$ is a real constant, called the drift coefficient, and $\sigma^2$ a positive constant, called the variance. Without restriction one can assume that $\mu=0$ and $\sigma^2=1$. In what follows we consider this case (the standard Wiener process).

Proposition 2.25 : The standard Wiener s.p. $W(t;\beta)$, $t\in I$ has the following properties:

a) $W(t;\beta)$ is a centred, second-order, Gaussian s.p., i.e.

$$E^\beta\left\{W(t;\beta)\right\} = 0 \quad \text{and} \quad E^\beta\left\{W^2(t;\beta)\right\} < \infty \qquad (35)$$

This process is not stationary and its covariance function is written as

$$C_{WW}(t_1,t_2) = E^\beta\left\{W(t_1;\beta)\cdot W(t_2;\beta)\right\} = \min(t_1,t_2), \quad t_1,t_2\in I \qquad (36)$$

b) $W(t;\beta)$ is a Markov s.p. whose system of transition probabilities is homogeneous and, for $\tau>0$, is written as

$$F_{HM}(x,y;\tau) = \frac{1}{\sqrt{2\pi\tau}}\int_{-\infty}^{x}\exp\left\{-\frac{(z-y)^2}{2\tau}\right\}dz \qquad (37)$$

c) $W(t;\beta)$ is mean-square continuous on $I$.

d) The joint probability density function has the form


$$f^{\mathbf{W}}_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi(t_i-t_{i-1})}}\cdot\exp\left\{-\frac{1}{2}\sum_{i=1}^{n}\frac{(x_i-x_{i-1})^2}{t_i-t_{i-1}}\right\} \qquad (38)$$

where $t_0=0$ and $x_0=0$.

Proof : We only give some elements of the proof herein. For more details we refer to SOIZE, C., (Mathematical Methods in Signal Analysis). For the proof of point a) we can write, for $t>0$,

$$W(t;\beta) = \left(W(t;\beta)-W(0;\beta)\right) + W(0;\beta)$$

We then use Definition 2.24. Point b) can be proved by direct application of Remark 2.17, using the Gaussian structure proved at point a). The proof of point c) is obtained by calculating the mean square norm of the difference, i.e. by direct application of the definition of m.s.-continuity. For point d) we notice that the vector $\left(W(t_1;\beta),\ldots,W(t_n;\beta)\right)$ is a non-degenerate linear transformation of the Gaussian vector $\left(W(t_1;\beta)-W(t_0;\beta),\ldots,W(t_n;\beta)-W(t_{n-1};\beta)\right)$. For the last vector the density is given by

$$f^{\Delta\mathbf{W}}_{t_1,t_2,\ldots,t_n}(x_1,x_2,\ldots,x_n) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi(t_i-t_{i-1})}}\cdot\exp\left\{-\frac{1}{2}\sum_{i=1}^{n}\frac{x_i^2}{t_i-t_{i-1}}\right\} \qquad (39)$$

Hence after some calculus we get the desired result.
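The covariance structure (36) can be checked by simulation. A minimal sketch (Python with NumPy, not part of the thesis): standard Wiener paths are built from independent Gaussian increments (Definition 2.24 with $\mu=0$, $\sigma^2=1$) and the sample covariance is compared with $\min(t_1,t_2)$:

```python
import numpy as np

# Standard Wiener paths on a uniform grid of [0, 1]; eq. (35) gives a zero
# mean and eq. (36) gives C_WW(t1, t2) = min(t1, t2).
rng = np.random.default_rng(5)
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

i1, i2 = 29, 79                           # grid indices of t1 = 0.3, t2 = 0.8
t1, t2 = (i1 + 1) * dt, (i2 + 1) * dt
C = np.mean(W[:, i1] * W[:, i2])          # sample E{W(t1) W(t2)}
assert abs(np.mean(W[:, i1])) < 0.01      # centred process, eq. (35)
assert abs(C - min(t1, t2)) < 0.01        # eq. (36)
```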

The fundamental properties of the realizations of a Wiener s.p. are summarized in the following

Theorem 2.26 : Let $W(t;\beta)$, $t\in I$ be a Wiener s.p. Almost all realizations (sample functions) of $W(t;\beta)$ are continuous, nowhere differentiable and have unbounded variation in every finite interval.

Proof : The continuity follows from the Kolmogorov criterion, which will be formulated in Section 2/Chapter 4 (see Example 2.8/Chapter 4). The non-differentiability for fixed $t$ can be made clear as follows: the distribution of the difference quotient

$$\frac{1}{h}\left[W(t+h;\beta)-W(t;\beta)\right]$$

is the Gaussian distribution $\mathcal{N}\!\left(0,\frac{1}{h}\right)$, which diverges as $h\to0$; therefore this quotient cannot converge with positive probability to a finite random variable in any probabilistic sense. The last assertion of the theorem follows from Proposition 2.23 and some additional considerations connected with almost sure convergence.

b) Poisson Process

Definition 2.27 : A s.p. $N(t;\beta)$, $t\in I$ is called a non-homogeneous Poisson process if

a) the s.p. $N(t;\beta)$ has independent increments

b) $\mathcal{P}^N_{\Pi(0)}\left(\{0\}\right) = \mathcal{P}_{D(I)}\left(\left\{N(\cdot)\in D(I) : N(0)=0\right\}\right) = 1$

Page 49: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

2.2. CLASSIFICATION OF STOCHASTIC PROCESSES

c) for arbitrary $t_1<t_2$ the difference $N(t_2;\beta)-N(t_1;\beta)$ has a Poisson distribution; that is, there exists a function $\lambda(t)$ such that for each integer $k\geq 0$

$$\mathcal{P}\left(N(t_2;\beta)-N(t_1;\beta)=k\right)=\frac{\left[\lambda(t_2)-\lambda(t_1)\right]^{k}}{k!}\;e^{-\left[\lambda(t_2)-\lambda(t_1)\right]}$$

If $\lambda(t)=\nu t$, $\nu>0$, then the process $N(t;\beta)$ is called a homogeneous Poisson process.

It is clear that, for each $t\in I$, $N(t;\beta)$ is a random variable with Poisson distribution

$$\mathcal{P}\left(N(t;\beta)=k\right)=\frac{\left[\lambda(t)\right]^{k}}{k!}\;e^{-\lambda(t)} \qquad (40)$$
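A numerical sketch of (40) for the homogeneous case $\lambda(t)=\nu t$ (illustrative; the rate, horizon, and sample size are arbitrary choices): counting the points of a stream with independent exponential gaps up to time $t$ reproduces the Poisson probabilities.

```python
import math
import numpy as np

# Illustrative check of Eq. (40) with lambda(t) = nu*t: a stream with
# independent Exp(nu) interarrival gaps is counted up to time t, and the
# counts are compared with the Poisson probabilities.
rng = np.random.default_rng(0)
nu, t, n_samples = 2.0, 3.0, 100_000

gaps = rng.exponential(1.0 / nu, size=(n_samples, 40))   # 40 >> nu*t events suffice
arrival_times = np.cumsum(gaps, axis=1)
counts = (arrival_times <= t).sum(axis=1)                # samples of N(t)

for k in range(9):
    empirical = float(np.mean(counts == k))
    theory = (nu * t) ** k / math.factorial(k) * math.exp(-nu * t)
    print(k, round(empirical, 4), round(theory, 4))
```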

Proposition 2.28 : Let $N(t;\beta)$ be a Poisson process with mean value $\lambda(t)$. Then $N(t;\beta)$ is a Markov process with transition probability

$$\mathcal{P}\left(N(t;\beta)-N(s;\beta)=k\right)=\frac{\left[\lambda(t)-\lambda(s)\right]^{k}}{k!}\;e^{-\left[\lambda(t)-\lambda(s)\right]} \qquad (41)$$

For the case $\lambda(t)=t$ we have a homogeneous Poisson process, and the homogeneous transition probability takes the form

$$\mathcal{P}\left(N(t;\beta)-N(s;\beta)=k\right)=\frac{(t-s)^{k}}{k!}\;e^{-(t-s)} \qquad (42)$$

Proof : Direct application of Remark 2.17 and equation (14).

The following assertions characterize the basic properties of the Poisson process.

Theorem 2.29 : For a homogeneous Poisson process with intensity $\nu$, the interarrival times $t_1,t_2,\dots,t_n$ are independent and identically distributed random variables, their common distribution being exponential with parameter $\nu$.

Proof : For the proof we refer to SNYDER, D.L., (Random Point Processes).

Remark 2.30 : It is worth adding, however, that the interarrival times of an inhomogeneous Poisson process are not independent.
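Theorem 2.29 can be sketched numerically (an illustrative check; the rate, horizon, and seed are arbitrary): a homogeneous Poisson stream on $[0,T]$ is generated by the standard conditional-uniform construction, and the interarrival gaps are then examined for the exponential mean, the exponential tail, and (approximate) independence.

```python
import numpy as np

# Given N(T), the arrival times of a homogeneous Poisson process on [0, T]
# are distributed as uniform order statistics; the resulting gaps should be
# (approximately) i.i.d. Exp(nu): mean 1/nu and tail P(gap > x) = exp(-nu*x).
rng = np.random.default_rng(0)
nu, T = 1.5, 10_000.0

n_events = rng.poisson(nu * T)                     # N(T) ~ Poisson(nu*T)
arrivals = np.sort(rng.uniform(0.0, T, n_events))  # conditional-uniform construction
gaps = np.diff(arrivals)                           # interarrival times

lag1 = np.corrcoef(gaps[:-1], gaps[1:])[0, 1]      # near 0 (independence)
print(round(float(gaps.mean()), 4), 1.0 / nu)      # mean ≈ 1/nu
print(round(float(np.mean(gaps > 1.0)), 4), round(np.exp(-nu), 4))
print(round(float(lag1), 4))
```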

The fundamental properties of the realizations of a Poisson s.p. are summarized in the following theorem.

Theorem 2.31 : Almost all sample functions of the Poisson process are non-decreasing with integer increments; they increase by jumps equal to one and the number of jumps in every finite interval is finite.

Proof : For the proof we refer to SNYDER, D.L., (Random Point Processes).


2.1.3.d. Martingales

An important class of processes is formed by martingales. We first recall the definition of a current of $\sigma$-algebras of a probability space. Let $I$ denote the set of real numbers corresponding to the times at which the stochastic experiments are carried out.

Definition 2.32 : Let $\left(\Omega,\mathcal{U}(\Omega),\mathcal{P}\right)$ be a probability space. A monotonically increasing family of $\sigma$-algebras $\mathcal{U}_t(\Omega)$, $t\in I$, with

a) $\mathcal{U}_t(\Omega)\subset\mathcal{U}(\Omega)$,

b) $\mathcal{U}_{t_1}(\Omega)\subset\mathcal{U}_{t_2}(\Omega)$ if $t_1<t_2$,

is called a current of $\sigma$-algebras.

Definition 2.33 : Let the s.p. $X(t;\beta)$, $t\in I$, be defined on the probability space $\left(\Omega,\mathcal{U}(\Omega),\mathcal{P}\right)$, equipped with a current of $\sigma$-algebras $\mathcal{U}_t(\Omega)$, $t\in I$. The s.p. $X(t;\beta)$, $t\in I$, is adapted to the current of $\sigma$-algebras $\mathcal{U}_t(\Omega)$, $t\in I$, iff the random variable $X(t;\beta)$ is $\mathcal{U}_t(\Omega)$-measurable for every $t\in I$.

Remark 2.34 : It follows from the definition that $\mathcal{U}_t(\Omega)$ always contains the $\sigma$-algebra generated by the random variables $\left\{X(s;\beta),\; s<t\right\}$.

Remark 2.35 : For every real-valued s.p. $X(t;\beta)$, $t\in I$, one current of $\sigma$-algebras $\mathcal{U}^X_t(\Omega)$ is the following:

$$\mathcal{U}^X_t(\Omega)=\sigma\left(X(s;\cdot)^{-1}(A)\;:\;s\in[0,t],\;A\in\mathcal{U}(\mathbb{R})\right) \qquad (43)$$

where $\sigma(\mathcal{A})$ denotes the $\sigma$-algebra generated by the collection $\mathcal{A}$, and the inverse image is taken with respect to the random argument. As we can understand, $\mathcal{U}^X_t(\Omega)$ represents a $\sigma$-algebra of the physical events as they have evolved up to time $t$, i.e. $\mathcal{U}^X_t(\Omega)$ characterizes the collection of events representing the information known at time $t$. The above remark can be extended to the case of a vector- or complex-valued s.p., with the appropriate change of the Borel sets $\mathcal{U}(\mathbb{R})$.

Definition 2.36 : Let the s.p. $X(t;\beta)$, $t\in I$, be $\mathcal{U}_t(\Omega)$-measurable. The above s.p. is called a martingale iff

$$E^{\beta}\left(X(t;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=E^{\beta}\left(X(s;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=X(s;\beta) \qquad (44)$$

for all $s,t\in I$ with $s\leq t$.

Remark 2.37 : Often the term martingale refers to processes for which relation (44) holds for the family of $\sigma$-algebras generated by the process $X(t;\beta)$ itself (Remark 2.34), which means that the history of the process $X(t;\beta)$ prior to the instant $t$ is chosen as the conditioning information.

Example 2.38 : A Wiener s.p. $W(t;\beta)$, $t\in I$, is a martingale, since for $s\leq t$

$$E^{\beta}\left(W(t;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=E^{\beta}\left(W(s;\beta)+\left[W(t;\beta)-W(s;\beta)\right]\,|\,\mathcal{U}_s(\Omega)\right)=$$
$$=W(s;\beta)+E^{\beta}\left(W(t;\beta)-W(s;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=$$
$$=W(s;\beta)+E^{\beta}\left(W(t;\beta)-W(s;\beta)\right)=W(s;\beta)$$

since the random variable $W(t;\beta)-W(s;\beta)$ is independent of all events of the $\sigma$-algebra $\mathcal{U}_s(\Omega)$.
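The martingale property of Example 2.38 can be checked numerically (an illustrative sketch; the times, sample size, and test functions are arbitrary choices): since the increment $W(t;\beta)-W(s;\beta)$ is independent of the past, $E^{\beta}\left(\left[W(t;\beta)-W(s;\beta)\right]f(W(s;\beta))\right)=0$ for any integrable function $f$ of the path up to $s$.

```python
import numpy as np

# Orthogonality form of the martingale property: the increment W(t) - W(s)
# is uncorrelated with any function of W(s).
rng = np.random.default_rng(0)
s, t, n = 1.0, 2.0, 500_000

Ws = rng.normal(0.0, np.sqrt(s), n)                # W(s)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), n)       # W(t) = W(s) + independent increment
increment = Wt - Ws

for f in (lambda w: w, np.sin, lambda w: w ** 2):
    print(round(float(np.mean(increment * f(Ws))), 4))   # all ≈ 0
```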

Proposition 2.39 : The stochastic process $X(t;\beta)$, $t\in I$, is a square-integrable martingale iff it is an orthogonal-increment process. Thus every square-integrable martingale can be described by a characteristic and satisfies the Levy property.

Proof : If $X(t;\beta)$, $t\in I$, is a square-integrable martingale then we clearly have, for every $t,s\in I$,

$$E^{\beta}\left(X(t;\beta)-X(s;\beta)\right)=0$$

Moreover, for any disjoint intervals $[t_1,t_2]\subset I$ and $[s_1,s_2]\subset I$, we have

$$E^{\beta}\left(\left[X(t_2;\beta)-X(t_1;\beta)\right]\cdot\left[X(s_2;\beta)-X(s_1;\beta)\right]\right)=0.$$

Hence $X(t;\beta)$, $t\in I$, is an orthogonal-increment process.

Conversely, if $X(t;\beta)$, $t\in I$, is an orthogonal-increment process, then the independence of increments implies that, for $t,s\in I$ with $s<t$, the $\sigma$-algebra $\mathcal{U}_s(\Omega)$ and the increment $X(t;\beta)-X(s;\beta)$ are independent (see Theorem 13.10 of YEH, J., Martingales and Stochastic Analysis). Thus

$$E^{\beta}\left(X(t;\beta)-X(s;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=E^{\beta}\left(X(t;\beta)-X(s;\beta)\right)=0$$

Then

$$E^{\beta}\left(X(t;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=E^{\beta}\left(X(t;\beta)-X(s;\beta)\,|\,\mathcal{U}_s(\Omega)\right)+E^{\beta}\left(X(s;\beta)\,|\,\mathcal{U}_s(\Omega)\right)=X(s;\beta)$$

Therefore $X(t;\beta)$ is a martingale. For more details on the proof of the above theorem we refer to PROHOROV, YU.V. & ROZANOV, YU.A., (Probability Theory, p. 137) and YEH, J., (Martingales and Stochastic Analysis, p. 86).

Proposition 2.40 : Let $X(t;\beta)$, $t\in I$, be a martingale s.p., $\mathcal{U}_t(\Omega)$-measurable. Some basic properties it possesses are the following:

a) Sample functions of separable martingales have no discontinuities of the second kind; so, at worst, they have jumps.

b) $E^{\beta}\left(X(t;\beta)\right)$, $t\in I$, is constant. So, for every $t_1,t_2\in I$ we have $E^{\beta}\left(X(t_2;\beta)-X(t_1;\beta)\right)=0$.

c) If $Y(t;\beta)$ is another martingale with respect to the same family $\mathcal{U}_t(\Omega)$, then $a_1 X(t;\beta)+a_2 Y(t;\beta)$ (where $a_1,a_2\in\mathbb{R}$) is also a martingale, $\mathcal{U}_t(\Omega)$-measurable.

d) Moreover, if $\Psi(\beta)$ is a $\mathcal{U}_t(\Omega)$-measurable random variable, then for every $t_1,t_2\in I$ greater than $t$ we have

$$E^{\beta}\left(\Psi(\beta)\cdot\left[X(t_2;\beta)-X(t_1;\beta)\right]\right)=0$$

Proof : The first assertion follows from the Chentsov Theorem, which will be formulated in Theorem 2.9 of Chapter 4. The next two assertions follow directly from the definition of martingales. The proof of assertion d) can be found in PROHOROV, YU.V. & ROZANOV, YU.A., (Probability Theory, p. 137).


2.2.2. Classification Based Upon Regularity

In terms of statistical regularities, stochastic processes can be grouped into two classes: stationary stochastic processes and nonstationary stochastic processes. The $n$th probability distribution function $F_{t_1,t_2,\dots,t_n}(x_1,x_2,\dots,x_n)$ of a s.p. $X(t;\beta)$, $t\in I$, is in general a function of $x_1,x_2,\dots,x_n$ and $t_1,t_2,\dots,t_n$. A nonstationary s.p. is one whose distribution functions depend upon the values of the time parameters explicitly. The statistical behaviour of a nonstationary s.p. thus depends upon the absolute origin of time. Clearly, most stochastic processes modelling physical phenomena are nonstationary. In particular, all physical processes having a transient period or certain damping characteristics are of the nonstationary type. On the other hand, many stochastic processes occurring in nature have the property that their statistical behaviour does not vary significantly with respect to their parameters. The surface of the sea in spatial and time coordinates, or noise in time in electric circuits under steady-state operation, have the appearance that their fluctuations as functions of time or spatial position stay roughly the same. Because of the powerful mathematical tools which exist for treating stationary s.p.'s, this class of s.p.'s is of great practical importance, and we consider it in some detail in the following sections. More exhaustive accounts of stationary stochastic processes can be found in the work of PUGACHEV, V.S. & SINITSYN, (Stochastic Differential Systems).

2.2.2.a. Stationary and Wide-Sense Stationary Processes

Definition 2.41 : A s.p. $X(t;\beta)$, $t\in I$, is said to be stationary, or strictly stationary, if its collection of probability distributions stays invariant under an arbitrary translation of the time parameter; that is, for each $n$ and for an arbitrary $\tau$,

$$F_{t_1,t_2,\dots,t_n}(x_1,x_2,\dots,x_n)=F_{t_1+\tau,t_2+\tau,\dots,t_n+\tau}(x_1,x_2,\dots,x_n),\quad t_j+\tau\in I,\; j=1,2,\dots,n \qquad (45)$$

Setting $\tau=-t_1$ in the above, we see that the probability distributions depend upon the time parameters only through their differences. In other words, the statistical properties of a stationary s.p. are independent of the absolute time origin.

We can easily see that a stationary s.p. $X(t;\beta)$, $t\in I$, possesses the following important properties for its moments, if they exist. Since the first distribution function $F_t(x)$ is not a function of $t$, we have

$$E^{\beta}\left(X(t;\beta)\right)=m_X=\text{const.}$$

Since

$$F_{t_1,t_2}(x_1,x_2)=F_{0,\,t_2-t_1}(x_1,x_2)$$

we have

$$E^{\beta}\left(X(t;\beta)\,X(t+\tau;\beta)\right)=C_{XX}(\tau)$$

In view of the general relation (18), it follows that

$$C_{XX}(\tau)=C_{XX}(-\tau) \qquad (46)$$

The correlation function of a stationary stochastic process is thus an even function of τ. Properties of higher moments can also be easily derived.

Given a physical problem, it is often quite difficult to ascertain whether the stationary property holds since Eq. (45) must hold for all n. For practical purposes, we are often interested in a wider class of stationary stochastic processes.

Definition 2.42 : A s.p. $X(t;\beta)$, $t\in I$, is said to be wide-sense stationary if

$$E^{\beta}\left(X(t;\beta)\right)=m_X=\text{const.}<\infty \qquad (47)$$

and

$$E^{\beta}\left(\left|X(t;\beta)\right|^{2}\right)<\infty,\qquad E^{\beta}\left(X(t_1;\beta)\cdot X(t_2;\beta)\right)=C_{XX}(t_2-t_1) \qquad (48)$$

A wide-sense stationary s.p. is sometimes called second-order stationary, weakly stationary, or covariance stationary. It is clear that a strictly stationary process whose second moment is finite is also wide-sense stationary, but the converse is not true in general. An important exception is the Gaussian s.p. As we shall see, a Gaussian stochastic process is completely specified by its mean and covariance functions. Hence, a wide-sense stationary Gaussian stochastic process is also strictly stationary.

Example 2.43 : Consider a s.p. $X(t;\beta)$, $t\in I$, defined by

$$X(t;\beta)=a\cos(\omega t+\Phi) \qquad (49)$$

where $a$ and $\omega$ are real constants and $\Phi$ is a r.v. uniformly distributed in the interval $[0,2\pi]$. It is a simplified version of the Pierson/Longuet-Higgins model discussed in Example 1.2 of Chapter 1. We easily obtain

$$E^{\beta}\left(X(t;\beta)\right)=\frac{a}{2\pi}\int_{0}^{2\pi}\cos(\omega t+\phi)\,d\phi=0$$

$$C_{XX}(t_1,t_2)=a^{2}\,E^{\beta}\left(\cos(\omega t_1+\Phi)\cos(\omega t_2+\Phi)\right)=\frac{a^{2}}{2}\cos\omega(t_1-t_2)$$

The s.p. defined by equation (49) is thus wide-sense stationary.

2.2.2.b. Spectral Densities and Correlation Functions

We restrict ourselves here to the study of wide-sense stationary stochastic processes and develop further useful concepts in the analysis of this class of processes. It has been noted that the covariance or correlation function of a stochastic process plays a central role. Let us first develop a number of additional properties associated with the correlation function

$$C_{XX}(\tau)=E^{\beta}\left(X(t+\tau;\beta)\cdot X(t;\beta)\right)$$

of a wide-sense stationary s.p. $X(t;\beta)$, $t\in I$.
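Example 2.43 can be checked by Monte Carlo (an illustrative sketch; $a$, $\omega$, the time points, and the sample size are arbitrary choices): the ensemble mean is zero and the correlation depends only on $t_1-t_2$.

```python
import numpy as np

# X(t) = a*cos(omega*t + Phi), Phi ~ U[0, 2*pi]: zero ensemble mean and
# correlation (a**2/2)*cos(omega*(t1 - t2)), hence wide-sense stationary.
rng = np.random.default_rng(0)
a, omega, n = 2.0, 3.0, 1_000_000
phi = rng.uniform(0.0, 2.0 * np.pi, n)             # one Phi per realization

def X(t):
    return a * np.cos(omega * t + phi)

t1, t2 = 0.7, 1.9
print(round(float(X(t1).mean()), 4))               # ≈ 0
print(round(float(np.mean(X(t1) * X(t2))), 4),
      round(0.5 * a ** 2 * np.cos(omega * (t1 - t2)), 4))
```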


1) $C_{XX}(\tau)$ is an even function of $\tau$, that is,

$$C_{XX}(\tau)=C_{XX}(-\tau)$$

This has been derived in Section 2.2.2.a, Eq. (46).

2) $C_{XX}(\tau)$ always exists and is finite. Furthermore,

$$\left|C_{XX}(\tau)\right|\leq C_{XX}(0)=E^{\beta}\left(X(t;\beta)^{2}\right) \qquad (50)$$

This result is a direct consequence of Eq. (3.19).

3) If $C_{XX}(\tau)$ is continuous at the origin, then it is uniformly continuous in $\tau$. In order to show this, let us consider the difference

$$C_{XX}(\tau+\varepsilon)-C_{XX}(\tau)=E^{\beta}\left(X(t+\tau;\beta)\left[X(t-\varepsilon;\beta)-X(t;\beta)\right]\right)$$

Using the Schwarz inequality we obtain

$$\left|C_{XX}(\tau+\varepsilon)-C_{XX}(\tau)\right|^{2}\leq 2\,C_{XX}(0)\left[C_{XX}(0)-C_{XX}(\varepsilon)\right]$$

Since $C_{XX}(\tau)$ is continuous at the origin, $C_{XX}(0)-C_{XX}(\varepsilon)\to 0$ as $\varepsilon\to 0$. Hence the difference $C_{XX}(\tau+\varepsilon)-C_{XX}(\tau)$ tends to zero with $\varepsilon$, uniformly in $\tau$.

4) As indicated by Eq. (3.20), $C_{XX}(\tau)$ is nonnegative definite in $\tau$.

5) Let $C_{XX}(\tau)$ be continuous at the origin. Then it can be represented in the form

$$C_{XX}(\tau)=\frac{1}{2}\int_{-\infty}^{\infty}e^{i\omega\tau}\,d\Phi_{XX}(\omega) \qquad (51)$$

If $\Phi_{XX}(\omega)$ is absolutely continuous, we have

$$S_{XX}(\omega)=d\Phi_{XX}(\omega)/d\omega$$

and

$$C_{XX}(\tau)=\frac{1}{2}\int_{-\infty}^{\infty}e^{i\omega\tau}S_{XX}(\omega)\,d\omega \qquad (52)$$

where $S_{XX}(\omega)$ is real and nonnegative.

By virtue of Properties 2), 3), and 4) possessed by $C_{XX}(\tau)$, this result is a direct consequence of the following important theorem due to Bochner.

Theorem 2.44 : A function $C(\tau)$ is continuous, real, and nonnegative definite if and only if it has the representation (51), where $\Phi_{XX}(\omega)$ is real, non-decreasing, and bounded.

Proof : For the proof we refer to DOOB, J.L., (Stochastic Processes).

We see from Eq. (52) that the functions $C_{XX}(\tau)$ and $S_{XX}(\omega)$ form a Fourier transform pair. The relations

$$C_{XX}(\tau)=\frac{1}{2}\int_{-\infty}^{\infty}e^{i\omega\tau}S_{XX}(\omega)\,d\omega,\qquad S_{XX}(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}e^{-i\omega\tau}C_{XX}(\tau)\,d\tau \qquad (53)$$

are called the Wiener-Khintchine formulas.

Equations (53) lead to the definition of the power spectral density function of a wide-sense stationary process whose correlation function is continuous.


Definition 2.45 : The power spectral density function of a wide-sense stationary s.p. $X(t;\beta)$, $t\in I$, is defined by the second equation of (53), that is,

$$S_{XX}(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}e^{-i\omega\tau}C_{XX}(\tau)\,d\tau$$

We have already noted that the power spectral density function $S_{XX}(\omega)$ is real and nonnegative. Furthermore, since $C_{XX}(\tau)$ is an even function of $\tau$, the Wiener-Khintchine formulas may be written as

$$S_{XX}(\omega)=\frac{2}{\pi}\int_{0}^{\infty}C_{XX}(\tau)\cos(\omega\tau)\,d\tau,\qquad C_{XX}(\tau)=\int_{0}^{\infty}S_{XX}(\omega)\cos(\omega\tau)\,d\omega \qquad (54)$$

Hence $S_{XX}(\omega)$ is also an even function of $\omega$. When $\tau=0$, the second of Eqs. (54) reduces to

$$\int_{0}^{\infty}S_{XX}(\omega)\,d\omega=C_{XX}(0)$$

For a detailed discussion of stochastic processes we refer to ATHANASSOULIS, G.A., (Stochastic Modeling and Forecasting of Ship Systems) and PUGACHEV, V.S. & SINITSYN, (Stochastic Differential Systems).
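The cosine-transform pair (54) can be illustrated numerically in the convention used here (an illustrative sketch; the exponential correlation model, the value of $\alpha$, and the truncation of the infinite integrals are arbitrary choices). For $C(\tau)=e^{-\alpha|\tau|}$, the spectrum in this convention is $S(\omega)=\frac{2}{\pi}\frac{\alpha}{\alpha^{2}+\omega^{2}}$.

```python
import numpy as np

# Wiener-Khintchine pair (54), checked for C(tau) = exp(-alpha*|tau|) with
# spectrum S(omega) = (2/pi)*alpha/(alpha**2 + omega**2) in this convention.
alpha = 1.3

def trapezoid(y, x):
    # simple trapezoidal rule, version-independent
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

tau = np.linspace(0.0, 60.0, 200_001)    # truncation of the infinite integral
C = np.exp(-alpha * tau)

def S_numeric(omega):
    return (2.0 / np.pi) * trapezoid(C * np.cos(omega * tau), tau)

for omega in (0.0, 0.5, 2.0):
    exact = (2.0 / np.pi) * alpha / (alpha ** 2 + omega ** 2)
    print(omega, round(S_numeric(omega), 5), round(exact, 5))

# sanity check: the integral of S over [0, inf) recovers C(0) = 1
omega_grid = np.linspace(0.0, 400.0, 400_001)
S_grid = (2.0 / np.pi) * alpha / (alpha ** 2 + omega_grid ** 2)
total = trapezoid(S_grid, omega_grid)
print(round(total, 3))                   # ≈ 1 up to truncation error
```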


2.2.3. Classification Based Upon Ergodic Properties

Ergodicity deals with the specific question of relating statistical or ensemble averages of a stationary stochastic process to time averages of its individual sample functions. The interchangeability of ensemble and time averages has considerable appeal in practice because, when statistical averages of a stochastic process need to be computed, what is generally available is not a representative collection of sample functions but rather certain pieces of sample functions, or a long single observation of one sample function. One naturally asks whether certain statistical averages of a stochastic process can be determined from appropriate time averages associated with a single sample function.

Let $x(t)$ be a sample function of a stationary s.p. $X(t;\beta)$, $t\in I=[0,T]$. The time average of a given function of $x(t)$, $g(x(t))$, denoted by $\left\langle g(x(t))\right\rangle_T$, is defined by

$$\left\langle g(x(t))\right\rangle_T=\frac{1}{T}\int_{-T/2}^{T/2}g(x(t))\,dt \qquad (55)$$

and $\left\langle g(x(t))\right\rangle_\infty$ denotes its limit as $T\to\infty$, if the limit exists. In order to define the ergodic property of a stationary stochastic process, the sample function $x(t)$ is allowed to range over all possible sample functions and, for a fixed $T$, over all possible time translations; this defines a random variable.

Definition 2.46 : A s.p. $X(t;\beta)$, $t\in I$, is said to be ergodic relative to $G$ if, for every $g\in G$,

$$\left\langle g(X(t;\beta))\right\rangle_\infty=E^{\beta}\left(g(X(t;\beta))\right) \qquad (56)$$

It is, of course, entirely possible that, given a s.p. $X(t;\beta)$, the ergodicity condition stated above is satisfied for certain functions of $X(t;\beta)$ and not for others. In physical applications we are primarily interested in the following:

1. Ergodicity in the mean: $g\left[X(t;\beta)\right]=X(t;\beta)$. The ergodicity condition becomes

$$\left\langle X(t;\beta)\right\rangle_\infty=E^{\beta}\left(X(t;\beta)\right) \qquad (57)$$

2. Ergodicity in the correlation function: $g\left[X(t;\beta)\right]=X(t;\beta)\cdot X(t+\tau;\beta)$. The ergodicity condition is

$$\left\langle X(t;\beta)\cdot X(t+\tau;\beta)\right\rangle_\infty=E^{\beta}\left(X(t;\beta)\cdot X(t+\tau;\beta)\right)=R_{XX}(t,t+\tau) \qquad (58)$$

For these ergodicity criteria, mathematical conditions can be derived for the purpose of verifying the ergodicity conditions. These conditions are a direct consequence of the following theorem.

Theorem 2.47 [Ergodicity in the Mean] : Let $X(t;\beta)$, $t\in I$, be an integrable, second-order stochastic process and $m_X(T;\beta)=\left\langle X(t;\beta)\right\rangle_T$. We also assume that the correlation function $C_{XX}(t_1,t_2)$ verifies the condition

$$\lim_{T\to\infty}\frac{1}{T^{2}}\int_{-T/2}^{T/2}\int_{-T/2}^{T/2}C_{XX}(t_1,t_2)\,dt_1\,dt_2=0 \qquad (59)$$

Then the ergodicity condition in the mean holds, i.e.

$$\left\langle X(t;\beta)\right\rangle_\infty=E^{\beta}\left(X(t;\beta)\right)$$

Proof : For the proof we refer to ATHANASSOULIS, G.A., (Stochastic Modeling and Forecasting of Ship Systems. Lecture Notes NTUA).

Theorem 2.48 [Statistical estimation of the mean value of a stationary s.p.] : Let $X(t;\beta)$, $t\in I$, be a second-order, m.s.-continuous and stationary stochastic process. Then, for every $\varepsilon>0$ and for every $T>0$, the following inequality holds:

$$\mathcal{P}\left(\left|\left\langle X(t;\beta)\right\rangle_T-m_X\right|>\varepsilon\right)\leq\frac{2}{\varepsilon^{2}T}\int_{0}^{T}\left(1-\frac{\tau}{T}\right)C_{XX}(\tau)\,d\tau \qquad (60)$$
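The variance term on the right of (60) can be checked numerically. The sketch below (illustrative; $\sigma$, $T$, $\varepsilon$, and the sample size are arbitrary choices) uses the process of Example 2.49, $X(t;\beta)=A\cos t+B\sin t$ with $C_{XX}(\tau)=\sigma^{2}\cos\tau$, for which the time average over $[0,T]$ has an exact closed form; (60) is then the Chebyshev bound for this estimator.

```python
import numpy as np

# Var(<X>_T) should equal (2/T) * integral_0^T (1 - tau/T) C_XX(tau) dtau,
# and the empirical exceedance probability should stay below the bound (60).
rng = np.random.default_rng(0)
sigma, T, n_paths = 1.0, 7.0, 200_000

A = rng.normal(0.0, sigma, n_paths)
B = rng.normal(0.0, sigma, n_paths)
# exact time average of A*cos(t) + B*sin(t) over [0, T]
m_T = A * np.sin(T) / T + B * (1.0 - np.cos(T)) / T

tau = np.linspace(0.0, T, 100_001)
integrand = (1.0 - tau / T) * sigma ** 2 * np.cos(tau)
var_formula = (2.0 / T) * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tau)))

print(round(float(m_T.var()), 5), round(var_formula, 5))   # both ≈ 2*(1 - cos T)/T**2

eps = 0.2
print(round(float(np.mean(np.abs(m_T) > eps)), 4),
      round(var_formula / eps ** 2, 4))                    # empirical prob ≤ bound
```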

Concerning the ergodicity criterion for the correlation function of a stochastic process, the following theorem holds.

Theorem 2.48 [Ergodicity in the Correlation Function] : Let $X(t;\beta)$, $t\in I$, be an integrable, fourth-order stochastic process and $\hat{R}_{XX}(\tau,T;\beta)=\left\langle X(t+\tau;\beta)\cdot X(t;\beta)\right\rangle_T$. Let also the correlation function $C_{YY}(t_1,t_2;\tau)$ of the stochastic process $Y(t,\tau;\beta)=X(t+\tau;\beta)\cdot X(t;\beta)$ verify the condition

$$\lim_{T\to\infty}\frac{1}{T^{2}}\int_{-T/2}^{T/2}\int_{-T/2}^{T/2}C_{YY}(t_1,t_2;\tau)\,dt_1\,dt_2=0 \qquad (61)$$

Then the ergodicity condition in the correlation function holds, i.e.

$$\left\langle X(t;\beta)\cdot X(t+\tau;\beta)\right\rangle_\infty=E^{\beta}\left(X(t;\beta)\cdot X(t+\tau;\beta)\right)$$

Proof : Direct application of Theorem 2.47 to the s.p. $Y(t,\tau;\beta)=X(t+\tau;\beta)\cdot X(t;\beta)$.

Example 2.49 : Consider the stochastic process

$$X(t;\beta)=A(\beta)\cos t+B(\beta)\sin t,\quad t\in I$$

where $A(\beta)$ and $B(\beta)$ are independent Gaussian random variables with zero means and variances $\sigma^{2}$. The correlation function of $X(t;\beta)$ is

$$C_{XX}(t_1,t_2)=\sigma^{2}\cos(t_1-t_2)$$

It is easily shown that condition (59) is satisfied, and thus the stochastic process is ergodic in the mean. For a detailed discussion and further conditions for ergodicity criteria we refer to ASH, R.B., (Probability and Measure Theory) and ATHANASSOULIS, G.A., (Stochastic Modeling and Forecasting of Ship Systems. Lecture Notes NTUA).
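Ergodicity in the mean for Example 2.49 can be sketched numerically (illustrative; the seed and the horizons are arbitrary choices): for a single fixed realization $x(t)=A\cos t+B\sin t$, the time average over $[0,T]$ equals $\left(A\sin T+B(1-\cos T)\right)/T=O(1/T)$, which tends to the ensemble mean $0$ as $T$ grows.

```python
import numpy as np

# One fixed realization of X(t) = A*cos t + B*sin t; its time average over
# [0, T] shrinks like O(1/T) toward the ensemble mean 0.
rng = np.random.default_rng(3)
sigma = 1.0
A, B = rng.normal(0.0, sigma, 2)        # one fixed realization

for T in (10.0, 100.0, 1000.0):
    t = np.linspace(0.0, T, 200_001)
    x = A * np.cos(t) + B * np.sin(t)
    time_avg = float(np.sum(0.5 * (x[1:] + x[:-1]) * np.diff(t))) / T
    print(T, round(time_avg, 5))
```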


2.2.4. Gaussian Stochastic Process

A discussion of stochastic processes is incomplete without giving consideration to Gaussian stochastic processes. Classification of stochastic processes based upon regularity and memory makes no specific reference to the detailed probability distributions associated with a stochastic process. Gaussian stochastic processes are characterized by a distinct probabilistic law; they are extremely useful and are the processes most often encountered in practice.

Definition 2.50 : A stochastic process $X(t;\beta)$, $t\in I$, is called a Gaussian stochastic process if for every finite set $t_1,t_2,\dots,t_n\in I$ the random vector defined by the projection

$$\Pi_{t_1,t_2,\dots,t_n}\left[x(\cdot)\right]=\left(x(t_1),x(t_2),\dots,x(t_n)\right),\quad x(\cdot)\in D(I)$$

is a Gaussian random vector.

Thus the probability density function of the random vector $\left(x(t_1),\dots,x(t_n)\right)$, i.e. the $n$th-order joint probability density of the s.p. $X(t;\beta)$, is given by

$$f_{t_1,t_2,\dots,t_n}(x_1,x_2,\dots,x_n)=\left(2\pi\right)^{-n/2}\left|C\right|^{-1/2}\exp\left[-\frac{1}{2}\left(\mathbf{x}-\mathbf{m}\right)^{T}C^{-1}\left(\mathbf{x}-\mathbf{m}\right)\right] \qquad (62)$$

where

$$\mathbf{m}=\left(m_{t_1},m_{t_2},\dots,m_{t_n}\right)^{T}=\left(E^{\beta}\left(X(t_1;\beta)\right),E^{\beta}\left(X(t_2;\beta)\right),\dots,E^{\beta}\left(X(t_n;\beta)\right)\right)^{T} \qquad (63)$$

and $C=\left[\mu_{ij}\right]$ is the $n\times n$ covariance matrix of $X(t;\beta)$ with

$$\mu_{ij}=E^{\beta}\left(\left[X(t_i;\beta)-m_{t_i}\right]\left[X(t_j;\beta)-m_{t_j}\right]\right) \qquad (64)$$

Remark 2.51 : From the above equations it is clear that the probability structure of a Gaussian stochastic process is described by the moments of first and second order.

This is a typical characteristic of Gaussian stochastic processes that makes them particularly useful for applications. A direct consequence of the above remark is that the higher-order moments of every Gaussian stochastic process are expressed through the moments of first and second order. The theorem presented below, due to Isserlis (1918), gives the necessary formula for the calculation of higher-order moments.

Theorem 2.52 : Let $X(t;\beta)$, $t\in I$, be a Gaussian stochastic process. Then

I. The central moments of odd order, $j=2k+1$, are identically zero.

II. The central moments of even order, $j=2k$, are given by the formula

$$C_{X^{j_1}X^{j_2}\cdots X^{j_n}}(t_1,t_2,\dots,t_n)\equiv E^{\beta}\left(\left[X(t_1;\beta)-m_{t_1}\right]^{j_1}\left[X(t_2;\beta)-m_{t_2}\right]^{j_2}\cdots\left[X(t_n;\beta)-m_{t_n}\right]^{j_n}\right)=$$
$$=\sum R_{XX}(t_{i_1},t_{i_2})\,R_{XX}(t_{i_3},t_{i_4})\cdots R_{XX}(t_{i_{2k-1}},t_{i_{2k}})$$

where $j_1+j_2+\dots+j_n=2k$ and the summation is over all possible partitions of the $2k$ indices

$$\underbrace{1,1,\dots,1}_{j_1};\;\underbrace{2,2,\dots,2}_{j_2};\;\dots;\;\underbrace{n,n,\dots,n}_{j_n}$$

into pairs $\left(i_1,i_2\right),\left(i_3,i_4\right),\dots,\left(i_{2k-1},i_{2k}\right)$. The number of terms is

$$\frac{(2k)!}{2^{k}\,k!}=1\cdot 3\cdot 5\cdots(2k-1).$$
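A Monte Carlo check of Isserlis' theorem for a fourth-order moment (illustrative; the covariance matrix below is an arbitrary positive-definite example): for a zero-mean Gaussian vector $(X_1,X_2,X_3,X_4)$ the theorem gives $E\left(X_1X_2X_3X_4\right)=R_{12}R_{34}+R_{13}R_{24}+R_{14}R_{23}$, i.e. the $4!/(2^{2}\,2!)=3$ pairings.

```python
import numpy as np

# Compare the empirical fourth moment of a correlated Gaussian vector with
# the three-pairing Isserlis formula.
rng = np.random.default_rng(0)

R = np.array([[2.0, 0.8, 0.3, 0.1],
              [0.8, 1.5, 0.5, 0.2],
              [0.3, 0.5, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.2]])
n = 1_000_000
X = rng.multivariate_normal(np.zeros(4), R, n)

empirical = float(np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3]))
isserlis = R[0, 1] * R[2, 3] + R[0, 2] * R[1, 3] + R[0, 3] * R[1, 2]
print(round(empirical, 3), round(isserlis, 3))   # agree up to Monte Carlo error
```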

For Gaussian stochastic processes the following two important theorems hold.

Theorem 2.53 : Let the functions $\mu(\cdot):I\to\mathbb{C}$ and $R(\cdot,\cdot):I\times I\to\mathbb{C}$ be given, the latter satisfying the conditions

1) $R(s,t)=\overline{R(t,s)}$ for any $t,s\in I$,

2) for any $t_1,t_2,\dots,t_n\in I$ the matrix $\left[R(t_m,t_n)\right]_{n\times n}$ is non-negative definite.

Then there always exists a Gaussian stochastic process $X(t;\beta)$, $t\in I$, with mean value function $m(t)=\mu(t)$, $t\in I$, and covariance function $R(\cdot,\cdot)$, i.e. it satisfies the relations

$$E^{\beta}\left(X(t;\beta)\right)=\mu(t),\quad t\in I \qquad (65)$$

$$E^{\beta}\left(X(t;\beta)\,\overline{X(s;\beta)}\right)=R(t,s),\quad t,s\in I \qquad (66)$$

If the functions $\mu(\cdot)$ and $R(\cdot,\cdot)$ are real, then there is always a real stochastic process $X(t;\beta)$, $t\in I$, that satisfies the above relations. In this case equation (66) becomes

$$E^{\beta}\left(X(t;\beta)\,X(s;\beta)\right)=R(t,s),\quad t,s\in I \qquad (67)$$

Theorem 2.54 : Let $X(t;\beta)$, $t\in I$, be a stochastic process with finite variance,

$$E^{\beta}\left(\left|X(t;\beta)\right|^{2}\right)<\infty,\quad t\in I \qquad (68)$$

Then there always exists a Gaussian stochastic process $Y(t;\beta)$, $t\in I$, such that

$$m_Y(t)=m_X(t),\quad t\in I \qquad (69)$$

$$R_{YY}(t,s)=R_{XX}(t,s),\quad t,s\in I \qquad (70)$$

If the stochastic process $X(t;\beta)$, $t\in I$, is real, then there is always a real Gaussian stochastic process $Y(t;\beta)$, $t\in I$, that satisfies the above relations.

The above theorems are formulated and proved in the monograph of DOOB, J.L., (Stochastic Processes).


2.3. References

ASH, R.B., 1972, Measure, Integration and Functional Analysis. Academic Press.
ASH, R.B., 2000, Probability and Measure Theory. Academic Press.
ATHANASSOULIS, G.A., 2002, Stochastic Modeling and Forecasting of Ship Systems. Lecture Notes NTUA.
BROWN, R.G. & HWANG, P.Y.C., 1997, Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons.
CRAMER, H. & LEADBETTER, M.R., 1953, Stationary and Related Stochastic Processes. Wiley, New York.
DOOB, J.L., 1953, Stochastic Processes. John Wiley & Sons.
FRISTEDT, B. & GRAY, L., 1997, A Modern Approach to Probability Theory. Birkhäuser.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1996, Theory of Random Processes. Dover Publications.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1974, The Theory of Stochastic Processes I. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes II. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes III. Springer.
KLIMOV, G., 1986, Probability Theory and Mathematical Statistics. MIR Publishers.
KOVALENKO, I.N., KUZNETSOV, N.YU. & SHURENKOV, V.M., 1996, Models of Random Processes. CRC Press.
KOLMOGOROV, A.N., 1956, Foundations of the Theory of Probability.
LOÈVE, M., 1977, Probability Theory II. Springer.
PERCIVAL, D.B. & WALDEN, A.T., 1993, Spectral Analysis for Physical Applications. Cambridge University Press.
PROHOROV, YU.V. & ROZANOV, YU.A., 1969, Probability Theory. Springer.
PUGACHEV, V.S. & SINITSYN, 1987, Stochastic Differential Systems. John Wiley & Sons.
ROGERS, L.C.G. & WILLIAMS, D., 1993, Diffusions, Markov Processes and Martingales. Cambridge Mathematical Library.
SINAI, YA.G., 1976, Introduction to Ergodic Theory. Princeton University Press.
SNYDER, D.L., 1975, Random Point Processes. Wiley, New York.
SOBCZYK, K., 1991, Stochastic Differential Equations. Kluwer Academic Publishers.
SOIZE, C., 1993, Mathematical Methods in Signal Analysis (in French). Masson, Paris.
SOIZE, C., 1994, The Fokker-Planck Equation and its Explicit Steady State Solutions. World Scientific.
SOONG, T.T., 1973, Random Differential Equations. Academic Press.
SOONG, T.T. & GRIGORIU, M., 1993, Random Vibration of Mechanical and Structural Systems. Prentice-Hall.
SPILIOTIS, I., 2004, Stochastic Differential Equations. Simeon Press.
YEH, J., 1995, Martingales and Stochastic Analysis. World Scientific.


Chapter 3 Probability Measures in Hilbert Spaces

As we have discussed in Section 1 of Chapter 2, the definition of a stochastic process as a measurable function mapping the space of elementary events $\Omega$ into a metric space $\mathcal{X}$ leads us to the concept of a probability measure over an infinite-dimensional space. In this chapter we consider the case of probability measures defined on a Hilbert space. Every result presented below can be generalized to the case of a Banach space by replacing the inner product with the duality pairing and making suitable changes and restrictions.

In the first section we give the definition of a stochastic process using probability measures. Moreover, we give some special examples of probability measures to obtain a clearer picture of the issue. Based on these examples we introduce the finite-dimensional relative measures, i.e. induced probability measures on finite-dimensional subspaces of the Hilbert space. The next step is to give the necessary and sufficient conditions for a family of probability measures, defined on every finite-dimensional subspace of a Hilbert space, to define a probability measure. Section 2 deals with the special case of cylinder functions and expresses their integral over an infinite-dimensional Hilbert space by means of induced probability measures on finite-dimensional subspaces. In Section 3 we define the notion of the characteristic functional for probability measures over Hilbert spaces. Again, using finite-dimensional relatives, we make exact calculations of the characteristic functional. Finally, using nuclear operators, we prove the Minlos-Sazonov Theorem, which is the generalization of Bochner's Theorem to the infinite-dimensional case and gives the necessary and sufficient conditions for a functional to be the characteristic functional of some probability measure. In Section 4 we generalize the concepts of mean value and covariance of a random variable to probability measures defined over infinite-dimensional spaces.

As before, we can calculate these quantities using induced probability measures on finite-dimensional subspaces. Sections 5 and 6 present exact formulas for the calculation of classical moments and characteristic functions. The interesting part of these sections is that, using generalized functions, we can calculate the above quantities for the (time) derivatives of the stochastic process. Section 7 deals with the special case of Gaussian measures and their properties. For this particular case of measure we prove an exact formula for the characteristic functional. We also calculate the mean value and covariance operators and study their properties. In Section 8 a special case of characteristic functional is studied, namely the Tatarskii characteristic functional. These functionals have the important property of representing a wide class of stochastic processes. Finally, in Section 9 we study the reduction of the Tatarskii characteristic functional to some very important characteristic functionals, such as the characteristic functionals due to Abel, Cauchy, and Feynman.

Page 64: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

CHAPTER 3: PROBABILITY MEASURES AND THEIR CHARACTERISTIC FUNCTIONALS IN HILBERT SPACES

3.1. Probability Measures in Polish Spaces and their finite-dimensional relatives

3.1.1. Stochastic Processes as Random variables defined on Polish Spaces

In what follows we will consider the case of probability measures defined on a Polish space $\mathcal{X}$, i.e. a complete metric space that has a countable dense subset. We also denote by $\mathcal{X}^*$ the dual space of linear continuous functionals on $\mathcal{X}$. We recall the definition of a stochastic process with values in $\mathcal{X}$.

Definition 1.1 : Let $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$ be a probability space, $X : \Omega \to \mathcal{X}$ a point function from the sample space $\Omega$ to the Polish space $\mathcal{X}$, and $\mathcal{U}(\mathcal{X})$ a Borel field over $\mathcal{X}$. The mapping $X$ is said to be a stochastic process iff it is $(\mathcal{U}(\Omega), \mathcal{U}(\mathcal{X}))$-measurable.

Any stochastic process induces a probability measure on the space in which it takes its values.

Definition 1.2 : Let $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$ be a probability space, $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$ a measurable space, and $X : \Omega \to \mathcal{X}$ a stochastic process. On the Borel field $\mathcal{U}(\mathcal{X})$ of the measurable space $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$ the induced (by $X : \Omega \to \mathcal{X}$) probability measure is defined by
$$\mathcal{P}_{\mathcal{X}}(B) = \mathcal{P}\big(X^{-1}(B)\big), \quad \text{for every } B \in \mathcal{U}(\mathcal{X}) \tag{1}$$
The triplet $(\mathcal{X}, \mathcal{U}(\mathcal{X}), \mathcal{P}_{\mathcal{X}})$ is called the induced probability space.

We shall now present some simple, specific examples of probability measures on Polish spaces. These examples are given by means of specific constructions permitting appropriate extensions of a finite-dimensional probability measure to the whole Polish space.

Example 1.3 [The 1-D induced measure] : Let $\mathcal{P}$ be a probability measure on $\mathbb{R}$, and let $e$ be a unit vector in $\mathcal{X}$. Let, also, $\hat{\zeta} : [e] \to \mathbb{R}$ be the bijection between the span of $e$ (denoted by $[e]$) and the real axis, given by $\hat{\zeta}(a e) = a$. Then, for any $E \in \mathcal{U}(\mathcal{X})$ we define
$$\mathcal{P}_{[e]}(E) = \mathcal{P}\big(\hat{\zeta}(E \cap [e])\big) \tag{2}$$
$\mathcal{P}_{[e]}$ is a probability measure on $\mathcal{X}$ since

a) $\mathcal{P}_{[e]}(E) = \mathcal{P}\big(\hat{\zeta}(E \cap [e])\big) \ge 0$ for every $E \in \mathcal{U}(\mathcal{X})$,

b) $\mathcal{P}_{[e]}(\mathcal{X}) = \mathcal{P}\big(\hat{\zeta}(\mathcal{X} \cap [e])\big) = \mathcal{P}\big(\hat{\zeta}([e])\big) = \mathcal{P}(\mathbb{R}) = 1$,

c) for any countable collection of mutually disjoint sets $E_1, E_2, \ldots$ in $\mathcal{U}(\mathcal{X})$,
$$\mathcal{P}_{[e]}\Big(\bigcup_n E_n\Big) = \mathcal{P}\Big(\hat{\zeta}\Big(\Big(\bigcup_n E_n\Big) \cap [e]\Big)\Big) = \mathcal{P}\Big(\bigcup_n \hat{\zeta}\big(E_n \cap [e]\big)\Big) = \sum_n \mathcal{P}_{[e]}(E_n).$$
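As a concrete finite-dimensional illustration (not part of the thesis), the following Python sketch realizes the construction of Example 1.3 with $\mathcal{X} = \mathbb{R}^3$ and $\mathcal{P}$ the standard normal measure on $\mathbb{R}$; the induced measure of a set $E$ only "sees" $E \cap [e]$, so for the half-space $E = \{x : \langle e, x\rangle \le 1\}$ it must reproduce $\mathcal{P}((-\infty, 1])$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

e = np.array([1.0, 2.0, 2.0]) / 3.0      # unit vector e in X = R^3
a = rng.standard_normal(200_000)         # samples of P = N(0, 1) on R
points = a[:, None] * e                  # zeta^{-1}: a -> a e, samples of P_[e]

# E = {x in X : <e, x> <= 1}; E ∩ [e] corresponds via zeta to (-inf, 1]
estimate = (points @ e <= 1.0).mean()    # Monte Carlo estimate of P_[e](E)
exact = 0.5 * (1 + erf(1 / sqrt(2)))     # P((-inf, 1]) = Phi(1)
print(abs(estimate - exact) < 0.01)
```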


Example 1.4 [The N-D induced measure] : Let $\mathcal{P}_N$ be a probability measure on $\mathbb{R}^N$, and let $L_N$ be a linear subspace of $\mathcal{X}$ of dimension $N$. Let $e_1, e_2, \ldots, e_N$ be a basis of $L_N$, and let $\zeta_N : L_N = [e_1, e_2, \ldots, e_N] \to \mathbb{R}^N$ be the bijection $\zeta_N\big(a_1 e_1 + a_2 e_2 + \cdots + a_N e_N\big) = (a_1, a_2, \ldots, a_N)$. Then, for any $E \in \mathcal{U}(\mathcal{X})$ we define
$$\mathcal{P}_{[e_1, e_2, \ldots, e_N]}(E) = \mathcal{P}_N\big(\zeta_N(E \cap L_N)\big). \tag{3}$$
The set function $\mathcal{P}_{[e_1, e_2, \ldots, e_N]}$ is a legitimate probability measure on $\mathcal{X}$, obtained as an extension of the N-D probability measure $\mathcal{P}_N$ to the whole Polish space. In what follows we will examine the existence of probability measures which are not induced by a finite-dimensional measure but have their own infinite-dimensional structure. Before we proceed we recall some basic results from probability theory.

3.1.2. Cylindric Sets

To begin our discussion we first need to give some essential definitions concerning a special class of sets. Thus we have the following.

Definition 1.5 : A cylindric set on $\mathcal{X}$ is defined to be a set $Q_{\mathbf{l}}(B) \subset \mathcal{X}$ of the form
$$Q_{\mathbf{l}}(B) = \big\{x : \big(l_1(x), l_2(x), \ldots, l_n(x)\big) \in B\big\}, \quad B \in \mathcal{U}(\mathbb{R}^n),$$
for every $n = 1, 2, \ldots$ and $\mathbf{l} = (l_1, l_2, \ldots, l_n)$ with $l_i \in \mathcal{X}^*$, $i = 1, \ldots, n$.

The class of all cylindric sets for given $l_i \in \mathcal{X}^*$, $i = 1, \ldots, n$, is obviously a $\sigma$-algebra, which we will write as $\mathcal{U}_{\mathbf{l}}$. The union of all these $\sigma$-algebras is an algebra. Indeed, if $B_1$ and $B_2$ belong to $\mathcal{U}_{\mathbf{l}_1}$ and $\mathcal{U}_{\mathbf{l}_2}$, then choosing $\mathbf{l} = (\mathbf{l}_1, \mathbf{l}_2)$ we get

a) $B_i \in \mathcal{U}_{\mathbf{l}}$, $i = 1, 2$,

b) $B_1 \cup B_2 \in \mathcal{U}_{\mathbf{l}}$,

c) $B_1 \cap B_2 \in \mathcal{U}_{\mathbf{l}}$,

d) $B_1 - B_2 \in \mathcal{U}_{\mathbf{l}}$.

[Figure: geometric illustration of the 1-D induced measure of Example 1.3: sets $E_1$, $E_2$, the subspace $[e]$ and the bijection $\hat{\zeta} : [e] \to \mathbb{R}$.]

We denote this algebra of sets by $\mathcal{U}_0$. It is called the algebra of cylinder sets. Sets from $\mathcal{U}_0$ are also considered as elementary for the construction of the measure on $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$, as we shall see in the sequel.

3.1.3. Restriction of a probability measure over a Polish space to finite-dimensional subspaces

We shall now study the restriction of a probability measure over a Polish space (assuming it exists) to finite-dimensional subspaces. Let $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$ be a probability space, $X : \Omega \to \mathcal{X}$ a stochastic process with values in a separable, real Polish space $\mathcal{X}$, and $\mathcal{U}(\mathcal{X})$ a Borel field over $\mathcal{X}$.

Definition 1.6 : Let $(\mathcal{X}, \mathcal{U}(\mathcal{X}), \mathcal{P}_{\mathcal{X}})$ be the probability space induced by the s.p. $X : \Omega \to \mathcal{X}$. Let $\Pi_L$ be a projection of $\mathcal{X}$ onto the finite-dimensional subspace $L$. The probability measure defined on $(L, \mathcal{U}(L))$ by
$$\mathcal{P}^{\Pi_L}(B) = \mathcal{P}_{\mathcal{X}}\big(\Pi_L^{-1}[B]\big) = \mathcal{P}_{\mathcal{X}}\big(\big\{x \in \mathcal{X} : \Pi_L x \in B\big\}\big), \quad B \in \mathcal{U}(L) \tag{4}$$
is called the projection of the measure $\mathcal{P}_{\mathcal{X}}$ on the subspace $L$.

Remark 1.7 : The fact that $\mathcal{P}^{\Pi_L}(B)$ is a measure follows from the fact that for a sequence of non-overlapping sets $B_n \in \mathcal{U}(L)$ the sets
$$\Pi_L^{-1}\Big[\bigcup_n B_n\Big] = \bigcup_n \Pi_L^{-1}\big[B_n\big]$$
are also non-overlapping.

Remark 1.8 : Using a bijective mapping $\zeta : L \to \mathbb{R}^n$, where $n = \dim L$, we can transfer the above probability measure of the finite-dimensional subspace $L$ to the Borel field of $\mathbb{R}^n$, as follows:
$$\mathcal{P}^{\Pi_L, n}(B) = \mathcal{P}^{\Pi_L}\big(\zeta^{-1}(B)\big), \quad B \in \mathcal{U}(\mathbb{R}^n).$$
Hence, using the projection $\Pi_L$, we define a random variable into the probability space $(L, \mathcal{U}(L), \mathcal{P}^{\Pi_L})$, since the mapping so defined is $(\mathcal{U}(\Omega), \mathcal{U}(\mathbb{R}^n))$-measurable. We denote this r.v. by $\Xi^{\Pi_L}(\omega)$.


The measures $\mathcal{P}^{\Pi_{L_n}}$, being projections of the same measure, are compatible in a certain sense for different $n$. This compatibility condition is the generalization of the consistency condition of the Kolmogorov Theorem (Chapter 2 / Theorem 1.7).

Compatibility Condition 1.9 : Let the probability measure $\mathcal{P}_{\mathcal{X}}$ be defined on the measurable space $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$ and the family of measures $\mathcal{P}^{\Pi_L}$ be defined for all finite-dimensional subspaces $L$ of $\mathcal{X}$. Then for all $L_1 \subset L_2$ and $B_1 \in \mathcal{U}(L_1)$ we have
$$\mathcal{P}^{\Pi_{L_1}}(B_1) = \mathcal{P}^{\Pi_{L_2}}\big(\Pi_{L_1}^{-1}[B_1] \cap L_2\big) \tag{5}$$

Proof : Let $L_1 \subset L_2$ and $B_1 \in \mathcal{U}(L_1)$. Then the set $\Pi_{L_1}^{-1}[B_1]$ can also be written as $\Pi_{L_2}^{-1}[B_2]$, where $B_2 \in \mathcal{U}(L_2)$ is defined by the equality
$$B_2 = \big\{x \in L_2 : \Pi_{L_1} x \in B_1\big\}.$$
Since $\Pi_{L_1}^{-1}[B_1] = \Pi_{L_2}^{-1}[B_2]$, we have
$$\mathcal{P}^{\Pi_{L_1}}(B_1) = \mathcal{P}_{\mathcal{X}}\big(\Pi_{L_1}^{-1}[B_1]\big) = \mathcal{P}_{\mathcal{X}}\big(\Pi_{L_2}^{-1}[B_2]\big) = \mathcal{P}^{\Pi_{L_2}}(B_2).$$
Since $B_2 = \{x \in L_2 : \Pi_{L_1} x \in B_1\} = \Pi_{L_1}^{-1}[B_1] \cap L_2$, we have proved condition (5). ∎
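Projections of one and the same measure are automatically compatible, and this can be checked numerically (an illustration, not from the thesis). Take $\mathcal{X} = \mathbb{R}^4$ with a standard Gaussian measure, $L_1 = \operatorname{span}(e_1) \subset L_2 = \operatorname{span}(e_1, e_2)$ and $B_1 = \{t \le 0.5\}$; the set $B_2 = \{y \in L_2 : \Pi_{L_1} y \in B_1\}$ then receives exactly the probability of $B_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((100_000, 4))    # samples of P_X on X = R^4

p1 = x[:, :1]                            # Pi_L1 x  (coordinate along e1)
p2 = x[:, :2]                            # Pi_L2 x  (coordinates along e1, e2)

# B1 = {t in L1 : t <= 0.5};  B2 = {y in L2 : Pi_L1 y in B1}
lhs = (p1[:, 0] <= 0.5).mean()           # P^{Pi_L1}(B1)
rhs = (p2[:, 0] <= 0.5).mean()           # P^{Pi_L2}(B2): same event, seen in L2
print(lhs == rhs)                        # the two numbers coincide exactly
```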

3.1.4. Conditions for the existence of probability measures over a Polish space

We will now study the inverse problem, that is, to investigate the conditions under which a family of probability measures $\mathcal{P}^{\Pi_{L_n}}$, for a sequence $L_n$ of linear subspaces of $\mathcal{X}$ with $L_n \subset L_{n+1}$ and $\bigcup_n L_n$ dense in $\mathcal{X}$, can define a probability measure on $\mathcal{X}$. This family of probability measures is redundant, so we cannot associate it with a unique probability measure on $\mathcal{X}$ unless it satisfies Compatibility Condition 1.9. Using the


arguments of Example 1.4, one can determine $\mathcal{P}_{\mathcal{X}}$ on $\mathcal{U}(L)$ by knowing $\mathcal{P}^{\Pi_L}$. Hence, knowing $\mathcal{P}^{\Pi_{L_n}}$ for a sequence $L_n$ of linear subspaces of $\mathcal{X}$, we can define $\mathcal{P}_{\mathcal{X}}$ on $\mathcal{U}\big(\bigcup_n L_n\big)$, and since the $\sigma$-closure of this algebra coincides with $\mathcal{U}(\mathcal{X})$, we can define $\mathcal{P}_{\mathcal{X}}$ by the same token. Unfortunately, there are extra requirements for a family of probability measures satisfying Compatibility Condition 1.9 to correspond to a probability measure on $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$. The next theorem gives a necessary and sufficient condition for a weak distribution to correspond to some measure.

Theorem 1.10 : Let $S_\rho$ be the sphere of radius $\rho$: $S_\rho = \{x : \|x\| \le \rho\}$. A family of probability measures $\mathcal{P}^{\Pi_L}$, satisfying Compatibility Condition 1.9, is generated by some measure $\mathcal{P}_{\mathcal{X}}$ on the measurable space $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$ iff for every $\varepsilon > 0$ there exists an $\eta > 0$ such that for all $L$
$$\mathcal{P}^{\Pi_L}\big(S_\rho \cap L\big) \ge 1 - \varepsilon \quad \text{when } \rho > \eta \tag{6}$$

Proof : Necessity. If $\mathcal{P}^{\Pi_L}$ is generated by the measure $\mathcal{P}_{\mathcal{X}}$, then choosing $\eta > 0$ such that $\mathcal{P}_{\mathcal{X}}(S_\eta) > 1 - \varepsilon$ (this is possible since $\lim_{\eta \to +\infty} \mathcal{P}_{\mathcal{X}}(S_\eta) = \mathcal{P}_{\mathcal{X}}(\mathcal{X}) = 1$), we obtain, for $\rho > \eta$,
$$\mathcal{P}^{\Pi_L}\big(S_\rho \cap L\big) = \mathcal{P}_{\mathcal{X}}\big(\Pi_L^{-1}(S_\rho \cap L)\big) \ge \mathcal{P}_{\mathcal{X}}(S_\rho) \ge \mathcal{P}_{\mathcal{X}}(S_\eta) > 1 - \varepsilon.$$

The proof of sufficiency is more difficult. We define on the algebra $\mathcal{U}_0 = \mathcal{U}\big(\bigcup_n L_n\big)$ a finitely additive function $\mathcal{P}_{\mathcal{X}}$ by means of
$$\mathcal{P}_{\mathcal{X}}(A) = \mathcal{P}^{\Pi_L}(A), \quad A \subset L, \ A \in \mathcal{U}_0.$$
To convince ourselves that $\mathcal{P}_{\mathcal{X}}$ can be extended to a measure defined on $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$, it is sufficient to show that $\mathcal{P}_{\mathcal{X}}$ is continuous on $\mathcal{U}_0$, i.e., that for an arbitrary sequence of sets $A_n \in \mathcal{U}_0$ for which $A_n \supset A_{n+1}$ and $\bigcap_n A_n = \emptyset$,
$$\lim_{n \to \infty} \mathcal{P}_{\mathcal{X}}(A_n) = 0 \tag{7}$$
Let $A_n$ be a cylinder set with base in $L_n$, where $L_n \subset L_{n+1}$, and let $B_n \subset L_n$ be the base of $A_n$. We remark that it is sufficient to prove (7) merely for sets with closed bases. Indeed, choosing closed sets $C_n \subset B_n$ such that $\mathcal{P}^{\Pi_{L_n}}(B_n - C_n) < \varepsilon_n$, and then taking
$$D_n = \big\{x : \Pi_{L_m} x \in C_m, \ m = 1, \ldots, n\big\} \cap L_n,$$
we obtain closed sets for which
$$\mathcal{P}^{\Pi_{L_n}}(B_n - D_n) \le \sum_{m=1}^{n} \mathcal{P}^{\Pi_{L_m}}(B_m - C_m) \le \sum_{m=1}^{n} \varepsilon_m.$$


Hence, if $A_n' = \Pi_{L_n}^{-1}(D_n)$, then
$$A_{n+1}' \subset A_n', \qquad \bigcap_n A_n' = \bigcap_n A_n \qquad \text{and} \qquad \mathcal{P}_{\mathcal{X}}(A_n) \le \mathcal{P}_{\mathcal{X}}(A_n') + \sum_{m=1}^{n} \varepsilon_m.$$

If for cylinder sets with closed bases $\lim_{n \to \infty} \mathcal{P}_{\mathcal{X}}(A_n') = 0$, then, since $\sum_{n=1}^{\infty} \varepsilon_n$ can be taken suitably small, (7) is also fulfilled. Hence it will be assumed that the $B_n$ are closed sets. Then the $A_n$ are weakly closed sets (if $x_k \xrightarrow{w} x$ and $x_k \in A_n$, then $x \in A_n$). For every $\rho$ the set $S_\rho$ is also weakly closed and weakly compact. Since
$$S_\rho \cap \Big[\bigcap_{n=1}^{\infty} A_n\Big] = \emptyset, \quad \text{i.e.} \quad \bigcap_{n=1}^{\infty} \big(S_\rho \cap A_n\big) = \emptyset,$$
and since the sets $S_\rho \cap A_n$ are weakly closed and weakly compact with $S_\rho \cap A_n \supset S_\rho \cap A_{n+1}$, we must have $S_\rho \cap A_n = \emptyset$ for some $n$. This means that
$$\mathcal{P}_{\mathcal{X}}(A_n) = \mathcal{P}^{\Pi_{L_n}}\big(A_n \cap L_n\big) \le \mathcal{P}^{\Pi_{L_n}}\big(L_n - S_\rho \cap L_n\big) \le \varepsilon,$$
provided that $\rho > \eta$. From the arbitrariness of $\varepsilon > 0$, (7) follows. ∎

To understand more precisely the role of condition (6), let us note that, for an arbitrary family of measures, it is equivalent to the property of "uniform tightness" described by the following

Definition 1.11 : A family $\mathfrak{M}$ of probability measures on $\mathcal{X}$ is uniformly tight if, for every $\varepsilon > 0$, there exists a compact subset $K$ of $\mathcal{X}$ such that $\mathcal{P}(K') < \varepsilon$ for all $\mathcal{P} \in \mathfrak{M}$, where $K'$ denotes the complement of $K$.

Concerning the property of uniform tightness, the following important theorem holds.

Theorem 1.12 [Prohorov] : A family $\mathfrak{M}$ of probability measures on a Polish space $\mathcal{X}$ is relatively sequentially compact if and only if it is uniformly tight.

Hence condition (6) assures that for every sequence $\mathcal{P}_n$, $n = 1, 2, \ldots$, of members of $\mathfrak{M}$ there exists a convergent subsequence. Now consider a family of probability measures $\mathcal{P}^{\Pi_{L_n}}$ for a sequence $L_n$ of linear subspaces of $\mathcal{X}$ with $L_n \subset L_{n+1}$ and $\bigcup_n L_n$ dense in $\mathcal{X}$. Then, to assure the uniqueness of the limit of every convergent subsequence, we need Compatibility Condition 1.9. Indeed, if Compatibility Condition 1.9 holds and two subsequences had two different limits, it is easily seen that we would be led to a contradiction. Thus, the property of uniform tightness for a family of probability measures assures the existence of a convergent subsequence, while the compatibility condition assures the uniqueness of the limit of every convergent subsequence. Finally, the existence of a probability measure is assured by the following


Proposition 1.13 : Let $\mathcal{P}_n$, $n = 1, 2, \ldots$, be a relatively sequentially compact sequence of probability measures on a Polish space such that every convergent subsequence has the same limiting probability measure $\mathcal{P}$. Then $\mathcal{P}_n \to \mathcal{P}$ as $n \to \infty$.
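The role of condition (6) can be seen in a truncated numerical setting (an illustration under assumed product-Gaussian projections, not from the thesis): with summable per-axis variances the projected mass of the ball $S_\rho$ stays close to 1 uniformly in the dimension, while i.i.d. unit variances push essentially all mass outside $S_\rho$ once the dimension exceeds $\rho^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, trials = 10.0, 50_000

def ball_mass(variances):
    """Monte Carlo estimate of P^{Pi_Ln}(S_rho ∩ L_n) for a product Gaussian."""
    z = rng.standard_normal((trials, len(variances))) * np.sqrt(variances)
    return (np.linalg.norm(z, axis=1) <= rho).mean()

summable = [1.0 / k**2 for k in range(1, 201)]   # trace-class covariance
flat     = [1.0] * 200                           # identity covariance

print(ball_mass(summable))   # ~1.0: condition (6) holds uniformly in n
print(ball_mass(flat))       # ~0.0: condition (6) fails for n = 200 > rho^2
```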

Example 1.14 : Let $\mathcal{X}$ be the space $C([a,b])$ of continuous functions on the interval $[a,b]$ and $\mathcal{U}(C)$ the $\sigma$-field generated by the open subsets of $C([a,b])$. Let $\Pi_{(t_1, t_2, \ldots, t_N)} : C([a,b]) \to \mathbb{R}^N$ be the projection defined by
$$\Pi_{(t_1, t_2, \ldots, t_N)}\big[x(\cdot)\big] = \big(x(t_1), x(t_2), \ldots, x(t_N)\big), \quad x(\cdot) \in C([a,b])$$
For the moment we assume that $\mathcal{P}$ exists and is a probability measure over $C([a,b])$. We can thus define the sequence of finite-dimensional probability measures
$$\mathcal{P}^{\Pi_{(t_1, \ldots, t_N)}}(E) = \mathcal{P}\big(\big\{x(\cdot) \in C([a,b]) : \big(x(t_1), \ldots, x(t_N)\big) \in E\big\}\big) = \int_{A_1}\!\cdots\!\int_{A_N} f_{t_1, \ldots, t_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N,$$
for $E = A_1 \times \cdots \times A_N \in \mathcal{U}(\mathbb{R}^N)$. Now, $\mathcal{P}$ may be thought of as the limit of these finite-dimensional integrals, which have discrete arguments. The above limit exists and is a probability measure if and only if the family of densities $f_{t_1, \ldots, t_N}(x_1, \ldots, x_N)$ fulfils the conditions of the Kolmogorov theorem (Chapter 2 / Theorem 1.7), which are a special case of the conditions presented above for general measures.

Remark 1.15 : We note that for the definition of the probability measure $\mathcal{P}_{\mathcal{X}}$ on the measurable space $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$ it is sufficient to know the family of measures $\mathcal{P}_{L_z}$ defined for all one-dimensional subspaces $L_z$ of $\mathcal{X}$ spanned by a vector $z \in \mathcal{X}$. Thus, the measure $\mathcal{P}_{\mathcal{X}}$ is defined by its one-dimensional distributions (i.e. by its projections onto one-dimensional subspaces). The proof of this fact relies essentially on the characteristic-functional approach that will be studied in the next section. Using this remark we could try to define probability measures by prescribing their one-dimensional distributions. However, this cannot be done in general, since the resulting measure must satisfy the definition of a probability measure, a condition which is not, in general, fulfilled.


3.2. Cylinder functions and their Integral

It is interesting to note that some classes of functions can be integrated with respect to a projection of the probability measure defined on the Hilbert space. Related to this class, for example, are the cylinder functions defined below.

Definition 2.1 : A function $\mathcal{Z} : \mathcal{X} \to V$, where $V$¹ is a linear space, is called a cylinder function if there is a finite-dimensional subspace $L \subset \mathcal{X}$ and a finite-dimensional $\mathcal{U}(L)$-measurable function $g_L : L \to V$ such that
$$\mathcal{Z}(x) = g_L\big(\Pi_L[x]\big), \quad x \in \mathcal{X} \tag{1}$$
where $\Pi_L$ is the projection of $\mathcal{X}$ on $L$.

In a Hilbert space, a finite-dimensional projection $\Pi_L[\cdot]$ can be represented through the functionals $\langle l_1, y\rangle, \ldots, \langle l_n, y\rangle$, where the $l_i$ are $n$ linearly independent elements of the Hilbert space. With this form of projection it is possible to integrate cylinder functions of the form $g\big(\langle l_1, y\rangle, \ldots, \langle l_n, y\rangle\big)$,

which will play an important role in the sequel.

Let $(\Omega, \mathcal{U}(\Omega), \mathcal{P})$ be a probability space, $X : \Omega \to \mathcal{X}$ a stochastic process from the space of elementary events $\Omega$ to the separable, real Hilbert space $\mathcal{X}$, and $\mathcal{U}(\mathcal{X})$ a Borel field over $\mathcal{X}$. Let also $\mathcal{P}_{\mathcal{X}}$ be the induced probability measure. For every $n$, and for every vector element $\mathbf{l} = (l_1, l_2, \ldots, l_n) \in \mathcal{X}^n$, we consider the projection $\Pi_{\mathbf{l}} : \mathcal{X} \to L$,
$$\Pi_{\mathbf{l}}[x] = \frac{\langle l_1, x\rangle}{\|l_1\|}\frac{l_1}{\|l_1\|} + \frac{\langle l_2, x\rangle}{\|l_2\|}\frac{l_2}{\|l_2\|} + \cdots + \frac{\langle l_n, x\rangle}{\|l_n\|}\frac{l_n}{\|l_n\|}, \quad x \in \mathcal{X} \tag{2}$$
and the injection $\zeta : L \to \mathbb{R}^n$ defined by
$$\zeta\Big(\sum_{i=1}^{n} a_i\, l_i\Big) = (a_1, a_2, \ldots, a_n) \tag{3}$$
Based on the above discussion we have the restriction of the probability measure $\mathcal{P}_{\mathcal{X}}$ onto $\mathbb{R}^n$,
$$\mathcal{P}_{\mathbf{l}}(E) = \mathcal{P}_{\mathcal{X}}\big(\big\{x : \big(\langle l_1, x\rangle, \ldots, \langle l_n, x\rangle\big) \in E\big\}\big), \quad E \in \mathcal{U}(\mathbb{R}^n) \tag{4}$$
or
$$F_{\mathbf{l}}(\mathbf{a}) = F_{\mathbf{l}}(a_1, \ldots, a_n) = \mathcal{P}_{\mathcal{X}}\big(\big\{x \in \mathcal{X} : \langle l_i, x\rangle \le a_i, \ i = 1, \ldots, n\big\}\big), \quad \mathbf{a} \in \mathbb{R}^n \tag{5}$$
We then have the following

Definition 2.2 : Let $g : \mathbb{R}^n \to \mathbb{R}$ be an arbitrary integrable, measurable function defined on $\mathbb{R}^n$. For every $n$, and for every vector $\mathbf{l} = (l_1, l_2, \ldots, l_n) \in \mathcal{X}^n$ with $l_i \neq 0$ for every $i = 1, \ldots, n$, we define the infinite-dimensional integral of the function $g$ by the following relation
$$\int_{\mathcal{X}} g\big(\langle l_1, x\rangle, \ldots, \langle l_n, x\rangle\big)\,\mathcal{P}_{\mathcal{X}}(dx) = \int_{\mathbb{R}^n} g(t_1, \ldots, t_n)\,\mathcal{P}_{\mathbf{l}}(dt_1 \cdots dt_n) \tag{6}$$
where $\mathcal{P}_{\mathbf{l}}$ is defined by eq. (4).

¹ In our applications $V$ is usually $\mathbb{R}$ or $\mathbb{C}$.

Remark 2.3 : The above definition connects the integral over an infinite-dimensional space with an integral over a finite-dimensional subspace. To understand this connection more precisely, suppose that in some way we could define the infinite-dimensional integral on the l.h.s. of equation (6). Then we would have $\Pi_{\mathbf{l}}[x] \leftrightarrow \big(\langle l_1, x\rangle, \ldots, \langle l_n, x\rangle\big)$. Making the set-valued transformation
$$x = \Pi_{\mathbf{l}}^{-1}[\mathbf{t}] : \mathbb{R}^n \to \mathcal{X},$$
we have
$$\int_{\mathcal{X}} g\big(\Pi_{\mathbf{l}}[x]\big)\,\mathcal{P}_{\mathcal{X}}(dx) = \int_{\mathbb{R}^n} g(t_1, t_2, \ldots, t_n)\,\mathcal{P}_{\mathcal{X}}\big(\Pi_{\mathbf{l}}^{-1}[dt_1\, dt_2 \cdots dt_n]\big)$$
Using the definition of eq. (4) we see that
$$\mathcal{P}_{\mathcal{X}}\big(\Pi_{\mathbf{l}}^{-1}[E]\big) = \mathcal{P}_{\mathcal{X}}\big(\big\{x : \big(\langle l_1, x\rangle, \ldots, \langle l_n, x\rangle\big) \in E\big\}\big) = \mathcal{P}_{\mathbf{l}}(E), \quad E \in \mathcal{U}(\mathbb{R}^n).$$
Thus, the reduction of the infinite-dimensional integral is due to the fact that the integrable function $g : \mathbb{R}^n \to \mathbb{R}$ depends only on a finite-dimensional subspace of $\mathcal{X}$.
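The reduction described above can be checked by Monte Carlo in a finite-dimensional stand-in (an illustration, not from the thesis): for a standard Gaussian $\mathcal{P}_{\mathcal{X}}$ on $\mathbb{R}^{50}$, $n = 1$ and $g = \cos$, the induced one-dimensional measure of $\langle l, \cdot\rangle$ is $N(0, \|l\|^2)$, for which the right-hand side of (6) is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 50, 200_000
l = rng.uniform(-1, 1, d)                # a fixed element l of X
x = rng.standard_normal((n, d))          # samples of a standard Gaussian P_X

# l.h.s. of (6): integral of the cylinder function cos(<l, x>) over "all of X"
lhs = np.cos(x @ l).mean()

# r.h.s.: 1-D integral against P_l = N(0, ||l||^2),
# using the closed form ∫ cos(t) dN(0, s^2)(t) = exp(-s^2 / 2)
rhs = np.exp(-0.5 * l @ l)
print(abs(lhs - rhs))                    # small Monte Carlo error
```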

Remark 2.4 : As was illustrated above, one of the most important applications of the projection of a measure is the calculation of integrals of cylinder functions. The finite-dimensional measures are adapted to the form of the integrable cylinder function by means of a suitable projection. This flow of 'information' can be summarized in the diagram below.

Remark 2.5 : For $n = 1$ the above definition reduces to the equation
$$\int_{\mathcal{X}} g\big(\langle l, x\rangle\big)\,\mathcal{P}_{\mathcal{X}}(dx) = \int_{\mathbb{R}} g(t)\,\mathcal{P}_l(dt), \quad l \in \mathcal{X} \tag{7}$$
where
$$\mathcal{P}_l(a) = \mathcal{P}_{\mathcal{X}}\big(\big\{x : \langle l, x\rangle \le a\big\}\big), \quad \text{for every } l \in \mathcal{X}, \ a \in \mathbb{R} \tag{8}$$
The above equation will be used extensively in what follows.

$$\big(\Omega, \mathcal{U}(\Omega), \mathcal{P}\big) \xrightarrow{\;X(\cdot)\;} \big(\mathcal{X}, \mathcal{U}(\mathcal{X}), \mathcal{P}_{\mathcal{X}}\big) \xrightarrow{\;(\langle l_1, \cdot\rangle, \ldots, \langle l_n, \cdot\rangle)\;} \big(\mathbb{R}^n, \mathcal{U}(\mathbb{R}^n), \mathcal{P}_{\mathbf{l}}\big)$$
where $\Xi(\cdot)$ denotes the composite mapping from $\Omega$ to $\mathbb{R}^n$.


3.3. Characteristic Functional of the Probability Measure

In this section we will generalize the concept of the characteristic function to probability measures on real, separable Hilbert spaces. First we recall the definition of a functional of positive type.

Definition 3.1 : A function $\varphi$ from $\mathcal{X}$ into $\mathbb{C}$ (or $\mathbb{R}$) is called a functional of positive type if for any $x_1, x_2, \ldots, x_n \in \mathcal{X}$, $n = 1, 2, \ldots$, and any numbers $c_1, c_2, \ldots, c_n \in \mathbb{C}$ we have
$$\sum_{j,k=1}^{n} \varphi(x_j - x_k)\, c_j\, \overline{c_k} \ge 0 \tag{1}$$

We are now going to define the characteristic functional of a probability measure. Let $\mathcal{X}$ be a real, separable Hilbert space and $\mathcal{U}(\mathcal{X})$ the $\sigma$-field generated by the open subsets of $\mathcal{X}$. Thus, we have the following

Definition 3.2 : The characteristic functional $\mathcal{Y} : \mathcal{X} \to \mathbb{C}$ of a probability measure $\mathcal{P}_{\mathcal{X}}$ on $\mathcal{X}$ is defined by
$$\mathcal{Y}(x) = E^{\beta}\big[e^{i\langle x, y\rangle}\big] = \int_{\mathcal{X}} e^{i\langle x, y\rangle}\,\mathcal{P}_{\mathcal{X}}(dy), \quad x \in \mathcal{X} \tag{2}$$

Remark 3.3 : Since $\mathcal{P}_{\mathcal{X}}(\mathcal{X}) < \infty$, $\mathcal{Y}$ always exists. Moreover,
$$|\mathcal{Y}(x)| \le \mathcal{P}_{\mathcal{X}}(\mathcal{X}) = 1, \quad \forall x \in \mathcal{X} \tag{3}$$

Remark 3.4 : Using the distribution of the measurable function $\langle x, \cdot\rangle$ and Remark 2.5, we have
$$\mathcal{Y}(x) = \int_{\mathcal{X}} e^{i\langle x, y\rangle}\,\mathcal{P}_{\mathcal{X}}(dy) = \int_{-\infty}^{+\infty} e^{it}\,\mathcal{P}_x(dt), \quad x \in \mathcal{X} \tag{4}$$
Hence, this is how the probability measure can be defined by using the family of measures $\mathcal{P}_{L_x}$ defined for all one-dimensional subspaces $L_x$ of $\mathcal{X}$ spanned by a vector $x \in \mathcal{X}$ (see Remark 1.15).
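As a numerical illustration (not from the thesis, and anticipating the Gaussian characteristic functional derived in Section 7), the sketch below estimates $\mathcal{Y}(x)$ for a Gaussian measure with nuclear covariance $C = \operatorname{diag}(1/k^2)$ truncated to 20 modes and compares it with the closed form $\exp\{-\tfrac{1}{2}\langle Cx, x\rangle\}$:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 20, 300_000
lam = 1.0 / np.arange(1, d + 1) ** 2            # eigenvalues of the covariance C
y = rng.standard_normal((n, d)) * np.sqrt(lam)  # samples of the Gaussian measure

x = rng.uniform(-1, 1, d)                       # a fixed test element x of X
estimate = np.exp(1j * (y @ x)).mean()          # Y(x) = ∫ exp(i<x, y>) P(dy)
closed_form = np.exp(-0.5 * np.sum(lam * x**2)) # exp(-<C x, x> / 2)
print(abs(estimate - closed_form) < 0.01)
```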

Concerning the smoothness of $\mathcal{Y}$ we have the following theorem.

Theorem 3.5 : Let $\mathcal{P}$ be a probability measure on $\mathcal{X}$ and let $\mathcal{Y}$ be the characteristic functional of $\mathcal{P}$. Then $\mathcal{Y}$ is uniformly continuous.

Proof : Let $\varepsilon > 0$ be given. Then there exists $r > 0$ such that $\mathcal{P}(S_r) > 1 - \frac{\varepsilon}{4}$, where $S_r = \{x : \|x\| \le r\}$. Then
$$|\mathcal{Y}(x) - \mathcal{Y}(y)| \le \int_{\mathcal{X}} \big|e^{i\langle x, z\rangle} - e^{i\langle y, z\rangle}\big|\,\mathcal{P}(dz) =$$
$$= \int_{S_r} \big|e^{i\langle x, z\rangle} - e^{i\langle y, z\rangle}\big|\,\mathcal{P}(dz) + \int_{S_r'} \big|e^{i\langle x, z\rangle} - e^{i\langle y, z\rangle}\big|\,\mathcal{P}(dz) \le$$
$$\le \int_{S_r} \big|\langle x, z\rangle - \langle y, z\rangle\big|\,\mathcal{P}(dz) + 2 \cdot \frac{\varepsilon}{4} \le r\,\|x - y\| + \frac{\varepsilon}{2}.$$
Choose $\delta > 0$ such that $\delta < \frac{\varepsilon}{2r}$. Then $\|x - y\| < \delta$ implies $|\mathcal{Y}(x) - \mathcal{Y}(y)| < \varepsilon$. Hence $\mathcal{Y}$ is uniformly continuous. ∎

If $\mathcal{X}$ is an infinite-dimensional Hilbert space, we can show something more for $\mathcal{Y}$. For the sake of convenience we introduce the following

Definition 3.6 : An operator $S$ is called a nuclear operator of $\mathcal{X}$ if

1. it has finite trace, i.e., for some orthonormal basis $\{e_i\}$, $\displaystyle\sum_{i=1}^{\infty} \langle S e_i, e_i\rangle < \infty$;

2. it is self-adjoint;

3. it is positive-definite.

$\mathcal{S}$ denotes the class of nuclear operators of $\mathcal{X}$.
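A quick finite-dimensional check of item 1 (an illustration, not from the thesis): for a self-adjoint, positive operator built from the summable eigenvalues $\lambda_k = 1/k^2$, the trace $\sum_i \langle S e_i, e_i\rangle$ is finite and does not depend on the orthonormal basis in which it is computed:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 300
lam = 1.0 / np.arange(1, d + 1) ** 2     # eigenvalues of S (positive, summable)

# S = Q diag(lam) Q^T in a random orthonormal basis Q: self-adjoint, positive
q, _ = np.linalg.qr(rng.standard_normal((d, d)))
S = q @ np.diag(lam) @ q.T

trace_in_basis = sum(S[i, i] for i in range(d))   # sum of <S e_i, e_i>
print(abs(trace_in_basis - lam.sum()) < 1e-8)     # equals the sum of eigenvalues
```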

The next theorem gives necessary and sufficient conditions for a functional $\mathcal{Y}$ to be the characteristic functional of some measure on $(\mathcal{X}, \mathcal{U}(\mathcal{X}))$. It is thus a generalization of Bochner's theorem on positive-definite functions to the Hilbert-space setting.

Theorem 3.7 [Minlos-Sazonov] : A functional $\mathcal{Y}$ on $\mathcal{X}$ is the characteristic functional of some probability measure on $\mathcal{U}(\mathcal{X})$ if and only if

a) $\mathcal{Y}(0) = 1$ and $\mathcal{Y}$ is a functional of positive type;

b) for every $\varepsilon > 0$ there exists $C_\varepsilon \in \mathcal{S}$ such that
$$1 - \operatorname{Re}\mathcal{Y}(x) \le \langle C_\varepsilon x, x\rangle + \varepsilon, \quad \text{for all } x \in \mathcal{X}. \tag{5}$$

Remark 3.8 : b) implies, in particular, that when $\langle C_\varepsilon x, x\rangle$ is small, then $1 - \operatorname{Re}\mathcal{Y}(x)$ is small. Note that $\langle C_\varepsilon x, x\rangle$ can be small even if $\|x\|$ is big. Note also that $\{x : \langle C_\varepsilon x, x\rangle < a\}$ is an ellipsoid with semi-axes $(a/\lambda_1)^{1/2}, (a/\lambda_2)^{1/2}, \ldots$, where the $\lambda_n$ are the eigenvalues of $C_\varepsilon$.

Proof : Necessity. Let $\mathcal{Y}(x) = \int_{\mathcal{X}} e^{i\langle x, y\rangle}\,\mathcal{P}(dy)$ and let $\varepsilon > 0$ be given. Choose $0 < r < \infty$ such that $\mathcal{P}(S_r) > 1 - \frac{\varepsilon}{2}$, where $S_r$ is the ball $\{x : \|x\| \le r\}$. Then
$$\mathcal{Y}(x) = \int_{S_r} e^{i\langle x, y\rangle}\,\mathcal{P}(dy) + \int_{S_r'} e^{i\langle x, y\rangle}\,\mathcal{P}(dy)$$


Note that
$$\Big|\int_{S_r'} e^{i\langle x, y\rangle}\,\mathcal{P}(dy)\Big| \le \mathcal{P}(S_r') < \frac{\varepsilon}{2}$$
Hence it is sufficient to show that
$$1 - \operatorname{Re}\int_{S_r} e^{i\langle x, y\rangle}\,\mathcal{P}(dy) \le \langle C_\varepsilon x, x\rangle + \frac{\varepsilon}{2}$$
for some operator $C_\varepsilon \in \mathcal{S}$. But
$$1 - \operatorname{Re}\int_{S_r} e^{i\langle x, y\rangle}\,\mathcal{P}(dy) = \int_{S_r} \big[1 - \cos\langle x, y\rangle\big]\,\mathcal{P}(dy) + \mathcal{P}(S_r') \le \int_{S_r} \big[1 - \cos\langle x, y\rangle\big]\,\mathcal{P}(dy) + \frac{\varepsilon}{2}$$
Recall that $1 - \cos\theta \le \frac{1}{2}\theta^2$ for all real $\theta$. Hence
$$\int_{S_r} \big[1 - \cos\langle x, y\rangle\big]\,\mathcal{P}(dy) \le \frac{1}{2}\int_{S_r} \langle x, y\rangle^2\,\mathcal{P}(dy)$$
The same argument as in the proof of Theorem 4.10 below shows that there exists $C_\varepsilon \in \mathcal{S}$ such that
$$\langle C_\varepsilon x, y\rangle = \frac{1}{2}\int_{S_r} \langle x, z\rangle\,\langle y, z\rangle\,\mathcal{P}(dz)$$
The desired conclusions follow immediately.

Remark : Observe that if the covariance operator $C_{\mathcal{P}}$ of $\mathcal{P}$ exists, then we have
$$1 - \operatorname{Re}\mathcal{Y}(x) \le \frac{1}{2}\langle C_{\mathcal{P}}\, x, x\rangle, \quad \text{for all } x \in \mathcal{X}$$
But $C_{\mathcal{P}}$, in general, is not an $\mathcal{S}$-operator. Of course, if $C_{\mathcal{P}} \in \mathcal{S}$, then b) is trivially satisfied by taking $C_\varepsilon = \frac{1}{2} C_{\mathcal{P}}$ for all $\varepsilon > 0$.

Remark : Observe also that $C_\varepsilon$ is the covariance operator of the following Borel measure $\tilde{\mathcal{P}}$ on $\mathcal{X}$,
$$\tilde{\mathcal{P}}(E) = \frac{1}{2}\,\mathcal{P}\big(E \cap S_r\big), \quad E \in \mathcal{U}(\mathcal{X})$$
Clearly $\tilde{\mathcal{P}}$ satisfies the hypothesis of Theorem 4.10, and thus its covariance operator $C_\varepsilon$ is an $\mathcal{S}$-operator. This is another proof that $C_\varepsilon \in \mathcal{S}$.
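The bound of the first remark can be checked pointwise for a Gaussian functional (an illustration, not from the thesis): with $\mathcal{Y}(x) = e^{-\frac{1}{2}\langle Cx, x\rangle}$ the inequality reduces to the elementary estimate $1 - e^{-u} \le u$:

```python
import numpy as np

rng = np.random.default_rng(6)
lam = 1.0 / np.arange(1, 31) ** 2        # covariance eigenvalues of a Gaussian

ok = True
for _ in range(1000):
    x = rng.uniform(-5, 5, 30)
    q = float(np.sum(lam * x**2))        # <C x, x>
    Y = np.exp(-0.5 * q)                 # Gaussian characteristic functional
    ok = ok and (1 - Y) <= 0.5 * q       # 1 - Re Y(x) <= <C x, x> / 2
print(ok)
```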

Sufficiency. (We will need to use Bochner's Theorem for the finite-dimensional space $\mathbb{R}^n$.)

Step 1. We first derive some properties implied by a).


a1) $|\mathcal{Y}(x)| \le 1$ and $\mathcal{Y}(-x) = \overline{\mathcal{Y}(x)}$ for all $x \in \mathcal{X}$;

a2) $|\mathcal{Y}(x) - \mathcal{Y}(y)|^2 \le 4\,\big|1 - \mathcal{Y}(x - y)\big|$ for all $x, y \in \mathcal{X}$;

a3) $\big(1 - |\mathcal{Y}(x)|\big)^2 \le 2\,\big(1 - \operatorname{Re}\mathcal{Y}(x)\big)$ for all $x \in \mathcal{X}$.

a1) : Take $n = 2$, $x_1 = 0$ and $x_2 = x$. Part a) implies that the following matrix is positive definite in the sense of linear algebra:
$$\begin{pmatrix} 1 & \mathcal{Y}(-x) \\ \mathcal{Y}(x) & 1 \end{pmatrix}$$
In particular the matrix must be Hermitian, so $\mathcal{Y}(-x) = \overline{\mathcal{Y}(x)}$. Moreover, its determinant is non-negative, i.e. $1 - \mathcal{Y}(x)\,\mathcal{Y}(-x) \ge 0$. Hence $|\mathcal{Y}(x)| \le 1$.

a2) : Take $n = 3$, $x_1 = 0$, $x_2 = x$ and $x_3 = y$. Part a) implies that the matrix
$$\begin{pmatrix} 1 & \overline{\mathcal{Y}(x)} & \overline{\mathcal{Y}(y)} \\ \mathcal{Y}(x) & 1 & \mathcal{Y}(x - y) \\ \mathcal{Y}(y) & \overline{\mathcal{Y}(x - y)} & 1 \end{pmatrix}$$
is positive definite. Hence its determinant $D \ge 0$. But
$$D = 1 - |\mathcal{Y}(x - y)|^2 - |\mathcal{Y}(x)|^2 - |\mathcal{Y}(y)|^2 + 2\operatorname{Re}\big[\overline{\mathcal{Y}(x)}\,\mathcal{Y}(y)\,\mathcal{Y}(x - y)\big] =$$
$$= 1 - |\mathcal{Y}(x - y)|^2 - |\mathcal{Y}(x) - \mathcal{Y}(y)|^2 - 2\operatorname{Re}\big[\overline{\mathcal{Y}(x)}\,\mathcal{Y}(y)\big(1 - \mathcal{Y}(x - y)\big)\big],$$
since $|\mathcal{Y}(x)|^2 + |\mathcal{Y}(y)|^2 = |\mathcal{Y}(x) - \mathcal{Y}(y)|^2 + 2\operatorname{Re}\big[\overline{\mathcal{Y}(x)}\,\mathcal{Y}(y)\big]$. From $D \ge 0$,
$$|\mathcal{Y}(x) - \mathcal{Y}(y)|^2 \le 1 - |\mathcal{Y}(x - y)|^2 - 2\operatorname{Re}\big[\overline{\mathcal{Y}(x)}\,\mathcal{Y}(y)\big(1 - \mathcal{Y}(x - y)\big)\big].$$
Note that
$$1 - |\mathcal{Y}(x - y)|^2 \le 2\,\big(1 - |\mathcal{Y}(x - y)|\big) \le 2\,\big|1 - \mathcal{Y}(x - y)\big|$$
and, since $|\mathcal{Y}(x)| \le 1$ and $|\mathcal{Y}(y)| \le 1$ by a1),
$$-2\operatorname{Re}\big[\overline{\mathcal{Y}(x)}\,\mathcal{Y}(y)\big(1 - \mathcal{Y}(x - y)\big)\big] \le 2\,\big|1 - \mathcal{Y}(x - y)\big|,$$
i.e. $|\mathcal{Y}(x) - \mathcal{Y}(y)|^2 \le 4\,\big|1 - \mathcal{Y}(x - y)\big|$.

a3) : Note that if $|z| \le 1$ then
$$|1 - z|^2 = (1 - z)\big(1 - \overline{z}\big) = 1 - 2\operatorname{Re} z + |z|^2 \le 2 - 2\operatorname{Re} z = 2\,(1 - \operatorname{Re} z)$$
From a1) we know that $|\mathcal{Y}(x)| \le 1$; therefore
$$\big(1 - |\mathcal{Y}(x)|\big)^2 \le \big|1 - \mathcal{Y}(x)\big|^2 \le 2\,\big(1 - \operatorname{Re}\mathcal{Y}(x)\big).$$

Remark : Property a2) says that if $\mathcal{Y}(x)$ is continuous at the origin (w.r.t. whatever topology), then $\mathcal{Y}(x)$ is continuous on the whole of $\mathcal{X}$ (w.r.t. the same topology). Property a3) (more precisely, the inequality $|1 - \mathcal{Y}(x)|^2 \le 2\,(1 - \operatorname{Re}\mathcal{Y}(x))$ established in its proof) says that the continuity of $\operatorname{Re}\mathcal{Y}$ at the origin implies the continuity of $\mathcal{Y}$ at the origin. Compare with b) of this Theorem.

Step 2. Let $\{e_n\}$ be a fixed orthonormal basis of $\mathcal{X}$. For each $n \ge 1$, define $\mathcal{Y}_{e_1, \ldots, e_n}$ on $\mathbb{R}^n$ by
$$\mathcal{Y}_{e_1, \ldots, e_n}(a_1, \ldots, a_n) = \mathcal{Y}\big(a_1 e_1 + \cdots + a_n e_n\big)$$
Note that $\operatorname{Re}\mathcal{Y}$ is continuous at the origin by b). Hence $\mathcal{Y}$ is continuous on $\mathcal{X}$ by a2) and a3). Hence $\mathcal{Y}_{e_1, \ldots, e_n}$ is continuous and positive definite, and $\mathcal{Y}_{e_1, \ldots, e_n}(0, \ldots, 0) = 1$. Bochner's Theorem for $\mathbb{R}^n$ gives us a family of probability measures $\mathcal{P}_n$ such that
$$\mathcal{Y}_{e_1, \ldots, e_n}(a_1, \ldots, a_n) = \int_{\mathbb{R}^n} e^{i\,\mathbf{a}\cdot\mathbf{y}}\,\mathcal{P}_n(d\mathbf{y}), \quad \mathbf{a} = (a_1, \ldots, a_n)$$
It is easy to see that $\{\mathcal{P}_n\}$ is a consistent family. Hence Kolmogorov's Theorem implies the existence of a probability space $(\Omega, \mathcal{P}_\Omega)$ and a sequence of random variables $X_n$ such that
$$\mathcal{P}_n = \mathcal{P}_\Omega \circ (X_1, X_2, \ldots, X_n)^{-1}, \quad n = 1, 2, \ldots$$
Therefore,
$$\mathcal{Y}\big(a_1 e_1 + a_2 e_2 + \cdots + a_n e_n\big) = \int_{\mathbb{R}^n} e^{i\,\mathbf{a}\cdot\mathbf{y}}\,\mathcal{P}_n(d\mathbf{y}) = \int_{\Omega} e^{i(a_1 X_1 + a_2 X_2 + \cdots + a_n X_n)}\,d\mathcal{P}_\Omega, \quad \mathbf{a} = (a_1, \ldots, a_n).$$

Step 3. Suppose we can show that $\sum_{n=1}^{\infty} X_n^2 < \infty$ almost surely (this will be shown in Step 4); then we are done. To see this, define
$$X(\omega) = \sum_{n=1}^{\infty} X_n(\omega)\, e_n, \quad \omega \in \Omega$$
Then $X$ is measurable from $\Omega$ into $\mathcal{X}$. Define $\mathcal{P} = \mathcal{P}_\Omega \circ X^{-1}$. $\mathcal{P}$ is a Borel probability measure on $\mathcal{X}$. Let $\Pi_n$ be the orthogonal projection of $\mathcal{X}$ onto the span of $e_1, e_2, \ldots, e_n$, i.e.
$$\Pi_n x = \langle x, e_1\rangle e_1 + \langle x, e_2\rangle e_2 + \cdots + \langle x, e_n\rangle e_n, \quad x \in \mathcal{X}.$$
Then
$$\Pi_n X = \sum_{k=1}^{n} X_k\, e_k$$
By Step 2 we have
$$\mathcal{Y}(\Pi_n x) = \int_{\Omega} e^{i\langle \Pi_n x,\, X\rangle}\,d\mathcal{P}_\Omega.$$
Now letting $n \to \infty$ and observing that $\Pi_n x \to x$ in $\mathcal{X}$ as $n \to \infty$, we have $\mathcal{Y}(\Pi_n x) \to \mathcal{Y}(x)$ by the continuity of $\mathcal{Y}$. Applying Lebesgue's Dominated Convergence Theorem,
$$\int_{\Omega} e^{i\langle \Pi_n x,\, X\rangle}\,d\mathcal{P}_\Omega \to \int_{\Omega} e^{i\langle x,\, X\rangle}\,d\mathcal{P}_\Omega \quad \text{as } n \to \infty$$
Hence we have
$$\mathcal{Y}(x) = \int_{\Omega} e^{i\langle x,\, X\rangle}\,d\mathcal{P}_\Omega = \int_{\mathcal{X}} e^{i\langle x, y\rangle}\,\mathcal{P}(dy), \quad x \in \mathcal{X}.$$

Step 4. It remains to show that $\sum_{n=1}^{\infty} X_n^2 < \infty$ almost surely. First note the Gaussian integral identity
$$\Big(\frac{1}{2\pi}\Big)^{n/2}\int_{\mathbb{R}^n} e^{i(a_1 y_1 + a_2 y_2 + \cdots + a_n y_n)}\, e^{-\frac{1}{2}(y_1^2 + y_2^2 + \cdots + y_n^2)}\,dy_1 \cdots dy_n = e^{-\frac{1}{2}(a_1^2 + a_2^2 + \cdots + a_n^2)}$$
Therefore
$$\int_{\Omega} e^{-\frac{1}{2}(X_{k+1}^2 + \cdots + X_{k+n}^2)}\,d\mathcal{P}_\Omega = \int_{\Omega} \Big(\frac{1}{2\pi}\Big)^{n/2}\int_{\mathbb{R}^n} \exp\Big(i\sum_{j=1}^{n} X_{k+j}\, y_j\Big)\exp\Big(-\frac{1}{2}\sum_{j=1}^{n} y_j^2\Big)\,dy_1 \cdots dy_n\, d\mathcal{P}_\Omega =$$
$$= \Big(\frac{1}{2\pi}\Big)^{n/2}\int_{\mathbb{R}^n} \bigg[\int_{\Omega} \exp\Big(i\sum_{j=1}^{n} X_{k+j}\, y_j\Big)\,d\mathcal{P}_\Omega\bigg] \exp\Big(-\frac{1}{2}\sum_{j=1}^{n} y_j^2\Big)\,dy_1 \cdots dy_n =$$
$$= \Big(\frac{1}{2\pi}\Big)^{n/2}\int_{\mathbb{R}^n} \mathcal{Y}\big(y_1 e_{k+1} + y_2 e_{k+2} + \cdots + y_n e_{k+n}\big)\exp\Big(-\frac{1}{2}\sum_{j=1}^{n} y_j^2\Big)\,dy_1 \cdots dy_n =$$
$$= \Big(\frac{1}{2\pi}\Big)^{n/2}\int_{\mathbb{R}^n} \operatorname{Re}\Big[\mathcal{Y}\big(y_1 e_{k+1} + y_2 e_{k+2} + \cdots + y_n e_{k+n}\big)\Big]\exp\Big(-\frac{1}{2}\sum_{j=1}^{n} y_j^2\Big)\,dy_1 \cdots dy_n,$$
since the left-hand side is real.

Now let $\varepsilon > 0$ be given; we use assumption b) to get $C_\varepsilon \in \mathcal{S}$ such that
$$1 - \operatorname{Re}\mathcal{Y}(x) \le \langle C_\varepsilon x, x\rangle + \varepsilon, \quad \text{for all } x \in \mathcal{X}.$$
Therefore,


$$1 - \int_{\Omega} e^{-\frac{1}{2}(X_{k+1}^2 + \cdots + X_{k+n}^2)}\,d\mathcal{P}_\Omega = \int_{\mathbb{R}^n} \Big(1 - \operatorname{Re}\Big[\mathcal{Y}\big(y_1 e_{k+1} + y_2 e_{k+2} + \cdots + y_n e_{k+n}\big)\Big]\Big)\, p(d\mathbf{y}),$$
where
$$p(d\mathbf{y}) = \Big(\frac{1}{2\pi}\Big)^{n/2}\exp\Big(-\frac{1}{2}\big(y_1^2 + y_2^2 + \cdots + y_n^2\big)\Big)\,dy_1 \cdots dy_n$$
is a probability measure on $\mathbb{R}^n$. Then we have
$$\int_{\mathbb{R}^n} \Big(1 - \operatorname{Re}\Big[\mathcal{Y}\big(y_1 e_{k+1} + \cdots + y_n e_{k+n}\big)\Big]\Big)\, p(d\mathbf{y}) \le$$
$$\le \int_{\mathbb{R}^n} \Big[\varepsilon + \big\langle C_\varepsilon\big(y_1 e_{k+1} + \cdots + y_n e_{k+n}\big),\; y_1 e_{k+1} + \cdots + y_n e_{k+n}\big\rangle\Big]\, p(d\mathbf{y}) =$$
$$= \varepsilon + \int_{\mathbb{R}^n} \sum_{i=1}^{n}\sum_{j=1}^{n} y_i\, y_j\, \langle C_\varepsilon e_{k+i}, e_{k+j}\rangle\, p(d\mathbf{y}) = \varepsilon + \sum_{i=1}^{n}\sum_{j=1}^{n} \langle C_\varepsilon e_{k+i}, e_{k+j}\rangle \int_{\mathbb{R}^n} y_i\, y_j\, p(d\mathbf{y}) =$$
$$= \varepsilon + \sum_{j=1}^{n} \langle C_\varepsilon e_{k+j}, e_{k+j}\rangle,$$
since $\int_{\mathbb{R}^n} y_i\, y_j\, p(d\mathbf{y}) = \delta_{ij}$. Letting $n \to \infty$, applying Lebesgue's Dominated Convergence Theorem to the left-hand side and remembering that $C_\varepsilon \in \mathcal{S}$, we obtain
$$1 - \int_{\Omega} e^{-\frac{1}{2}\sum_{j=1}^{\infty} X_{k+j}^2}\,d\mathcal{P}_\Omega \le \varepsilon + \sum_{m=k+1}^{\infty} \langle C_\varepsilon e_m, e_m\rangle \le 2\varepsilon \quad \text{whenever } k \ge k_0, \text{ say}.$$
Hence,
$$\int_{\Omega} e^{-\frac{1}{2}\sum_{j=1}^{\infty} X_{k+j}^2}\,d\mathcal{P}_\Omega \ge 1 - 2\varepsilon, \quad k \ge k_0.$$
Finally, since $e^{-\frac{1}{2}\sum_{j=1}^{\infty} X_{k+j}^2}$ vanishes where $\sum_{n=1}^{\infty} X_n^2 = \infty$ and is at most 1 elsewhere,
$$\mathcal{P}_\Omega\Big\{\sum_{n=1}^{\infty} X_n^2 < \infty\Big\} \ge \int_{\Omega} e^{-\frac{1}{2}\sum_{j=1}^{\infty} X_{k+j}^2}\,d\mathcal{P}_\Omega \ge 1 - 2\varepsilon.$$
Thus,
$$\mathcal{P}_\Omega\big\{X_1^2 + \cdots + X_n^2 + \cdots < \infty\big\} \ge 1 - 2\varepsilon, \quad \text{for any } \varepsilon > 0.$$
Of course we must then have
$$\mathcal{P}_\Omega\big\{X_1^2 + \cdots + X_n^2 + \cdots < \infty\big\} = 1,$$
which is what we wanted to prove. ∎
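The Gaussian integral identity that opens Step 4 can be verified by Monte Carlo (an illustration, not from the thesis), reading the left-hand side as $\int e^{i\,\mathbf{a}\cdot\mathbf{y}}\,p(d\mathbf{y})$ with $p$ the standard Gaussian measure on $\mathbb{R}^3$:

```python
import numpy as np

rng = np.random.default_rng(8)
a = np.array([0.5, -1.0, 0.8])

y = rng.standard_normal((500_000, 3))    # samples of the standard Gaussian p
lhs = np.exp(1j * (y @ a)).mean()        # ∫ exp(i a·y) p(dy)
rhs = np.exp(-0.5 * (a @ a))             # exp(-|a|^2 / 2)
print(abs(lhs - rhs) < 0.01)
```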


Finally we present some basic properties concerning operations with characteristic functionals

Theorem 3.9 : a) If $\mathcal{Y}_1(x)$ and $\mathcal{Y}_2(x)$ are characteristic functionals and $p_1+p_2=1$, where $p_1,p_2$ are non-negative numbers, then $p_1\mathcal{Y}_1(x)+p_2\mathcal{Y}_2(x)$ is also a characteristic functional.

b) If $\mathcal{Y}_1(x)$ and $\mathcal{Y}_2(x)$ are characteristic functionals, then $\mathcal{Y}_1(x)\cdot\mathcal{Y}_2(x)$ is also a characteristic functional.

c) If $\mathcal{Y}_s(x)$ is a family of characteristic functionals depending upon a random parameter $s$, and $F(s)$ is the distribution of $s$, then
$$\int_{S}\mathcal{Y}_s(x)\,dF(s)$$
is also a characteristic functional.

d) Let $\mathcal{Y}_X(x)$ be the characteristic functional of the process $X(t;\beta)$ and $\mathcal{Y}_Y(x)$ the characteristic functional of another process $Y(t;\beta)$ that is a linear transformation of $X(t;\beta)$.

d1) If $Y(t;\beta)=X(t;\beta)+a(t)$, where $a(t)$ is a non-random function, then
$$\mathcal{Y}_Y(x)=\mathcal{Y}_X(x)\cdot\exp\left\{i\int_{T}a(t)\,x(t)\,dt\right\}$$

d2) If $A_L$ is an integral linear operator of the form
$$Y(t;\beta)=A_LX(t;\beta)=\int_{T}G(s,t)\,X(s;\beta)\,ds$$
where $G(s,t)$ is a suitable non-random function, then
$$\mathcal{Y}_Y(x)=\mathcal{Y}_X\left(A_L^{*}x\right)=\mathcal{Y}_X\left(\int_{T}G(t,s)\,x(s)\,ds\right)$$
where $A_L^{*}$ is the adjoint of $A_L$.

Proof : Properties a) and c) follow from the corresponding properties of probability measures and the continuity properties of the infinite dimensional Fourier transform. Property b) follows since $\mathcal{Y}_1(x)\cdot\mathcal{Y}_2(x)$ is the characteristic functional of the sum $X_1(t;\beta)+X_2(t;\beta)$ of two independent processes with characteristic functionals $\mathcal{Y}_1(x)$ and $\mathcal{Y}_2(x)$, respectively.

Finally, properties d) can be proven immediately using the definition of the characteristic functional.

Remark 3.10 : The last group of properties provides essential tools for combining different characteristic functionals to create new ones better suited to solving Functional Differential Equations. As we shall see in Chapter 5, the simplest application of these properties is the convex superposition of Gaussian functionals, which yields a representation with local characteristics (kernel characteristic functionals).
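The transformation rules d1)–d2) have an exact finite-dimensional analogue for characteristic functions: if $Y=AX+a$ then $\phi_Y(\boldsymbol{\upsilon})=e^{i\langle a,\boldsymbol{\upsilon}\rangle}\phi_X(A^{T}\boldsymbol{\upsilon})$. The sketch below (an added illustration; the matrix, shift and test point are arbitrary assumptions) verifies this empirically:

```python
import numpy as np

# Finite-dimensional analogue of properties d1)-d2): for Y = A X + a,
#     phi_Y(v) = exp(i <a, v>) * phi_X(A^T v),
# mirroring Y_Y(x) = Y_X(A* x) * exp{ i ∫ a(t) x(t) dt }.

rng = np.random.default_rng(1)
X = rng.standard_normal((100_000, 3))            # X ~ N(0, I_3)
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, -0.3]])                 # linear operator
a = np.array([0.2, -0.1])                        # deterministic shift
Y = X @ A.T + a

v = np.array([0.4, -0.7])
phi_Y = np.exp(1j * Y @ v).mean()                # empirical char. function of Y
phi_X_at_ATv = np.exp(1j * X @ (A.T @ v)).mean() # empirical phi_X(A^T v)
rhs = np.exp(1j * np.dot(a, v)) * phi_X_at_ATv

print(abs(phi_Y - rhs) < 1e-9)                   # True: the identity holds sample-wise
```

Since $\langle Y_i,v\rangle=\langle X_i,A^{T}v\rangle+\langle a,v\rangle$ holds per sample, the two empirical averages agree up to floating-point error, not just up to Monte Carlo error.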


3.4. Mean and Covariance Operator for Probability Measures

Now we are going to generalize the concept of covariance functions to a probability measure defined on a real, separable Hilbert space $\mathcal{H}$.

Definition 4.1 : Let $\mu$ be a probability measure on $\mathcal{H}$. The mean value of $\mu$ is an element $m_\mu\in\mathcal{H}$ such that
$$\langle m_\mu,x\rangle=\int_{\mathcal{H}}\langle z,x\rangle\,\mu(dz),\qquad x\in\mathcal{H}\tag{1}$$

Remark 4.2 : In general $m_\mu$ does not exist. However, if $\int_{\mathcal{H}}\|x\|\,\mu(dx)<\infty$ then $m_\mu$ exists and
$$\|m_\mu\|\le\int_{\mathcal{H}}\|x\|\,\mu(dx)\tag{2}$$

Remark 4.3 : Using the distribution of the measurable function $\langle x,\cdot\rangle$,
$$\mu_x(a)=\mu\left\{y\in\mathcal{H}:\langle x,y\rangle\le a\right\},$$
we have that
$$\langle m_\mu,x\rangle=\int_{\mathcal{H}}\langle z,x\rangle\,\mu(dz)=\int_{-\infty}^{\infty}t\,\mu_x(dt),\qquad x\in\mathcal{H}\tag{3}$$

Analogously we define the covariance operator.

Definition 4.4 : Let $\mu$ be a probability measure on $\mathcal{H}$. The covariance operator $C_\mu$ of $\mu$ is defined by
$$\langle C_\mu x,y\rangle=\int_{\mathcal{H}}\langle x,z\rangle\,\langle y,z\rangle\,\mu(dz),\qquad x,y\in\mathcal{H}\tag{4}$$

Remark 4.5 : The operator $C_\mu$ may not exist. If $C_\mu$ exists, it is positive definite and self-adjoint.

Remark 4.6 : Using the distribution of the measurable function $\langle x,\cdot\rangle$,
$$\mu_x(a)=\mu\left\{y\in\mathcal{H}:\langle x,y\rangle\le a\right\}\tag{5}$$
we have that
$$\langle C_\mu x,x\rangle=\int_{\mathcal{H}}\langle x,z\rangle^{2}\,\mu(dz)=\int_{-\infty}^{\infty}t^{2}\,\mu_x(dt),\qquad x\in\mathcal{H}\tag{6}$$

Example 4.7 : The covariance operator of $\delta_{x_0}$ is $C_{\delta_{x_0}}x=\langle x,x_0\rangle\,x_0$, since
$$\langle C_{\delta_{x_0}}x,y\rangle=\langle x,x_0\rangle\,\langle y,x_0\rangle=\left\langle\langle x,x_0\rangle\,x_0,\,y\right\rangle\tag{7}$$

Example 4.8 : The covariance operator of $\mu_e$ (Example 1.4) is


$$C_{\mu_e}x=\left(\int_{\mathbb{R}}t^{2}\,\mu(dt)\right)\langle x,\hat e\rangle\,\hat e\tag{8}$$

since,

$$\langle C_{\mu_e}x,y\rangle=\int_{\mathcal{H}}\langle x,z\rangle\,\langle y,z\rangle\,\mu_e(dz)=\int_{\mathcal{H}}\langle x,\hat e\rangle\,\langle y,\hat e\rangle\,\langle z,\hat e\rangle^{2}\,\mu_e(dz)=\langle x,\hat e\rangle\,\langle y,\hat e\rangle\int_{\mathbb{R}}t^{2}\,\mu(dt)$$

Remark 4.9 : Suppose that $\int_{\mathbb{R}}t^{2}\,\mu(dt)=1$. Then $C_{\mu_e}=C_{\delta_{\hat e}}$, even though $\mu_e$ and $\delta_{\hat e}$ are two different measures. Hence $\mu_e$ is not uniquely determined by its covariance operator. However, we will see later on that a Gaussian measure of mean zero is completely determined by its covariance operator.

The following theorem gives the necessary and sufficient condition for the covariance operator of a probability measure to belong to the trace class $\mathcal{S}$.

Theorem 4.10 : Let $\mu$ be a probability measure on $\mathcal{H}$ and $C_\mu$ the covariance operator of $\mu$. Then $\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx)<\infty$ if and only if $C_\mu\in\mathcal{S}$. In fact,
$$\operatorname{trace}\left[C_\mu\right]=\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx).$$

Proof : Sufficiency. Let $\{e_n\}$ be an orthonormal basis of $\mathcal{H}$. By the Monotone Convergence Theorem we have
$$\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx)=\lim_{n\to\infty}\int_{\mathcal{H}}\left[\langle x,e_1\rangle^{2}+\langle x,e_2\rangle^{2}+\cdots+\langle x,e_n\rangle^{2}\right]\mu(dx).$$

But

$$\int_{\mathcal{H}}\langle x,e_j\rangle^{2}\,\mu(dx)=\langle C_\mu e_j,e_j\rangle.$$

Hence,

$$\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx)=\lim_{n\to\infty}\sum_{j=1}^{n}\langle C_\mu e_j,e_j\rangle=\sum_{j=1}^{\infty}\langle C_\mu e_j,e_j\rangle=\operatorname{trace}\left[C_\mu\right]<\infty.$$

Necessity. First of all, we have to show the existence of the covariance operator. Clearly,

$$\left|\langle x,z\rangle\cdot\langle y,z\rangle\right|\le\|x\|\cdot\|y\|\cdot\|z\|^{2}.$$

Hence,


$$\int_{\mathcal{H}}\left|\langle x,z\rangle\cdot\langle y,z\rangle\right|\mu(dz)\le\|x\|\cdot\|y\|\int_{\mathcal{H}}\|z\|^{2}\,\mu(dz).$$

Therefore the bilinear form $(x,y)\mapsto\int_{\mathcal{H}}\langle x,z\rangle\,\langle y,z\rangle\,\mu(dz)$ is continuous. Hence there exists a bounded linear operator $C_\mu$ on $\mathcal{H}$ such that
$$\langle C_\mu x,y\rangle=\int_{\mathcal{H}}\langle x,z\rangle\cdot\langle y,z\rangle\,\mu(dz).$$

Obviously, $C_\mu$ is self-adjoint and positive definite. To show that $C_\mu\in\mathcal{S}$, it is sufficient to show that if $\{e_n\}$ is an orthonormal basis of $\mathcal{H}$ then the series $\sum_{j=1}^{\infty}\langle C_\mu e_j,e_j\rangle$ is convergent.

But

$$\sum_{j=1}^{\infty}\langle C_\mu e_j,e_j\rangle=\sum_{j=1}^{\infty}\int_{\mathcal{H}}\langle x,e_j\rangle^{2}\,\mu(dx)=\int_{\mathcal{H}}\sum_{j=1}^{\infty}\langle x,e_j\rangle^{2}\,\mu(dx)\quad\text{(Monotone Convergence Theorem)}=\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx).$$

Hence we have not only shown that $C_\mu\in\mathcal{S}$, but also proven that
$$\operatorname{trace}\left[C_\mu\right]=\int_{\mathcal{H}}\|x\|^{2}\,\mu(dx).$$

It is possible for a probability measure to have a well-defined covariance operator while the mean value cannot be defined. The next example illustrates this.

Example 4.11 : Let $\mu$ be a probability measure such that $\int_{-\infty}^{\infty}t^{2}\,\mu_x(dt)<\infty$ and $\int_{-\infty}^{\infty}|t|\,\mu_x(dt)=\infty$. Then $C_\mu$ exists, but $m_\mu$ does not exist.


3.5. Moments from the Characteristic Functional

In the theory of random variables we have seen that the characteristic function contains the full probabilistic information of a random variable; in particular, all the moments of the random variable can be obtained from the characteristic function. In this section we derive analogous results for the characteristic functional. Let $\mathcal{Y}$ be the characteristic functional of the probability measure $\mu$ defined on $\mathcal{H}$, and let $\mathcal{Y}\in\mathcal{V}^{2}$ (i.e. the space of twice differentiable functionals). Calculating the first Fréchet derivative we have

$$\delta\mathcal{Y}(x)\left[z\right]=\frac{d\,\mathcal{Y}(x+tz)}{dt}\bigg|_{t=0}=\frac{d}{dt}\left(\int_{\mathcal{H}}e^{i\langle x+tz,\,y\rangle}\,\mu(dy)\right)\bigg|_{t=0}=i\int_{\mathcal{H}}\langle z,y\rangle\,e^{i\langle x,y\rangle}\,\mu(dy)\tag{1}$$

Thus, we have,

$$\delta\mathcal{Y}(0)\left[z\right]=i\int_{\mathcal{H}}\langle z,y\rangle\,\mu(dy)=i\,\langle m_\mu,z\rangle\ \Longleftrightarrow\ \langle m_\mu,z\rangle=-i\,\delta\mathcal{Y}(0)\left[z\right],\qquad z\in\mathcal{H}\tag{2}$$

Taking the second Fréchet derivative we have,

$$\delta^{2}\mathcal{Y}(x)\left[z\right]\left[w\right]=\frac{d\,\delta\mathcal{Y}(x+tw)\left[z\right]}{dt}\bigg|_{t=0}=\frac{d}{dt}\left(i\int_{\mathcal{H}}\langle z,y\rangle\,e^{i\langle x+tw,\,y\rangle}\,\mu(dy)\right)\bigg|_{t=0}=-\int_{\mathcal{H}}\langle w,y\rangle\,\langle z,y\rangle\,e^{i\langle x,y\rangle}\,\mu(dy)\tag{3}$$

Thus, we have,

$$\delta^{2}\mathcal{Y}(0)\left[z\right]\left[w\right]=-\int_{\mathcal{H}}\langle w,y\rangle\,\langle z,y\rangle\,\mu(dy)=-\langle C_\mu z,w\rangle\ \Longleftrightarrow\ \langle C_\mu z,w\rangle=-\delta^{2}\mathcal{Y}(0)\left[z\right]\left[w\right],\qquad z,w\in\mathcal{H}\tag{4}$$

Similarly, we can obtain higher-order quantities, depending on the smoothness of the functional.
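The one-dimensional analogue of these formulas is that moments are derivatives of the characteristic function at the origin. The sketch below (an added illustration; the exponential distribution and step size are arbitrary assumptions) recovers the first two moments numerically:

```python
import numpy as np

# One-dimensional analogue of formulas (2) and (4):
#     m = -i * phi'(0),   E[X^2] = -phi''(0),
# computed by central finite differences on the exact characteristic function
# of an exponential distribution with mean 2, phi(u) = 1 / (1 - 2iu).

def phi(u: float) -> complex:
    return 1.0 / (1.0 - 2j * u)

h = 1e-4
d1 = (phi(h) - phi(-h)) / (2 * h)                # approximates phi'(0)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2    # approximates phi''(0)

mean = (-1j * d1).real        # -> 2.0, the mean of the distribution
second_moment = (-d2).real    # -> 8.0, since E[X^2] = 2 * mean^2 here

print(round(mean, 3), round(second_moment, 3))
```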

Another very useful property of the characteristic functional is the ability to obtain moments, in the usual sense, at specific values of the time or spatial variable of the stochastic process. Let $\mathcal{H}$ be the space $C^{1}(I,\mathbb{R}^{N})$ and $\langle\cdot,\cdot\rangle$ the corresponding inner product. Then, following the notation of Section 2.1.2/Chapter 2, we have
$$E^{\beta}\left[X(t;\beta)\right]=m_X(t)=\int_{-\infty}^{\infty}x\cdot P_{X(t)}(dx)$$

But from equation (2) we have

$$\langle m_\mu,z\rangle=\int_{\mathcal{H}}\langle z,y\rangle\,\mu(dy)=-i\,\delta\mathcal{Y}(0)\left[z\right],\qquad z\in\mathcal{H}$$

Using Remark 2.5 we have


$$\int_{-\infty}^{\infty}t\,\mu_z(dt)=-i\,\delta\mathcal{Y}(0)\left[z\right],\qquad z\in\mathcal{H}\tag{5}$$
where
$$\mu_z(a)=\mu\left\{y\in\mathcal{H}:\langle z,y\rangle\le a\right\}$$

Setting $z=\delta(t-t_0)$ we have
$$\mu_{\delta(\cdot-t_0)}(a)=\mu\left\{y\in\mathcal{H}:y(t_0)\le a\right\}=P_{X(t_0)}(a)\tag{6}$$

Thus,

$$E^{\beta}\left[X(t_0;\beta)\right]=-i\,\delta\mathcal{Y}(0)\left[\delta(t-t_0)\right]=-i\,\frac{\delta\mathcal{Y}(0)}{\delta x(t_0)}\tag{7}$$
where $\dfrac{\delta\mathcal{Y}(\cdot)}{\delta x(t)}$ denotes the Volterra derivative of the functional $\mathcal{Y}(\cdot)$ at the point $t$.

Remark 5.1 : Hence we see that the mean value element is of the same kind as the elements of the Hilbert space: for a space of functions, the mean value element is also a function, and the classical moments are integrals with respect to the measures defined by the probability of the values at a given time instant.

Example 5.2 : Consider, for example, the space of continuous functions defined on the time interval $[0,T]$ and bounded above and below by the functions $c_U(t)$ and $c_D(t)$, i.e. the space of functions
$$h=\left\{u\in C\left([0,T]\right):c_D(t)\le u(t)\le c_U(t)\ \ \forall t\in[0,T]\right\}$$
Then the mean value element $m(t)$ will be a function in the space $h$, as illustrated in the figure below. The mean value at a time instant $t_i$ will be the value of the mean element, $m(t_i)$, or equivalently the mean value of the distribution of the process values along the line $\varepsilon_i$.

For the general case we have

$$E^{\beta}\left[X^{n_1}(t_1;\beta)\cdot X^{n_2}(t_2;\beta)\cdots X^{n_M}(t_M;\beta)\right]=\int_{-\infty}^{\infty}x_1^{n_1}\,x_2^{n_2}\cdots x_M^{n_M}\;P_{X(t_1),\ldots,X(t_M)}(d\mathbf{x})$$

[Figure: sample paths $x(t)$ in the space $h$, bounded by $c_D(t)$ and $c_U(t)$ on $[0,T]$, together with the mean element $m(t)$ and the vertical lines $\varepsilon_1,\varepsilon_2,\varepsilon_3$ at the time instants $t_1,t_2,t_3$.]


where $P_{X(t_1),\ldots,X(t_M)}$ is the joint probability distribution for the time instants $t_1,t_2,\ldots,t_M$. Using Definition 2.2 we will have

$$\mu_{\delta(\cdot-t_1),\ldots,\delta(\cdot-t_M)}(\mathbf{a})=\mu\left\{y\in\mathcal{H}:y(t_1)\le a_1,\;y(t_2)\le a_2,\ldots,\;y(t_M)\le a_M\right\}=P_{X(t_1),\ldots,X(t_M)}(\mathbf{a}),\qquad\mathbf{a}=(a_1,a_2,\ldots,a_M)\in\mathbb{R}^{M}$$

Using the same arguments as before, we can show that

$$E^{\beta}\left[X^{n_1}(t_1;\beta)\cdot X^{n_2}(t_2;\beta)\cdots X^{n_M}(t_M;\beta)\right]=\frac{1}{i^{\,n_1+n_2+\cdots+n_M}}\;\delta^{\,n_1+\cdots+n_M}\mathcal{Y}(0)\Big[\underbrace{\delta(t-t_1),\ldots,\delta(t-t_1)}_{n_1\ \text{times}},\ldots,\underbrace{\delta(t-t_M),\ldots,\delta(t-t_M)}_{n_M\ \text{times}}\Big]=$$
$$=\frac{1}{i^{\,n_1+\cdots+n_M}}\;\frac{\delta^{\,n_1+\cdots+n_M}\mathcal{Y}(0)}{\delta x(t_1)^{n_1}\,\delta x(t_2)^{n_2}\cdots\delta x(t_M)^{n_M}}\tag{8}$$

Again we have two consistent interpretations of the moments $E^{\beta}\left[X^{n_1}(t_1;\beta)\cdots X^{n_M}(t_M;\beta)\right]$, as in Example 5.2. More specifically, the above moment can be interpreted either as the value of the corresponding operator at $t_1,t_2,\ldots,t_M$, or as the moment of the joint distribution of the values along the lines $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_M$. Another important property of the characterization of a stochastic process through the characteristic functional or the probability measure is the possibility of obtaining moments for the time derivatives of the stochastic process. From equation (5) we have

$$\int_{-\infty}^{\infty}t\,\mu_z(dt)=-i\,\delta\mathcal{Y}(0)\left[z\right],\qquad z\in\mathcal{H}$$

where

$$\mu_z(a)=\mu\left\{y\in\mathcal{H}:\langle z,y\rangle\le a\right\}$$

Setting $z=\delta'(t-t_0)$ we have
$$\mu_{\delta'(\cdot-t_0)}(a)=\mu\left\{y\in\mathcal{H}:y'(t_0)\le a\right\}=P_{X'(t_0)}(a)\tag{9}$$

Thus
$$E^{\beta}\left[X'(t_0;\beta)\right]=-i\,\delta\mathcal{Y}(0)\left[\delta'(t-t_0)\right]\tag{10}$$

Similarly, we obtain the general formula

$$i^{\,n_1+\cdots+n_M}\,E^{\beta}\left[\left(X^{(m_1)}(t_1;\beta)\right)^{n_1}\cdot\left(X^{(m_2)}(t_2;\beta)\right)^{n_2}\cdots\left(X^{(m_M)}(t_M;\beta)\right)^{n_M}\right]=$$
$$=\delta^{\,n_1+\cdots+n_M}\mathcal{Y}(0)\Big[\underbrace{\delta^{(m_1)}(t-t_1),\ldots,\delta^{(m_1)}(t-t_1)}_{n_1\ \text{times}},\ldots,\underbrace{\delta^{(m_M)}(t-t_M),\ldots,\delta^{(m_M)}(t-t_M)}_{n_M\ \text{times}}\Big]\tag{11}$$


3.6. Characteristic Functions from the Characteristic Functional

Another essential property of the characteristic functional is the ability to obtain characteristic functions, in the usual sense, at specific values of the time or spatial variable of the stochastic process. Even more important, however, is the possibility of calculating characteristic functions for the derivatives of the response.

Using the definition of Section 2.1.3/Chapter 2 for the characteristic function, we have
$$\phi_{X(t)}(\upsilon)\equiv E^{\beta}\left[e^{i\upsilon X(t;\beta)}\right]=\int_{-\infty}^{+\infty}e^{i\upsilon x}\,P_{X(t)}(dx)\tag{1}$$

For the characteristic functional of the probability measure we have, by definition,
$$\mathcal{Y}(x)=\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\mu(dy),\qquad x\in\mathcal{H}$$

Setting $x=\upsilon z$, where $\upsilon\in\mathbb{R}$, and using Remark 2.5 we have
$$\mathcal{Y}(\upsilon z)=\int_{\mathcal{H}}e^{i\upsilon\langle z,y\rangle}\,\mu(dy)=\int_{-\infty}^{\infty}e^{i\upsilon t}\,\mu_z(dt),\qquad z\in\mathcal{H}\tag{2}$$
where
$$\mu_z(a)=\mu\left\{y\in\mathcal{H}:\langle z,y\rangle\le a\right\},\qquad a\in\mathbb{R}$$

Setting $z=\delta(t-t_0)$ we have
$$\mu_{\delta(\cdot-t_0)}(a)=\mu\left\{y\in\mathcal{H}:y(t_0)\le a\right\}=P_{X(t_0)}(a),\qquad a\in\mathbb{R}\tag{3}$$

Thus,

$$\phi_{X(t_0)}(\upsilon)=\mathcal{Y}\left(\upsilon\,\delta(t-t_0)\right),\qquad\upsilon\in\mathbb{R}\tag{4}$$

For higher dimensions we have analogous formulas. More specifically, setting $x=\upsilon_1z_1+\cdots+\upsilon_Mz_M$ in the characteristic functional, where $\boldsymbol{\upsilon}=(\upsilon_1,\upsilon_2,\ldots,\upsilon_M)\in\mathbb{R}^{M}$ and $\mathbf{z}=(z_1,z_2,\ldots,z_M)\in\mathcal{H}^{M}$, we have

$$\mathcal{Y}\left(\upsilon_1z_1+\upsilon_2z_2+\cdots+\upsilon_Mz_M\right)=\int_{\mathcal{H}}e^{i\left(\upsilon_1\langle z_1,y\rangle+\upsilon_2\langle z_2,y\rangle+\cdots+\upsilon_M\langle z_M,y\rangle\right)}\,\mu(dy)=\int_{-\infty}^{\infty}e^{i\left(\upsilon_1t_1+\upsilon_2t_2+\cdots+\upsilon_Mt_M\right)}\,\mu_{z_1,z_2,\ldots,z_M}(d\mathbf{t})\tag{5}$$

where

$$\mu_{z_1,z_2,\ldots,z_M}(\mathbf{a})=\mu\left\{y\in\mathcal{H}:\langle z_1,y\rangle\le a_1,\;\langle z_2,y\rangle\le a_2,\ldots,\;\langle z_M,y\rangle\le a_M\right\}\tag{6}$$

Setting $z_i=\delta(t-t_i)$, $i=1,2,\ldots,M$, and using Definition 2.2 we have
$$\mu_{\delta(\cdot-t_1),\ldots,\delta(\cdot-t_M)}(\mathbf{a})=\mu\left\{y\in\mathcal{H}:y(t_1)\le a_1,\;y(t_2)\le a_2,\ldots,\;y(t_M)\le a_M\right\}=P_{X(t_1),\ldots,X(t_M)}(\mathbf{a}),\qquad\mathbf{a}=(a_1,a_2,\ldots,a_M)\in\mathbb{R}^{M}$$

Hence,
$$\phi_{X(t_1),\ldots,X(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta(t-t_1)+\cdots+\upsilon_M\,\delta(t-t_M)\right),\qquad\boldsymbol{\upsilon}=(\upsilon_1,\ldots,\upsilon_M)\in\mathbb{R}^{M}\tag{7}$$

Additionally, we can calculate characteristic functions for the time derivatives of the stochastic process. Setting $z_i=\delta'(t-t_i)$, $i=1,2,\ldots,M$, in equations (5), (6) and using Definition 2.2 we have
$$\mu_{\delta'(\cdot-t_1),\ldots,\delta'(\cdot-t_M)}(\mathbf{a})=\mu\left\{y\in\mathcal{H}:y'(t_1)\le a_1,\ldots,\;y'(t_M)\le a_M\right\}=P_{X'(t_1),\ldots,X'(t_M)}(\mathbf{a}),\qquad\mathbf{a}\in\mathbb{R}^{M}$$

Hence,

$$\phi_{X'(t_1),\ldots,X'(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta'(t-t_1)+\cdots+\upsilon_M\,\delta'(t-t_M)\right),\qquad\boldsymbol{\upsilon}=(\upsilon_1,\ldots,\upsilon_M)\in\mathbb{R}^{M}\tag{8}$$

Finally, we can prove the general formula in the same way as before. From Definition 2.2 and equation (5), for $z_i=\delta^{(n_i)}(t-t_i)$ we have
$$\mu_{\delta^{(n_1)}(\cdot-t_1),\ldots,\delta^{(n_M)}(\cdot-t_M)}(\mathbf{a})=\mu\left\{y\in\mathcal{H}:y^{(n_1)}(t_1)\le a_1,\ldots,\;y^{(n_M)}(t_M)\le a_M\right\}=P_{X^{(n_1)}(t_1),\ldots,X^{(n_M)}(t_M)}(\mathbf{a}),\qquad\mathbf{a}\in\mathbb{R}^{M}$$

Hence we get the equality,

$$\phi_{X^{(n_1)}(t_1),\ldots,X^{(n_M)}(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta^{(n_1)}(t-t_1)+\cdots+\upsilon_M\,\delta^{(n_M)}(t-t_M)\right),\qquad\boldsymbol{\upsilon}\in\mathbb{R}^{M}\tag{9}$$

Thus we have general expressions for calculating, from the characteristic functional, the characteristic functions of the values of the stochastic process as well as of its derivatives.

Remark 6.1 : In applications it is very useful to work at the level of the characteristic functional. The above results then give a method to interpret or visualize the resulting characteristic functional: take the characteristic functions on finite dimensional subspaces and then calculate the Fourier transform to obtain the corresponding probability density functions.
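The last step of Remark 6.1 can be sketched numerically: given a characteristic function on a one-dimensional subspace, Fourier inversion $p(x)=\frac{1}{2\pi}\int e^{-i\upsilon x}\phi(\upsilon)\,d\upsilon$ recovers the density. The example below (an added illustration; the choice of a standard normal and the quadrature grid are assumptions) performs this inversion by direct quadrature:

```python
import numpy as np

# Fourier inversion of a characteristic function to a probability density:
#     p(x) = (1/2π) ∫ exp(-i υ x) phi(υ) dυ,
# for phi(υ) = exp(-υ²/2), whose density is the standard normal.

u = np.linspace(-40.0, 40.0, 20_001)          # quadrature grid in υ
du = u[1] - u[0]
phi = np.exp(-0.5 * u**2)

x = 1.0
p_x = (np.exp(-1j * u * x) * phi).sum().real * du / (2 * np.pi)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

print(abs(p_x - exact) < 1e-6)                # True: quadrature matches the N(0,1) pdf
```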


3.7. Gaussian Measure

Gaussian measures play a very important role in the theory of probability measures in infinite dimensional spaces. Additionally, as we will see in Chapter 5, they are an essential tool for the representation of the responses of stochastic dynamical systems. Their practical importance relies basically on the fact that we have analytical expressions for the infinite dimensional integrals of cylinder functions with respect to Gaussian measures. Another important feature is the localization of the probability mass in the infinite dimensional space of functions; in other words, Gaussian measures have the property of kernel characteristic functionals, which will be discussed extensively in Chapter 5/Section 5.4. The latter property, in combination with the analytical computation of integrals, gives a very promising tool for the numerical solution of functional differential equations, discussed in Chapter 5. Hence it is essential to study Gaussian measures on $\mathcal{H}$ extensively. We start with the following

Definition 7.1 : A Gaussian measure $\mu_G$ on $\mathcal{H}$ is a probability measure on $\mathcal{H}$ such that for each $x\in\mathcal{H}$ the measurable function $\langle x,\cdot\rangle$ is normally distributed, i.e. there exist real numbers $m_x$ and $\sigma_x$ such that
$$\mu_{G,x}(a)=\mu_G\left\{y\in\mathcal{H}:\langle x,y\rangle\le a\right\}=\frac{1}{\sqrt{2\pi}\,\sigma_x}\int_{-\infty}^{a}e^{-\frac{(t-m_x)^{2}}{2\sigma_x^{2}}}\,dt\tag{1}$$

We will now compute the mean and the covariance operator of a Gaussian measure. We have
$$\langle m_\mu,x\rangle=\int_{\mathcal{H}}\langle z,x\rangle\,\mu_G(dz),\qquad x\in\mathcal{H}$$

Using Remark 2.5, the above integral turns out to be
$$\langle m_\mu,x\rangle=\int_{-\infty}^{\infty}t\,\mu_x(dt)=\frac{1}{\sqrt{2\pi}\,\sigma_x}\int_{-\infty}^{\infty}t\cdot e^{-\frac{(t-m_x)^{2}}{2\sigma_x^{2}}}\,dt=m_x\tag{2}$$

For the covariance operator we will have
$$\langle C_\mu x,x\rangle=\int_{\mathcal{H}}\langle x,z\rangle^{2}\,\mu_G(dz),\qquad x\in\mathcal{H}$$

Using Remark 2.5, we will have
$$\langle C_\mu x,x\rangle=\int_{-\infty}^{\infty}t^{2}\,\mu_x(dt)=\frac{1}{\sqrt{2\pi}\,\sigma_x}\int_{-\infty}^{\infty}t^{2}\cdot e^{-\frac{(t-m_x)^{2}}{2\sigma_x^{2}}}\,dt=\sigma_x^{2}\tag{3}$$

Concerning the characteristic functional of a Gaussian measure $\mu_G$ we have the following

Lemma 7.2 : Let $\mu_G$ be a Gaussian measure on $\mathcal{H}$. Then its characteristic functional is given by
$$\mathcal{Y}_G(x)=e^{\,i\langle m_\mu,x\rangle-\frac{1}{2}\langle C_\mu x,x\rangle}\tag{4}$$


where $m_\mu$ is the mean and $C_\mu$ the covariance operator of $\mu_G$.

Proof : Using Remark 2.5 we have
$$\mathcal{Y}_G(x)=\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\mu_G(dy)=\int_{-\infty}^{\infty}e^{it}\,\mu_x(dt)$$
where $\mu_x(a)=\mu_G\left\{y\in\mathcal{H}:\langle x,y\rangle\le a\right\}$ is the distribution of $\langle x,\cdot\rangle$. Hence,

$$\mathcal{Y}_G(x)=\frac{1}{\sqrt{2\pi}\,\sigma_x}\int_{-\infty}^{\infty}e^{it}\,e^{-\frac{(t-m_x)^{2}}{2\sigma_x^{2}}}\,dt$$

Making a change of variables and using contour integration we conclude that
$$\mathcal{Y}_G(x)=e^{\,im_x-\frac{\sigma_x^{2}}{2}}.$$

But from (2)–(3) we have $m_x=\langle m_\mu,x\rangle$ and $\sigma_x^{2}=\langle C_\mu x,x\rangle$. Hence
$$\mathcal{Y}_G(x)=e^{\,i\langle m_\mu,x\rangle-\frac{1}{2}\langle C_\mu x,x\rangle}$$

Remark 7.3 : Since a probability measure is uniquely defined by its characteristic functional, in view of the above lemma a Gaussian measure is uniquely defined by its mean and covariance operator.
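The scalar identity at the heart of the proof of Lemma 7.2 can be checked directly by simulation (an added sketch; the mean, standard deviation and sample size are arbitrary assumptions):

```python
import numpy as np

# Monte Carlo check of Lemma 7.2 in one dimension: if <x,·> is N(m_x, sigma_x^2),
# then ∫ exp(it) mu_x(dt) = exp(i m_x - sigma_x^2 / 2).

rng = np.random.default_rng(3)
m_x, sigma_x = 0.7, 1.3

t = m_x + sigma_x * rng.standard_normal(1_000_000)
lhs = np.exp(1j * t).mean()
rhs = np.exp(1j * m_x - sigma_x**2 / 2)

print(abs(lhs - rhs) < 1e-2)   # True up to Monte Carlo error
```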

For the characterization of a Gaussian measure we have the following

Theorem 7.4 [Prohorov] : Let $x_0\in\mathcal{H}$ and $C_\mu\in\mathcal{S}$. Then
$$\mathcal{Y}(x)=e^{\,i\langle x_0,x\rangle-\frac{1}{2}\langle C_\mu x,x\rangle}$$
is the characteristic functional of a Gaussian measure on $\mathcal{H}$.

Proof : Clearly $\mathcal{Y}(0)=1$ and $\mathcal{Y}$ is positive definite. Consider first the case $x_0=0$. Then
$$1-\operatorname{Re}\mathcal{Y}(x)=1-e^{-\frac{1}{2}\langle C_\mu x,x\rangle}\le\frac{1}{2}\langle C_\mu x,x\rangle,$$
because $1-e^{-y}\le y$ for all $y\ge0$.

Since $\frac{1}{2}C_\mu\in\mathcal{S}$, part (b) of the Minlos–Sazonov Theorem (Theorem 3.7) is trivially satisfied. Therefore, by the conclusion of that theorem, there exists a Borel measure $\mu$ such that
$$\mathcal{Y}(x)=\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\mu(dy),\qquad x\in\mathcal{H}$$

That is,
$$\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\mu(dy)=e^{-\frac{1}{2}\langle C_\mu x,x\rangle},\qquad x\in\mathcal{H}$$


Let $\mu_x(a)=\mu\left\{y\in\mathcal{H}:\langle x,y\rangle\le a\right\}$ be the distribution of $\langle x,\cdot\rangle$. Then using Remark 2.4 we have
$$\int_{-\infty}^{\infty}e^{it}\,\mu_x(dt)=e^{-\frac{1}{2}\langle C_\mu x,x\rangle}$$

Hence $\mu_x$ is a normal distribution with mean $0$ and variance $\langle C_\mu x,x\rangle$; that is, $\mu$ is a Gaussian measure on $\mathcal{H}$. Consider now the case $x_0\ne0$. Let
$$\tilde{\mathcal{Y}}(x)=e^{-\frac{1}{2}\langle C_\mu x,x\rangle}$$
Then
$$\mathcal{Y}(x)=e^{\,i\langle x_0,x\rangle}\,\tilde{\mathcal{Y}}(x)$$
By the case already proved, there exists a Gaussian measure $\tilde\mu$ in $\mathcal{H}$ such that
$$\tilde{\mathcal{Y}}(x)=\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\tilde\mu(dy),\qquad x\in\mathcal{H}$$
Define a Borel measure $\mu$ in $\mathcal{H}$ as follows:
$$\mu(E)=\tilde\mu(E-x_0),\qquad E\in\mathcal{U}(\mathcal{H})$$
It is easy to see that $\mu$ is a Gaussian measure and
$$\int_{\mathcal{H}}e^{i\langle x,y\rangle}\,\mu(dy)=e^{\,i\langle x_0,x\rangle}\,\tilde{\mathcal{Y}}(x)=\mathcal{Y}(x)$$

Finally, as mentioned at the beginning, the analytical integration of cylinder functions with respect to Gaussian measures can be carried out. Hence for Gaussian measures we have
$$\int_{\mathcal{H}}g\left(\langle x_1,y\rangle,\ldots,\langle x_n,y\rangle\right)\mu(dy)=\int_{\mathbb{R}^{n}}g\left(t_1,\ldots,t_n\right)\mu_{\mathbf{x}}(dt_1,\ldots,dt_n)=$$
$$=(2\pi)^{-n/2}\left(\det C\right)^{-1/2}\int_{\mathbb{R}^{n}}g(\mathbf{u})\exp\left\{-\frac{1}{2}\left[\mathbf{u}-\mathbf{m}\right]^{T}C^{-1}\left[\mathbf{u}-\mathbf{m}\right]\right\}d\mathbf{u},$$
where $C^{-1}$ is the inverse of the matrix $\left[C_{ij}\right]=\langle Cx_i,x_j\rangle$, $i,j=1,\ldots,n$, and $\mathbf{m}=\left(\langle m,x_1\rangle,\ldots,\langle m,x_n\rangle\right)$; here $\langle m,\cdot\rangle$ is the mean value of the measure and $\langle C\cdot,\cdot\rangle$ the correlation functional of the measure. In Chapter 5/Section 5.1 we will give the precise formulation as well as the proof of the above result.
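The reduction of a cylinder-function integral to a finite-dimensional Gaussian integral can be demonstrated in a finite-dimensional "Hilbert space" (an added sketch; the dimension, vectors and the choice $g(u_1,u_2)=u_1u_2$, for which the reduced integral has a closed form, are assumptions):

```python
import numpy as np

# Sketch of the cylinder-function formula above: the Gaussian integral of
# g(<x1,y>, <x2,y>) reduces to a 2-dimensional Gaussian integral with mean
# m_i = <m, x_i> and covariance C_ij = <C x_i, x_j>. Hilbert space taken as R^4.

rng = np.random.default_rng(4)
d = 4
m = np.array([0.5, 0.0, -0.2, 0.1])                  # mean element
L = rng.standard_normal((d, d)) * 0.3
Cov = L @ L.T + 0.5 * np.eye(d)                      # covariance operator
x1 = np.array([1.0, 0.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0, -1.0])

Y = rng.multivariate_normal(m, Cov, size=400_000)
g_mc = ((Y @ x1) * (Y @ x2)).mean()                  # ∫ g(<x1,y>,<x2,y>) μ(dy)

# Reduced 2-D Gaussian integral, in closed form for g(u1,u2) = u1 * u2:
C12 = x1 @ Cov @ x2                                  # C_12 = <C x1, x2>
g_exact = C12 + (m @ x1) * (m @ x2)

print(abs(g_mc - g_exact) < 0.05)   # True up to Monte Carlo error
```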


3.8. Tatarskii Characteristic Functional

As we saw in the previous section, the characteristic functional of a Gaussian measure can be represented by an explicit formula. There exist few other cases for which it is possible to present such formulas, but the problem of extending the class of random functions for which the characteristic functional can be written down explicitly is important for many applications. If we possess an explicit formula for the characteristic functional that contains several arbitrary functions, it is possible to use this model for the approximate solution of various statistical problems. In this section we present a characteristic functional representation first introduced by V.I. Tatarskii (1995). This general expression creates a wide class of characteristic functionals. The representation is based on the construction of a random function, which we now describe. Consider a random function of the vector argument $\mathbf{r}\in\mathbb{R}^{n}$ of the form

$$X(\mathbf{r};\beta)=\begin{cases}0, & \text{with probability } p_0\\[4pt]\xi_1\,g(\mathbf{r},\mathbf{r}_1), & \text{with probability } p_1\\[4pt]\quad\vdots & \\[4pt]\displaystyle\sum_{k=1}^{N}\xi_k\,g(\mathbf{r},\mathbf{r}_k), & \text{with probability } p_N\\[4pt]\quad\vdots & \end{cases}\tag{1}$$

where the numbers $p_N$ obey the conditions
$$p_N\ge0,\qquad\sum_{N=0}^{\infty}p_N=1\tag{2}$$

We assume that the characteristic function
$$\chi(\upsilon)=\sum_{N=0}^{\infty}p_N\exp\left(i\upsilon N\right)\tag{3}$$

is known. The same information is available from the function

$$\Psi(A)\equiv\left\langle A^{N}\right\rangle=\sum_{N=0}^{\infty}p_N\,A^{N}=\chi\left(-i\log A\right)\tag{4}$$

The function $\Psi(A)$, the generating function of the factorial moments, is related to the moments of the random variable $N$ by the formula
$$\frac{d^{n}\Psi(A)}{dA^{n}}=\left\langle N(N-1)(N-2)\cdots(N-n+1)\,A^{N-n}\right\rangle\tag{5}$$

Thus,

$$\frac{d^{n}\Psi(A)}{dA^{n}}\bigg|_{A=1}=\left\langle N(N-1)(N-2)\cdots(N-n+1)\right\rangle\equiv F_{n}\tag{6}$$


The numbers $F_n$ are called the factorial moments of the random variable $N$. The amplitudes $\xi_k$ are assumed to be statistically independent, with the same probability distribution and a characteristic function independent of $k$:
$$\varphi(\upsilon)=\left\langle\exp\left(i\upsilon\,\xi_k\right)\right\rangle\tag{7}$$

We assume that all $\mathbf{r}_k$ are statistically independent and have the same probability density function $W(\mathbf{r}_k)$. Let us consider the characteristic functional of $X(\mathbf{r};\beta)$. Substituting (1) into the definition of the characteristic functional, we obtain

Substituting (1) into (2) we obtain

( ) ( ) ( )0

,expN

kkk

gx i x dξ=

⎡ ⎤⎢ ⎥= ⎢ ⎥⎣ ⎦∑ ∫Y r r r r (8)

Averaging over $\xi$ and using (7), we obtain
$$\mathcal{Y}(x)=\left\langle\prod_{k=1}^{N}\varphi\left(\int g(\mathbf{r},\mathbf{r}_k)\,x(\mathbf{r})\,d\mathbf{r}\right)\right\rangle\tag{9}$$

Here, $N$ and the $\mathbf{r}_k$ remain random. Because all $\mathbf{r}_k$ are statistically independent and have the same probability density function $W(\mathbf{r}_k)$, averaging over the $\mathbf{r}_k$ results in
$$\mathcal{Y}(x)=\left\langle\prod_{k=1}^{N}\int d\mathbf{r}_k\,W(\mathbf{r}_k)\,\varphi\left(\int g(\mathbf{r},\mathbf{r}_k)\,x(\mathbf{r})\,d\mathbf{r}\right)\right\rangle\tag{10}$$

All factors in this product differ only by the notation of the integration variable and are equal to each other. Thus,
$$\mathcal{Y}(x)=\left\langle\left[\int d\mathbf{r}'\,W(\mathbf{r}')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r}\right)\right]^{N}\right\rangle\tag{11}$$

To carry out the averaging with respect to $N$, we can use (4) with the value of $A$ determined by the expression
$$A=\int d\mathbf{r}'\,W(\mathbf{r}')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r}\right)\tag{12}$$
Substituting (12) into (11) we obtain
$$\mathcal{Y}(x)=\Psi\left(\int d\mathbf{r}'\,W(\mathbf{r}')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r}\right)\right)\tag{13}$$

This leads us to the following

Theorem 8.1 : Let

a) $W:\mathbb{R}^{n}\to\mathbb{R}_{+}$ be an arbitrary non-negative function normalized to $1$;

b) $\varphi(\upsilon)$ be an arbitrary characteristic function;

c) $g(\cdot,\cdot):\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}$ be a completely arbitrary function; and

d) $\Psi(A)$ be an arbitrary generating function of factorial moments, i.e. of the form $\Psi(A)=\chi\left(-i\log A\right)$, where $\chi$ is an arbitrary characteristic function.

Then the functional


$$\mathcal{Y}(x)=\Psi\left(\int d\mathbf{r}'\,W(\mathbf{r}')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r}\right)\right)$$

is a characteristic functional.

Remark 8.2 : Although the proof of the above theorem was carried out for a spatial variable $\mathbf{r}\in\mathbb{R}^{n}$, the same results hold when $\mathbf{r}$ is replaced by a time variable $t$.

For the calculations that follow we will assume that the functions are defined on the time interval $I=[0,T]$.

3.8.1. Mean and Correlation Operators

As computed in Section 4 of the present chapter, the mean value operator of a probability measure is
$$\langle m_\mu,z\rangle=-i\,\delta\mathcal{Y}(0)\left[z\right],\qquad z\in\mathcal{H}$$

By direct calculation we find that the mean operator has the form
$$\langle m_\mu,z\rangle=-i\,\Psi'(1)\cdot\varphi'(0)\int\!\!\int W(t')\,g(t,t')\,z(t)\,dt\,dt',\qquad z\in\mathcal{H}\tag{14}$$
Using the above result and equation (7) we will have that
$$E^{\beta}\left[X(t_0;\beta)\right]=-i\,\Psi'(1)\cdot\varphi'(0)\int W(t')\,g(t_0,t')\,dt'\tag{15}$$
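The mean formula (15) can be checked by direct simulation of the random function (1) (an added sketch; the Poisson counting law, Gaussian amplitudes, uniform positions and the specific kernel are all illustrative assumptions):

```python
import numpy as np

# Monte Carlo check of the mean formula (15):
#     E[X(t0)] = -i * Psi'(1) * phi'(0) * ∫ W(t') g(t0,t') dt'.
# Assumed here: N ~ Poisson(lam), so Psi(A) = exp(lam*(A-1)) and Psi'(1) = lam;
# xi_k ~ N(m_xi, 1), so phi'(0) = i*m_xi; t'_k uniform on [0,1], so W = 1.

rng = np.random.default_rng(6)
lam, m_xi, t0 = 2.0, 0.5, 0.3
g = lambda t, tp: 1.0 + 0.5 * np.sin(2 * np.pi * (t - tp))

N = rng.poisson(lam, 100_000)
x_t0 = np.array([np.sum((m_xi + rng.standard_normal(n)) * g(t0, rng.uniform(0, 1, n)))
                 for n in N])                      # samples of X(t0; beta)

tp = np.linspace(0, 1, 10_001)
mean_formula = lam * m_xi * np.mean(g(t0, tp))     # -i*Psi'(1)*phi'(0) * ∫ W g dt'

print(abs(x_t0.mean() - mean_formula) < 0.03)      # True up to MC error
```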

Concerning the covariance operator, we have proved in Section 4 that
$$\langle C_\mu z,w\rangle=-\delta^{2}\mathcal{Y}(0)\left[z\right]\left[w\right],\qquad z,w\in\mathcal{H}$$

So, by calculating the second Fréchet derivative, we will have
$$\delta^{2}\mathcal{Y}(0)\left[z_1\right]\left[z_2\right]=\Psi''(1)\cdot\left[\varphi'(0)\right]^{2}\prod_{k=1}^{2}\left(\int\!\!\int W(t')\,g(t,t')\,z_k(t)\,dt\,dt'\right)+\Psi'(1)\cdot\varphi''(0)\cdot\int W(t')\left(\prod_{k=1}^{2}\int g(t,t')\,z_k(t)\,dt\right)dt'\tag{16}$$

3.8.2. Characteristic Functions

As computed in Section 6, the joint characteristic function of a probability measure at the time instants $t_1,t_2,\ldots,t_M$ is
$$\phi_{X(t_1),\ldots,X(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta(t-t_1)+\cdots+\upsilon_M\,\delta(t-t_M)\right),\qquad\boldsymbol{\upsilon}=(\upsilon_1,\ldots,\upsilon_M)\in\mathbb{R}^{M}$$

By direct calculation we find that the joint characteristic function has the form
$$\phi_{X(t_1),\ldots,X(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta(t-t_1)+\cdots+\upsilon_M\,\delta(t-t_M)\right)=\Psi\left(\int W(t')\,\varphi\left(\upsilon_1\,g(t_1,t')+\cdots+\upsilon_M\,g(t_M,t')\right)dt'\right),\qquad\boldsymbol{\upsilon}\in\mathbb{R}^{M}\tag{17}$$


For characteristic functions concerning the time derivatives of the stochastic process, using equation (9) we have
$$\phi_{X^{(n_1)}(t_1),\ldots,X^{(n_M)}(t_M)}(\boldsymbol{\upsilon})=\mathcal{Y}\left(\upsilon_1\,\delta^{(n_1)}(t-t_1)+\cdots+\upsilon_M\,\delta^{(n_M)}(t-t_M)\right)=\Psi\left(\int W(t')\,\varphi\left(\upsilon_1\,\partial_{t_1}^{n_1}g(t_1,t')+\cdots+\upsilon_M\,\partial_{t_M}^{n_M}g(t_M,t')\right)dt'\right),\qquad\boldsymbol{\upsilon}\in\mathbb{R}^{M}\tag{18}$$

3.8.3. The 2D Tatarskii Representation of a Characteristic Functional

For applications of characteristic functionals connected with stochastic differential equations, representations of vector-valued stochastic processes are usually needed. In this section we generalize the Tatarskii representation to the two dimensional case; based on the same arguments, the generalization can easily be extended to the $n$-dimensional case. We consider the random function of the $n$-dimensional vector argument $\mathbf{r}$ of the form

$$\mathbf{X}(\mathbf{r};\beta)=\begin{cases}\left(0,\,0\right), & \text{with probability } p_0\\[4pt]\left(\xi_1\,g(\mathbf{r},\mathbf{r}_1'),\ \zeta_1\,h(\mathbf{r},\mathbf{r}_1'')\right), & \text{with probability } p_1\\[4pt]\quad\vdots & \\[4pt]\left(\displaystyle\sum_{k=1}^{N}\xi_k\,g(\mathbf{r},\mathbf{r}_k'),\ \displaystyle\sum_{k=1}^{N}\zeta_k\,h(\mathbf{r},\mathbf{r}_k'')\right), & \text{with probability } p_N\\[4pt]\quad\vdots & \end{cases}\tag{19}$$

(There is a chance to generalize further by assuming a different $N$ and $M$ for each component, but things get very complicated; for our purpose the above generalization is the most suitable.) The numbers $p_N$ obey the conditions

$$p_N\ge0,\qquad\sum_{N=0}^{\infty}p_N=1\tag{20}$$

We assume that the characteristic function
$$\chi(\upsilon)=\sum_{N=0}^{\infty}p_N\exp\left(i\upsilon N\right)\tag{21}$$

is known. The same information is available from the function

$$\Psi(A)\equiv\left\langle A^{N}\right\rangle=\sum_{N=0}^{\infty}p_N\,A^{N}\tag{22}$$

The random variables $(\xi_k,\zeta_k)$ are assumed to be statistically independent, with a probability distribution and characteristic function independent of $k$:
$$\varphi(\upsilon,\nu)=\left\langle\exp\left[i\left(\upsilon\,\xi_k+\nu\,\zeta_k\right)\right]\right\rangle\tag{23}$$


We assume that all $(\mathbf{r}_k',\mathbf{r}_k'')$ are statistically independent and have the same probability density function $W(\mathbf{r}_k',\mathbf{r}_k'')$. Let us consider the characteristic functional of $\mathbf{X}(\mathbf{r};\beta)$. Substituting (19) into the definition of the characteristic functional, we obtain
$$\mathcal{Y}(x,y)=\left\langle\exp\left[i\sum_{k=1}^{N}\xi_k\int g(\mathbf{r},\mathbf{r}_k')\,x(\mathbf{r})\,d\mathbf{r}+i\sum_{k=1}^{N}\zeta_k\int h(\mathbf{r},\mathbf{r}_k'')\,y(\mathbf{r})\,d\mathbf{r}\right]\right\rangle\tag{24}$$

Averaging over $\xi,\zeta$ and using (23), we obtain
$$\mathcal{Y}(x,y)=\left\langle\prod_{k=1}^{N}\varphi\left(\int g(\mathbf{r},\mathbf{r}_k')\,x(\mathbf{r})\,d\mathbf{r},\ \int h(\mathbf{r},\mathbf{r}_k'')\,y(\mathbf{r})\,d\mathbf{r}\right)\right\rangle\tag{25}$$

Here, $N$ and the pairs $(\mathbf{r}_k',\mathbf{r}_k'')$ remain random. Because all $(\mathbf{r}_k',\mathbf{r}_k'')$ are statistically independent and have the same probability density function $W(\mathbf{r}_k',\mathbf{r}_k'')$, averaging over $\mathbf{r}_k'$ and $\mathbf{r}_k''$ results in
$$\mathcal{Y}(x,y)=\left\langle\prod_{k=1}^{N}\int\!\!\int W(\mathbf{r}_k',\mathbf{r}_k'')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}_k')\,x(\mathbf{r})\,d\mathbf{r},\ \int h(\mathbf{r},\mathbf{r}_k'')\,y(\mathbf{r})\,d\mathbf{r}\right)d\mathbf{r}_k'\,d\mathbf{r}_k''\right\rangle\tag{26}$$

All factors in this product differ only by the notation of the integration variables and are equal to each other. Thus,
$$\mathcal{Y}(x,y)=\left\langle\left[\int\!\!\int W(\mathbf{r}',\mathbf{r}'')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r},\ \int h(\mathbf{r},\mathbf{r}'')\,y(\mathbf{r})\,d\mathbf{r}\right)d\mathbf{r}'\,d\mathbf{r}''\right]^{N}\right\rangle\tag{27}$$

To carry out the averaging with respect to $N$, we can use (22) with the value of $A$ determined by the expression
$$A=\int\!\!\int W(\mathbf{r}',\mathbf{r}'')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r},\ \int h(\mathbf{r},\mathbf{r}'')\,y(\mathbf{r})\,d\mathbf{r}\right)d\mathbf{r}'\,d\mathbf{r}''$$
Substituting, we obtain
$$\mathcal{Y}(x,y)=\Psi\left(\int\!\!\int W(\mathbf{r}',\mathbf{r}'')\,\varphi\left(\int g(\mathbf{r},\mathbf{r}')\,x(\mathbf{r})\,d\mathbf{r},\ \int h(\mathbf{r},\mathbf{r}'')\,y(\mathbf{r})\,d\mathbf{r}\right)d\mathbf{r}'\,d\mathbf{r}''\right)\tag{28}$$

This formula represents the characteristic functional for a random function of the type (19). Analogous generalizations to the $n$-dimensional case can be carried out, leading to results of the form
$$\mathcal{Y}(\mathbf{x})=\Psi\left(\int\cdots\int W(\mathbf{r}_1,\ldots,\mathbf{r}_N)\,\varphi\left(\int g_1(\mathbf{r},\mathbf{r}_1)\,x_1(\mathbf{r})\,d\mathbf{r},\ \ldots,\ \int g_N(\mathbf{r},\mathbf{r}_N)\,x_N(\mathbf{r})\,d\mathbf{r}\right)d\mathbf{r}_1\cdots d\mathbf{r}_N\right)$$
with $\mathbf{x}\in\mathcal{H}\times\cdots\times\mathcal{H}$.


3.9. REDUCTION OF THE TATARSKII CHARACTERISTIC FUNCTIONAL 85

3.9. Characteristic Functionals derived from Tatarskii Functional

As we saw in the previous section, the Tatarskii characteristic functional represents a wide class of stochastic processes. In what follows we study how this general representation can be reduced to characteristic functionals corresponding to measures that are useful in applications. These measures will be used in Chapter 5/Section 4 for the representation of the responses of stochastic dynamical systems. Such specific representations are valuable for this purpose, since Gaussian measures are too restrictive, while the full Tatarskii characteristic functional is impractical for calculations, as it involves many unknown functions. The analysis presented below follows EGOROV, A.D. & SOBOLEVSKY, P.I. & YANOVICH, L.A. (Functional Integrals: Approximate Evaluation and Applications). Consider the special form of the Tatarskii characteristic functional with

$$\Psi(A)=\exp A \tag{1}$$

$$\varphi(x)=\sum_{n=1}^{\infty}\frac{i^{n}\sigma_{n}}{n!}\,x^{n} \tag{2}$$

and the function $g(\mathbf r,\mathbf r')$ depends only on the first argument $\mathbf r$, i.e.

$$g(\mathbf r,\mathbf r')=l(\mathbf r) \tag{3}$$

Based on the above assumptions, the characteristic functional takes the form

$$\mathcal Y(x)=\exp\left\{\sum_{n=1}^{\infty}\frac{i^{n}\sigma_{n}}{n!}\,K_{n}(lx,\dots,lx)\right\} \tag{4}$$

where $K_{n}(l_{1}x_{1},\dots,l_{n}x_{n})$ is a real symmetric continuous $n$-linear form on $H\times\cdots\times H$ ($n$ times) and the $\sigma_{n}$ ($n=1,2,\dots$) are real or complex parameters. Some $\sigma_{n}$ may be equal to zero here, and the number of summands may be finite. Let us now consider some more restricted classes of functionals of form (4). Let $U$ be a set with a finite measure $\nu$ defined on it, let $a\in H$, let $\rho(u)$ be a mapping from $U$ into $H$, and let the parameters $\sigma_{n}$ be real. Then we may define

$$\mathcal Y(x)=\exp\left\{i\,(a,x)+\sum_{n=1}^{\infty}\frac{i^{n}\sigma_{n}}{n!}\int_{U}(\rho(u),x)^{n}\,\nu(du)\right\} \tag{5}$$

A possible example of such a functional is

$$\mathcal Y(x)=\exp\left\{i\,(a,x)+\int_{U}g\big((\rho(u),x)\big)\,\nu(du)\right\} \tag{6}$$

where $g(z)$ is a function which is analytic in the vicinity of zero ((6) reduces to (5) if we set $\sigma_{n}=i^{-n}\,\dfrac{d^{n}g(z)}{dz^{n}}\Big|_{z=0}$, $\sigma_{0}=\sigma_{1}=0$). The following two-parameter family of functionals



$$\mathcal Y(x)=\exp\left\{i\,(a,x)+\frac{1}{\alpha-\beta}\int_{U}\left[\frac{1}{\alpha}\left(e^{\,i\alpha(\rho(u),x)}-1\right)-\frac{1}{\beta}\left(e^{\,i\beta(\rho(u),x)}-1\right)\right]\nu(du)\right\} \tag{7}$$

where $\alpha,\beta$ are real parameters, $\alpha\ge\beta$, is a special case of (5). If $\beta\to 0$, we obtain the functional

$$\mathcal Y(x)=\exp\left\{i\,(a,x)+\frac{1}{\alpha^{2}}\int_{U}\left[e^{\,i\alpha(\rho(u),x)}-1-i\alpha\,(\rho(u),x)\right]\nu(du)\right\} \tag{8}$$

which corresponds to the Poisson measure. If $\alpha\to 0$ and $\beta\to 0$, then we obtain the functional

$$\mathcal Y(x)=\exp\left\{i\,(a,x)-\frac{1}{2}\int_{U}(\rho(u),x)^{2}\,\nu(du)\right\} \tag{9}$$

which corresponds to the Gaussian measure. As we saw in previous sections, the characteristic functional of a Gaussian measure has the form

$$\mathcal Y(x)=\exp\left\{i\,(m,x)-\frac{1}{2}K(x,x)\right\} \tag{10}$$

where $K(x,x)$ is a positive definite bilinear form. One more example of a functional of type (6) is

$$\mathcal Y(x)=\exp\left\{i\,(a,x)-\int_{U}\left[\ln\!\big(1-i\sigma(\rho(u),x)\big)+i\sigma\,(\rho(u),x)\right]\nu(du)\right\} \tag{11}$$
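The Gaussian form (10) can be checked numerically in finite dimensions, where the characteristic functional becomes the ordinary characteristic function of a Gaussian vector. The following is a Monte Carlo sketch (not from the thesis; all numerical values are illustrative assumptions):

```python
import numpy as np

# Monte Carlo sketch of the Gaussian characteristic functional (10) in
# finite dimensions: for X ~ N(m, K) in R^d,
#   E exp{i (x, X)} = exp{ i (m, x) - (1/2) (K x, x) }.
rng = np.random.default_rng(3)
d = 3
m = np.array([0.5, -1.0, 0.2])                 # hypothetical mean
A = rng.normal(size=(d, d))
K = A @ A.T + 0.1 * np.eye(d)                  # a positive definite covariance
x = np.array([0.3, 0.1, -0.4])                 # a fixed test vector

X = rng.multivariate_normal(m, K, size=200_000)
lhs = np.mean(np.exp(1j * X @ x))              # Monte Carlo estimate of the CF
rhs = np.exp(1j * m @ x - 0.5 * x @ K @ x)     # closed form (10)
print(abs(lhs - rhs))                          # small Monte Carlo error
```

The same comparison can be repeated for any positive definite $K$; the agreement is limited only by the Monte Carlo sampling error.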

We would like to mention separately the class in which $H$ is a set of functions $x(t)$ on a segment $[0,T]\subset\mathbb R$. Then

$$(a,x)=\int_{0}^{T}a(t)\,x(t)\,dt,\qquad (\rho(u),x)=\int_{0}^{T}\rho_{t}(u)\,x(t)\,dt \tag{12}$$

where the functions $a(t)$, $\rho_{t}(u)$ may also be generalized ones. In this case a more special form of the Poisson measure is obtained for

$$\alpha=1,\qquad \rho_{t}(u)=\begin{cases}1, & \text{if } u\in[0,t],\\ 0, & \text{otherwise,}\end{cases}$$

and $U=[0,T]$ with $\nu(du)=\lambda\cdot du$, $\lambda>0$. Then

$$\int_{U}(\rho(u),x)\,\nu(du)=\lambda\int_{0}^{T}\!\!\int_{u}^{T}x(t)\,dt\,du=(a^{*},x)$$

Setting $a=a^{*}$ we will get

$$\mathcal Y(x)=\exp\left\{\lambda\int_{0}^{T}\left[e^{\,i(\rho(u),x)}-1\right]du\right\} \tag{13}$$
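Formula (13) can be verified by simulation for a simple test function. The sketch below (not from the thesis; the constants are illustrative assumptions) takes $x(t)=c$ on $[0,T]$, so a jump of the Poisson process at time $u$ contributes $(\rho(u),x)=\int_u^T x(t)\,dt=c\,(T-u)$ to the linear functional, and Campbell's theorem gives exactly the right-hand side of (13):

```python
import numpy as np

# Monte Carlo check of (13) for x(t) = c on [0, T]: a jump at time u
# contributes f(u) = c (T - u), and
#   E exp{i sum_j f(u_j)} = exp{ lam * int_0^T (e^{i f(u)} - 1) du }.
rng = np.random.default_rng(0)
lam, T, c = 2.0, 1.0, 0.7          # rate, horizon, test-function level (assumed)
n_mc = 50_000

vals = np.empty(n_mc, dtype=complex)
for k in range(n_mc):
    u = rng.uniform(0.0, T, size=rng.poisson(lam * T))  # jump times of one path
    vals[k] = np.exp(1j * np.sum(c * (T - u)))
lhs = vals.mean()

u = np.linspace(0.0, T, 2001)
integrand = np.exp(1j * c * (T - u)) - 1.0
integral = 0.5 * np.sum(integrand[:-1] + integrand[1:]) * (T / 2000)  # trapezoid rule
rhs = np.exp(lam * integral)
print(abs(lhs - rhs))   # small Monte Carlo error
```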



Functional (13) is a simpler version of the general Poisson characteristic functional of equation (8). Another simplification, for the case when $H$ is a set of functions $x(t)$ on a segment $[0,T]\subset\mathbb R$, can be made to equation (11) by choosing again

$$\rho_{t}(u)=\begin{cases}1, & \text{if } u\in[0,t],\\ 0, & \text{otherwise,}\end{cases}$$

and $U=[0,T]$ with $\nu(du)=du$. Choosing $a$ in a suitable way we can get the Gamma characteristic functional for the infinite-dimensional Gamma distribution,

$$\mathcal Y(x)=\exp\left\{-\int_{0}^{T}\ln\!\big(1-i\sigma(\rho(u),x)\big)\,du\right\} \tag{14}$$

Finally, for the case of $H$ being a set of functions $x(t)$ on a segment $[0,T]\subset\mathbb R$, functional (6) may be considered as the characteristic functional of a random process $X(t)$ which may be represented as follows:

$$X(t)=\int_{U}\rho_{t}(u)\,\mathbf j(du)+a(t),\qquad t\in[0,T] \tag{15}$$

where $\mathbf j$ is a stochastic orthogonal measure with

$$E^{\beta}\exp\big[i\lambda\,\mathbf j(A)\big]=\exp\big[g(\lambda)\,\nu(A)\big],\qquad A\subset U$$

$$E^{\beta}\big[\mathbf j(A)\big]^{2}=\sigma^{2}\,\nu(A),\qquad A\subset U$$

Another interesting class of functionals is of form (6) with

$$g(z)=iaz-\frac{b}{2}z^{2}+\int\left(e^{\,i\lambda z}-1-\frac{i\lambda z}{1+\lambda^{2}}\right)\pi(d\lambda) \tag{16}$$

where $a\in\mathbb R$, $b\ge 0$, $\pi(d\lambda)=dM(\lambda)$ for $\lambda<0$ and $\pi(d\lambda)=dN(\lambda)$ for $\lambda>0$; $M(\lambda)$, $N(\lambda)$ satisfy the following conditions:

1. $M(\lambda)$ and $N(\lambda)$ are nondecreasing functions on $(-\infty,0)$ and $(0,\infty)$, respectively;
2. $M(-\infty)=N(\infty)=0$;
3. $\displaystyle\int_{-\varepsilon}^{0}\lambda^{2}\,dM(\lambda)<\infty$ and $\displaystyle\int_{0}^{\varepsilon}\lambda^{2}\,dN(\lambda)<\infty$ for any $\varepsilon>0$.
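Form (16) is the classical Lévy–Khinchine representation. A minimal one-dimensional sketch (not from the thesis; all parameter values are illustrative assumptions) takes $\pi$ to be a point mass $c$ at $\lambda_0$ and $b=0$; choosing $a=c\lambda_0/(1+\lambda_0^2)$ cancels the compensating term, and $\exp(g(z))$ becomes the characteristic function of $\lambda_0\cdot\mathrm{Poisson}(c)$:

```python
import numpy as np

# 1-d check of the Levy-Khinchine form (16) with pi = c * delta_{lam0}, b = 0,
# and a = c*lam0/(1+lam0^2): then g(z) = c (e^{i lam0 z} - 1), the log-CF of
# lam0 * Poisson(c).
rng = np.random.default_rng(4)
c, lam0, z = 2.0, 0.7, 1.3                      # assumed mass, jump size, test point
a = c * lam0 / (1 + lam0 ** 2)

g = 1j * a * z + c * (np.exp(1j * lam0 * z) - 1 - 1j * lam0 * z / (1 + lam0 ** 2))
lhs = np.exp(g)

Y = lam0 * rng.poisson(c, size=200_000)         # samples of the jump variable
rhs = np.mean(np.exp(1j * z * Y))               # Monte Carlo characteristic function
print(abs(lhs - rhs))
```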

If $U=[0,T]$ and $\nu(du)=du$, then, bearing in mind (12), we obtain

$$\begin{aligned}\mathcal Y(x)=\exp\Bigg\{&\,ia\int_{0}^{T}\!\!\int_{0}^{T}\rho_{s}(u)\,x(s)\,ds\,du-\frac{b}{2}\int_{0}^{T}\!\!\int_{0}^{T}B(t,s)\,x(t)\,x(s)\,ds\,dt\\
&+\int_{0}^{T}\!\!\int\left[\exp\left(i\lambda\int_{0}^{T}\rho_{s}(u)\,x(s)\,ds\right)-1-\frac{i\lambda}{1+\lambda^{2}}\int_{0}^{T}\rho_{s}(u)\,x(s)\,ds\right]\pi(d\lambda)\,du\Bigg\}\end{aligned} \tag{17}$$

where $B(t,s)=\displaystyle\int_{0}^{T}\rho_{s}(u)\,\rho_{t}(u)\,du$ and $\sigma^{2}=b+\displaystyle\int\lambda^{2}\,\pi(d\lambda)$. This functional is the characteristic one of the random process

$$X(t)=\int_{0}^{T}\rho_{t}(s)\,d\xi_{s} \tag{18}$$

where $\xi_{s}$ is a homogeneous process with independent increments which satisfies the condition $\xi_{0}=0$. Note that (17) for

$$M(\lambda)=N(\lambda)=0$$

gives rise to functionals of type (9); it gives rise to functionals of type (8) for

$$b=0,\qquad M(\lambda)=0,\qquad N(\lambda)=\begin{cases}-a, & \text{if }\lambda\le 1,\\ 0, & \text{otherwise,}\end{cases}$$

and to functionals of type (11) for

$$b=0,\qquad M(\lambda)=0,\qquad N(\lambda)=-\int_{\lambda}^{\infty}\frac{1}{\upsilon}\exp\left(-\frac{\upsilon}{\alpha}\right)d\upsilon,\qquad \alpha>0.$$

We shall mention still another characteristic functional, which corresponds to the measure we shall call the Abel measure,

$$\mathcal Y(x)=\left(1+\frac{1}{2}K(x,x)\right)^{-1} \tag{19}$$

The characteristic functional of the Cauchy measure,

$$\mathcal Y(x)=\exp\left\{-\sum_{j=1}^{\infty}a_{j}\,\big|(x,e_{j})\big|\right\} \tag{20}$$

where the $a_{j}$ are positive numbers with $\sum_{j=1}^{\infty}a_{j}<\infty$, and $e_{j}$, $j=1,2,\dots$, is a basis in $H$, is an example of a characteristic functional which cannot be represented in form (4). Characteristic functionals of form (4) also give examples of measures when the number of summands in the exponent is finite, the last summand has the index $n=2p$, $p>1$, and $\sigma_{2p}=(-1)^{p+1}\sigma^{2}$. In particular, let us mention the case when



$$\mathcal Y(x)=\exp\left\{-\frac{\sigma^{2}}{(2p)!}\int_{U}(\rho(u),x)^{2p}\,\nu(du)\right\} \tag{21}$$

where the notation is the same as for (5). For the space of functions on $[0,T]$, (21) assumes the form

$$\mathcal Y(x)=\exp\left\{-\frac{\sigma^{2}}{(2p)!}\int_{U}\left[\int_{0}^{T}\rho_{t}(u)\,x(t)\,dt\right]^{2p}du\right\} \tag{22}$$

Finally, when there is a finite series in the exponent of (4) and the last summand enters with an imaginary term, we obtain pseudomeasures (see EGOROV, A.D. & SOBOLEVSKY, P.I. & YANOVICH, L.A., Functional Integrals: Approximate Evaluation and Applications, p. 5, for details), namely the Feynman measures. For example,

$$\mathcal Y(x)=\exp\left\{-\frac{i}{2}\int_{U}(\rho(u),x)^{2}\,\nu(du)\right\} \tag{23}$$

and, in particular,

$$\mathcal Y(x)=\exp\left\{-\frac{i}{2}\int_{U}\left[\int_{0}^{T}\rho_{t}(u)\,x(t)\,dt\right]^{2}du\right\} \tag{24}$$



3.10. References

ASH, R.B., 1972, Measure, Integration and Functional Analysis. Academic Press.
ASH, R.B., 2000, Probability and Measure Theory. Academic Press.
BERAN, M.J., 1968, Statistical Continuum Mechanics. Interscience Publishers.
DORDNOV, A.A., 1977, Approximate Computation of Characteristic Functionals of Some Classes of Probability Measures. Issledovaniya po Prikladnoi Matematike 4, 83-91.
EGOROV, A.D., SOBOLEVSKY, P.I. & YANOVICH, L.A., 1993, Functional Integrals: Approximate Evaluation and Applications. Kluwer Academic Publishers.
FOMIN, S.V., 1970, Some New Problems and Results in Nonlinear Functional Analysis. Mathematika 25, 57-65.
FRIEDRICHS, K.O., SHAPIRO, H.N. et al., 1976, Integration of Functionals. Courant Institute of Mathematical Sciences.
FRISTEDT, B. & GRAY, L., 1997, A Modern Approach to Probability Theory. Birkhäuser.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1996, Theory of Random Processes. Dover Publications.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1974, The Theory of Stochastic Processes I. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes II. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes III. Springer.
HOPF, E., 1952, Statistical Hydromechanics and Functional Calculus. Journal of Rational Mechanics and Analysis 1, 87-123.
KANDELAKI, N.P. & SAZONOV, V.V., 1964, On Central Limit Theorem for Random Elements with Values in Hilbert Spaces. Theory of Probability and Its Applications IX, 38-46.
KLIMOV, G., 1986, Probability Theory and Mathematical Statistics. MIR Publishers.
KOLMOGOROV, A.N., 1956, Foundations of the Theory of Probability.
KOTULSKI, Z., 1989, Equations for the Characteristic Functional and Moments of the Stochastic Evolutions with an Application. SIAM J. Appl. Math. 49, 296-313.
KOVALENKO, I.N., KUZNETSOV, N.YU. & SHURENKOV, V.M., 1996, Models of Random Processes. CRC Press.
KUO, H.H., 1975, Gaussian Measures in Banach Spaces. Springer.
LANGOUCHE, F., ROEKAERTS, D. & TIRAPEGUI, E., 1979, Functional Integral Methods for Stochastic Fields. Physica 95A, 252-274.
MONIN, A.S. & YAGLOM, A.M., 1965, Statistical Fluid Mechanics: Mechanics of Turbulence. MIT Press.
PARTHASARATHY, K.R., 1967, Probability Measures in Metric Spaces. Academic Press.
PFEFFER, W.F., 1977, Integrals and Measures. Marcel Dekker, Inc.
PROHOROV, YU.V., 1956, Convergence of Random Processes and Limit Theorems in Probability Theory. Theory of Probability and Its Applications I, 157-214.
PROHOROV, YU.V. & ROZANOV, YU.A., 1969, Probability Theory. Springer.
PROHOROV, YU.V. & SAZONOV, V.V., 1961, Some Results Associated with Bochner's Theorem. Theory of Probability and Its Applications VI, 82-87.
PUGACHEV, V.S. & SINITSYN, 1987, Stochastic Differential Systems. John Wiley & Sons.
ROSEN, G., 1967, Functional Integration Theory for Incompressible Fluid Turbulence. The Physics of Fluids 10, 2614-2619.
ROSEN, G., OKOLOWSKI, J.A. & ECKSTUT, G., 1969, Functional Integration Theory for Incompressible Fluid Turbulence. II. Journal of Mathematical Physics 10, 415-421.
ROSEN, G., 1960, Turbulence Theory and Functional Integration I. The Physics of Fluids 3, 519-524.
ROSEN, G., 1960, Turbulence Theory and Functional Integration II. The Physics of Fluids 3, 525-528.
SAZONOV, V.V., 1958, A Remark on the Characteristic Functional. Theory of Probability and Its Applications III, 188-192.
SIMON, B., 1979, Functional Integration and Quantum Physics. Academic Press.
SKOROKHOD, A.V., 1974, Integration in Hilbert Spaces. Springer.
SKOROKHOD, A.V., 1984, Random Linear Operators. D. Reidel Publishing Company.
SOBCZYK, K., 1991, Stochastic Differential Equations. Kluwer Academic Publishers.
SOIZE, C., 1994, The Fokker-Planck Equation and its Explicit Steady State Solutions. World Scientific.
TATARSKII, V.I., 1995, Characteristic Functionals for one Class of Non-Gaussian Random Functions. Waves in Random Media 5, 243-252.
TESTA, F.J. & ROSEN, G., 1974, Theory for Incompressible Fluid Flow. Journal of The Franklin Institute 297, 127-133.
VISHIK, M.J. & FURSIKOV, A.V., 1980, Mathematical Problems of Statistical Hydromechanics. Kluwer Academic Publishers.
VOLTERRA, V., 1959, Theory of Functionals and of Integral and Integro-Differential Equations. Dover Publications.
VULIKH, B.Z., 1976, A Brief Course in the Theory of Functions of a Real Variable. MIR Publishers.
WÓDKIEWICZ, K., 1981, Functional Representation of Non-Markovian Probability Distribution in Statistical Mechanics. Physics Letters 84A, 56-58.





Chapter 4

Stochastic Calculus – Principles and Results

In various problems of mathematical physics, many physical quantities are represented using stochastic processes. In the context of mathematical modelling we come naturally to concepts such as continuity, derivatives and integrals of the functions that represent these physical quantities. It is therefore essential to define these concepts for stochastic processes. In this direction three kinds of convergence have been developed, namely

• Mean Square Convergence
• Convergence in Probability
• Almost Sure Convergence

The definitions of the above are immediate generalizations of the convergence of sequences of random variables, given in Chapter 1/Section 1.3.5. In this chapter we present the basic principles and results of mean square calculus. Using this calculus we reduce the problem of differentiation or integration of a stochastic process to the differentiation or integration of its covariance function. In Sections 4.1.1-4.1.4 we present some basic definitions and results concerning the classical notions of m.s.-convergence, continuity and differentiation of second-order stochastic processes. Section 4.1.5 deals with the issue of m.s.-integration of a stochastic process with respect to another (independent) process, and Section 4.1.6 with the case of integration over martingales with non-anticipative dependence. Section 4.2 includes some fundamental results associated with analytical properties of sample functions, such as continuity, differentiation and integration. In what follows we assume the existence of a probability space $(\Omega,\mathcal U(\Omega),\mathcal P)$. The stochastic process $X:\Omega\to C(I)$ is defined on it, and the induced probability space is $\big(C(I),\mathcal U(C(I)),\mathcal P_{C(I)}\big)$.


94 CHAPTER 4 STOCHASTIC CALCULUS – PRINCIPLES AND RESULTS

4.1. Process of 2nd Order – Mean-Square Calculus

4.1.1. Preliminaries

Definition 1.1 : A stochastic process $X(t;\beta)$, $t\in I$, is called a second-order or Hilbert stochastic process iff

$$E^{\beta}\big(X(t;\beta)^{2}\big)<\infty \tag{1}$$

We shall denote this space of stochastic processes by $m(I)$.

The above definition can easily be extended to stochastic processes taking complex or vector values.

Remark 1.2 : The Schwarz inequality (Chapter 1/Section 1.3.2) implies that a second-order stochastic process always has a covariance function.

Theorem 1.3 [Completeness property] : For a sequence $X_{n}$ of stochastic processes in $m(I)$ to converge in $m(I)$ to some limit, it is necessary and sufficient that the sequence $X_{n}$ be fundamental. In other words, the space $m(I)$ is complete.

Proof : The necessity is, as for general metric spaces, a consequence of the triangle inequality. Let us prove the sufficiency. Since

$$\left(\sqrt{E^{\beta}\big(X_{n}^{2}\big)}-\sqrt{E^{\beta}\big(X_{n'}^{2}\big)}\right)^{2}\le E^{\beta}\big[(X_{n}-X_{n'})^{2}\big]$$

if the sequence is fundamental, the sequence $X_{n}$ is bounded; thus $E^{\beta}(X_{n}^{2})<c$. Furthermore, the fact that the sequence is fundamental in $m(I)$ implies that it is fundamental in measure. Consequently (see GIKHMAN, I.I. & SKOROKHOD, A.V., 1996, Theory of Random Processes) the sequence $X_{n}$ contains a subsequence $X_{n_{k}}$ that converges almost uniformly to a limit $X$. Using Fatou's lemma, we obtain

$$c\ge\liminf_{k}E^{\beta}\big(X_{n_{k}}^{2}\big)\ge E^{\beta}\Big(\liminf_{k}X_{n_{k}}^{2}\Big)=E^{\beta}\big(X^{2}\big)$$

from which it follows that $X\in m(I)$. Let $\varepsilon$ denote an arbitrary positive number. Since the sequence $X_{n}$ is fundamental, it follows that for all $n\ge n_{0}$,

$$\varepsilon\ge\liminf_{k\to\infty}E^{\beta}\big[(X_{n}-X_{n_{k}})^{2}\big]\ge E^{\beta}\Big[\liminf_{k\to\infty}(X_{n}-X_{n_{k}})^{2}\Big]=E^{\beta}\big[(X_{n}-X)^{2}\big]$$

which proves the convergence of the sequence $X_{n}$ to $X\in m(I)$.

4.1.2. Mean-Square Convergence

Definition 1.4 : Let $X(t;\beta)$, $t\in I$, be a second-order stochastic process. The process $X(t;\beta)$ converges in the mean square (m.s.) to a random variable $A(\beta)$ as $t\to t_{0}$ iff

$$\lim_{t\to t_{0}}E^{\beta}\Big[\big(X(t;\beta)-A(\beta)\big)^{2}\Big]=0 \tag{2}$$

For the sake of convenience, m.s. convergence will be denoted as

$$\lim_{t\to t_{0}}E^{\beta}\Big[\big(X(t;\beta)-A(\beta)\big)^{2}\Big]=0\ \Longleftrightarrow\ \mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)=A(\beta) \tag{3}$$

Remark 1.5 : A necessary and sufficient condition for the m.s.-convergence of the s.p. $X(t;\beta)$, $t\in T$, as $t\to t_{0}$, is the Cauchy criterion

$$\lim_{t_{1}\to t_{0},\,t_{2}\to t_{0}}E^{\beta}\Big[\big(X(t_{1};\beta)-X(t_{2};\beta)\big)^{2}\Big]=0 \tag{4}$$

since the space of second-order stochastic processes is a Hilbert space (Theorem 1.3).

Theorem 1.6 : Let $X(t;\beta)$, $t\in I$, be a second-order stochastic process. The process $X(t;\beta)$ converges in the mean square to a random variable $A(\beta)$ as $t\to t_{0}$ iff the covariance function $R_{XX}(t_{1},t_{2})$ converges to a finite limit as $t_{1}\to t_{0}$, $t_{2}\to t_{0}$.

Proof : Let $X(t;\beta)\xrightarrow{\ m.s.\ }A(\beta)$ as $t\to t_{0}$. Then

$$\begin{aligned}E^{\beta}\big[X(t_{1};\beta)X(t_{2};\beta)\big]&=E^{\beta}\Big[\big(X(t_{1};\beta)-A(\beta)\big)\big(X(t_{2};\beta)-A(\beta)\big)\Big]+\\
&\quad+E^{\beta}\Big[\big(X(t_{1};\beta)-A(\beta)\big)A(\beta)\Big]+E^{\beta}\Big[\big(X(t_{2};\beta)-A(\beta)\big)A(\beta)\Big]+E^{\beta}\big(A(\beta)^{2}\big)\end{aligned}$$

Using the Schwarz inequality, the first three terms on the right-hand side tend to zero, and one obtains

$$R_{XX}(t_{1},t_{2})=E^{\beta}\big[X(t_{1};\beta)X(t_{2};\beta)\big]\to E^{\beta}\big(A(\beta)^{2}\big)\quad\text{as }t_{1}\to t_{0},\ t_{2}\to t_{0}.$$

Conversely, let $R_{XX}(t_{1},t_{2})\to c$ as $t_{1}\to t_{0}$, $t_{2}\to t_{0}$, where $c<\infty$. Then

$$\begin{aligned}E^{\beta}\Big[\big(X(t_{1};\beta)-X(t_{2};\beta)\big)^{2}\Big]&=E^{\beta}\big(X(t_{1};\beta)^{2}\big)+E^{\beta}\big(X(t_{2};\beta)^{2}\big)-2\,E^{\beta}\big[X(t_{1};\beta)X(t_{2};\beta)\big]\\
&=R_{XX}(t_{1},t_{1})+R_{XX}(t_{2},t_{2})-2R_{XX}(t_{1},t_{2})\to c+c-2c=0\end{aligned}$$

as $t_{1}\to t_{0}$, $t_{2}\to t_{0}$. This relationship and the Cauchy m.s. criterion imply that $X(t;\beta)\xrightarrow{\ m.s.\ }A(\beta)$ as $t\to t_{0}$.

Proposition 1.7 : The most important properties of m.s.-convergence are:

a) For deterministic functions, m.s.-convergence is equivalent to ordinary convergence.

b) The operator l.i.m. is linear, i.e. if $a$ and $b$ are deterministic constants, $X(t;\beta)$, $Y(t;\beta)$, $t\in T$, are two second-order stochastic processes, and the limits $\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)=X_{0}(\beta)$ and $\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}Y(t;\beta)=Y_{0}(\beta)$ exist, then

$$\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}\big[a\,X(t;\beta)+b\,Y(t;\beta)\big]=a\,X_{0}(\beta)+b\,Y_{0}(\beta) \tag{5}$$



c) If $\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)=X_{0}(\beta)$ exists, then the limit $\lim_{t\to t_{0}}E^{\beta}\big(X(t;\beta)\big)$ exists and the following relation holds:

$$E^{\beta}\Big(\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)\Big)=\lim_{t\to t_{0}}E^{\beta}\big(X(t;\beta)\big) \tag{6}$$

i.e. the operators $E^{\beta}(\cdot)$ and $\mathop{\mathrm{l.i.m.}}(\cdot)$ are permutable.

d) If $X(t;\beta)$, $Y(t;\beta)$, $t\in T$, are two second-order stochastic processes and the limits $\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)=X_{0}(\beta)$ and $\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}Y(t;\beta)=Y_{0}(\beta)$ exist, then the limit $\lim_{t\to t_{0}}E^{\beta}\big(X(t;\beta)\,Y(t;\beta)\big)$ exists and the following relation holds:

$$E^{\beta}\Big(\mathop{\mathrm{l.i.m.}}_{t\to t_{0}}X(t;\beta)\,Y(t;\beta)\Big)=\lim_{t\to t_{0}}E^{\beta}\big(X(t;\beta)\,Y(t;\beta)\big) \tag{7}$$

Proof : The above assertions are direct consequences of the definition of m.s.-convergence. For details we refer to ATHANASSOULIS, G.A., Stochastic Modeling and Forecasting of Ship Systems (Lecture Notes NTUA).

4.1.3. Mean-Square Continuity

Definition 1.8 : A second-order stochastic process $X(t;\beta)$, $t\in I$, is continuous in the mean-square sense (m.s.-continuous) at $t_{0}\in I$ iff

$$\lim_{t\to t_{0}}E^{\beta}\Big[\big(X(t;\beta)-X(t_{0};\beta)\big)^{2}\Big]=0 \tag{8}$$

A s.p. $X(t;\beta)$ is m.s.-continuous on $I$ if it is m.s.-continuous at every point $t\in I$.

Theorem 1.9 [m.s.-continuity criterion] : A second-order s.p. $X(t;\beta)$, $t\in I$, is m.s.-continuous at $t\in I$ iff the covariance function $R_{XX}(t_{1},t_{2})$ is continuous at $t_{1}=t_{2}=t$.

Proof : Let $R_{XX}(t_{1},t_{2})$ be continuous at $t_{1}=t_{2}=t$. Then

$$E^{\beta}\Big[\big(X(t+h;\beta)-X(t;\beta)\big)^{2}\Big]=R_{XX}(t+h,t+h)-R_{XX}(t+h,t)-R_{XX}(t,t+h)+R_{XX}(t,t)\xrightarrow{\ h\to 0\ }0$$

therefore $X(t;\beta)$ is m.s.-continuous at $t$.

Conversely, let us assume that $X(t;\beta)$ is m.s.-continuous at $t$. Since

$$\begin{aligned}R_{XX}(t+h,t+k)-R_{XX}(t,t)&=E^{\beta}\Big[\big(X(t+h;\beta)-X(t;\beta)\big)\big(X(t+k;\beta)-X(t;\beta)\big)\Big]+\\
&\quad+E^{\beta}\Big[\big(X(t+h;\beta)-X(t;\beta)\big)X(t;\beta)\Big]+E^{\beta}\Big[\big(X(t+k;\beta)-X(t;\beta)\big)X(t;\beta)\Big]\end{aligned}$$

use of the Schwarz inequality implies that $R_{XX}(t_{1},t_{2})$ is continuous at $(t,t)$.
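A numerical sketch of the criterion above (not from the thesis; the rate value is an illustrative assumption): the Poisson process, taken up again in Remark 1.11, has covariance $\nu\min(t_1,t_2)$, which is continuous on the diagonal, so the process is m.s.-continuous even though its sample paths jump. The mean-square increment $E[(N(t+h)-N(t))^2]=\nu h+(\nu h)^2$ indeed shrinks to zero with $h$:

```python
import numpy as np

# The increment N(t+h) - N(t) of a rate-nu Poisson process is Poisson(nu*h),
# so its mean square is nu*h + (nu*h)^2 -> 0 as h -> 0: m.s.-continuity,
# despite the jumps in every sample path.
rng = np.random.default_rng(1)
nu = 3.0
mean_square = {}
for h in [1.0, 0.1, 0.01]:
    incr = rng.poisson(nu * h, size=200_000).astype(float)
    mean_square[h] = np.mean(incr ** 2)   # estimates E[(N(t+h) - N(t))^2]

print(mean_square)   # values shrink toward 0 with h
```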



Theorem 1.10 : If the covariance function $R_{XX}(t_{1},t_{2})$ is continuous at every diagonal point $t_{1}=t_{2}$, then it is continuous everywhere on $I\times I$.

Proof :

$$\begin{aligned}\big|R_{XX}(t_{1}+h,t_{2}+k)-R_{XX}(t_{1},t_{2})\big|&=\big|E^{\beta}\big[X(t_{1}+h;\beta)\,X(t_{2}+k;\beta)\big]-E^{\beta}\big[X(t_{1};\beta)\,X(t_{2};\beta)\big]\big|\\
&\le\Big|E^{\beta}\Big[\big(X(t_{1}+h;\beta)-X(t_{1};\beta)\big)\,X(t_{2}+k;\beta)\Big]\Big|+\\
&\quad+\Big|E^{\beta}\Big[\big(X(t_{2}+k;\beta)-X(t_{2};\beta)\big)\,X(t_{1};\beta)\Big]\Big|\end{aligned}$$

Using the Schwarz inequality, it is easily seen that the above expression tends to zero as $h\to 0$ and $k\to 0$.

Remark 1.11 : It should be emphasized that m.s.-continuity of a s.p. $X(t;\beta)$, $t\in I$, implies that it is continuous in probability. Indeed, the Tchebycheff inequality gives

$$\mathcal P\Big(\big|X(t_{0}+h;\beta)-X(t_{0};\beta)\big|>\varepsilon\Big)\le\frac{E^{\beta}\Big[\big(X(t_{0}+h;\beta)-X(t_{0};\beta)\big)^{2}\Big]}{\varepsilon^{2}} \tag{9}$$

M.s.-continuity does not, however, imply continuity of the realizations of the process. As an example one can take the Poisson process. Its covariance function $R_{XX}(t_{1},t_{2})=\nu\min(t_{1},t_{2})$ is continuous for all $t_{1},t_{2}>0$, but almost all realizations of this process have discontinuities over a finite interval of time (see Chapter 2/Section 2.2.1.c).

4.1.4. Mean-Square Differentiation

Definition 1.12 : A second-order s.p. $X(t;\beta)$, $t\in I$, has mean-square derivative $X'(t_{0};\beta)$ at $t_{0}\in I$ if there is a r.v. $X'(t_{0};\beta)$ such that

$$\lim_{h\to 0}E^{\beta}\left[\left(\frac{X(t_{0}+h;\beta)-X(t_{0};\beta)}{h}-X'(t_{0};\beta)\right)^{2}\right]=0 \tag{10}$$

A s.p. $X(t;\beta)$ is m.s.-differentiable on $I$ if it is m.s.-differentiable at every point $t\in I$. Higher-order derivatives can be defined analogously.

Theorem 1.13 [m.s.-differentiation criterion] : A second-order s.p. $X(t;\beta)$, $t\in I$, is m.s.-differentiable at $t_{0}\in I$ iff the generalized second-order derivative of the covariance function, $\dfrac{\partial^{2}R_{XX}(t_{1},t_{2})}{\partial t_{1}\,\partial t_{2}}$, exists and is finite at the diagonal point $(t_{0},t_{0})$.

Proof : The theorem follows directly from Theorem 1.6, which asserts that condition (10) is satisfied iff

$$E^{\beta}\big(Y_{h}\,Y_{k}\big)\ \xrightarrow{\ h,k\to 0\ }\ \text{finite limit}$$

where

$$Y_{h}=\frac{1}{h}\big[X(t+h;\beta)-X(t;\beta)\big] \tag{11}$$

But

$$\begin{aligned}E^{\beta}\big(Y_{h}\,Y_{k}\big)&=\frac{1}{hk}\,E^{\beta}\Big(\big[X(t+h;\beta)-X(t;\beta)\big]\cdot\big[X(t+k;\beta)-X(t;\beta)\big]\Big)\\
&=\frac{1}{hk}\Big[R_{XX}(t+h,t+k)-R_{XX}(t,t+k)-R_{XX}(t+h,t)+R_{XX}(t,t)\Big]\end{aligned}$$

Therefore, in order to ensure the existence of the m.s.-derivative $X'(t;\beta)$ of the s.p. $X(t;\beta)$, it is necessary and sufficient that the above expression have a finite limit, i.e. that the generalized second-order derivative, defined as

$$\frac{\partial^{2}R_{XX}(t_{1},t_{2})}{\partial t_{1}\,\partial t_{2}}\bigg|_{(t,t)}=\lim_{h,k\to 0}\frac{1}{hk}\Big[R_{XX}(t+h,t+k)-R_{XX}(t,t+k)-R_{XX}(t+h,t)+R_{XX}(t,t)\Big]$$

exists and is finite at the point $(t,t)$.
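The mixed second difference appearing in the proof can be evaluated numerically. The sketch below (not from the thesis; both covariance functions are illustrative assumptions) contrasts a smooth covariance, for which the quotient converges, with the Wiener-type covariance $\min(t_1,t_2)$, for which it diverges like $1/h$, signalling that the process is not m.s.-differentiable:

```python
import math

# Mixed second difference from the proof of Theorem 1.13, at a diagonal
# point (t, t), with h = k:
#   [R(t+h, t+h) - R(t, t+h) - R(t+h, t) + R(t, t)] / h^2.
def mixed_diff(R, t, h):
    return (R(t + h, t + h) - R(t, t + h) - R(t + h, t) + R(t, t)) / (h * h)

R_smooth = lambda t1, t2: math.exp(-(t1 - t2) ** 2)  # smooth example covariance
R_wiener = lambda t1, t2: min(t1, t2)                # continuous, kinked on the diagonal

for h in [1e-1, 1e-2, 1e-3]:
    print(h, mixed_diff(R_smooth, 1.0, h), mixed_diff(R_wiener, 1.0, h))
# smooth case: tends to the mixed partial derivative 2; Wiener case: grows like 1/h
```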

Proposition 1.14 : The m.s.-derivative has the following properties:

a) m.s.-differentiability of $X(t;\beta)$ at $t_{0}\in I$ implies m.s.-continuity of $X(t;\beta)$ at $t_{0}\in I$.

b) If $X(t;\beta)$, $Y(t;\beta)$, $t\in I$, are two m.s.-differentiable second-order s.p.'s, then the m.s.-derivative of $a\,X(t;\beta)+b\,Y(t;\beta)$ exists and

$$\frac{d}{dt}\big[a\,X(t;\beta)+b\,Y(t;\beta)\big]=a\,\frac{dX(t;\beta)}{dt}+b\,\frac{dY(t;\beta)}{dt} \tag{12}$$

where $a$ and $b$ are constants.

c) If $X(t;\beta)$, $t\in I$, is $n$ times m.s.-differentiable on $I$, the means of these m.s.-derivatives of $X(t;\beta)$ exist on $I$ and, moreover, the following equation holds:

$$E^{\beta}\left(\frac{d^{n}X(t;\beta)}{dt^{n}}\right)=\frac{d^{n}}{dt^{n}}\,E^{\beta}\big(X(t;\beta)\big) \tag{13}$$

Proof : a) is a direct consequence of Theorems 1.9 and 1.13; b) follows from Proposition 1.7b and the definition of the m.s.-derivative; c) can be proved using Proposition 1.7c and the definition of the m.s.-derivative.

Theorem 1.15 : If $X(t;\beta)$, $t\in I$, has an m.s.-derivative for all $t\in I$, then the partial derivatives $\dfrac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{1}}$, $\dfrac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{2}}$, $\dfrac{\partial^{2}R_{XX}(t_{1},t_{2})}{\partial t_{1}\,\partial t_{2}}$ exist and

$$R_{X'X}(t_{1},t_{2})=E^{\beta}\big(X'(t_{1};\beta)\,X(t_{2};\beta)\big)=\frac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{1}} \tag{14}$$

$$R_{XX'}(t_{1},t_{2})=E^{\beta}\big(X(t_{1};\beta)\,X'(t_{2};\beta)\big)=\frac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{2}} \tag{15}$$

$$R_{X'X'}(t_{1},t_{2})=E^{\beta}\big(X'(t_{1};\beta)\,X'(t_{2};\beta)\big)=\frac{\partial^{2}R_{XX}(t_{1},t_{2})}{\partial t_{1}\,\partial t_{2}} \tag{16}$$

Proof : If $X(t;\beta)$, $t\in I$, has m.s.-derivative $X'(t;\beta)$ at $t\in I$, then

$$\frac{1}{h}\big[X(t+h;\beta)-X(t;\beta)\big]\xrightarrow{\ m.s.\ }X'(t;\beta)\quad\text{as }h\to 0$$

Therefore,

$$\lim_{h\to 0}\frac{R_{XX}(t_{1}+h,t_{2})-R_{XX}(t_{1},t_{2})}{h}=\lim_{h\to 0}E^{\beta}\left(\frac{X(t_{1}+h;\beta)-X(t_{1};\beta)}{h}\,X(t_{2};\beta)\right)=$$
$$=E^{\beta}\left(\mathop{\mathrm{l.i.m.}}_{h\to 0}\frac{X(t_{1}+h;\beta)-X(t_{1};\beta)}{h}\,X(t_{2};\beta)\right)=E^{\beta}\big(X'(t_{1};\beta)\,X(t_{2};\beta)\big)$$

The above implies that the derivative $\dfrac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{1}}$ exists and eq. (14) holds. Similarly, it can be shown that the derivative $\dfrac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{2}}$ exists and formula (15) holds. Finally,

$$\lim_{k\to 0}\frac{1}{k}\left(\frac{\partial R_{XX}(t_{1},t_{2}+k)}{\partial t_{1}}-\frac{\partial R_{XX}(t_{1},t_{2})}{\partial t_{1}}\right)=\lim_{k\to 0}E^{\beta}\left(X'(t_{1};\beta)\,\frac{X(t_{2}+k;\beta)-X(t_{2};\beta)}{k}\right)=E^{\beta}\big(X'(t_{1};\beta)\,X'(t_{2};\beta)\big)$$

shows that formula (16) is valid.

Remark 1.16 : Analogous results for derivatives of higher order can easily be derived. The covariance function of the derivatives of order $k$ and $l$ of a given process is expressed by the general formula

$$R_{X^{(k)}X^{(l)}}(t_{1},t_{2})=E^{\beta}\big(X^{(k)}(t_{1};\beta)\,X^{(l)}(t_{2};\beta)\big)=\frac{\partial^{k+l}R_{XX}(t_{1},t_{2})}{\partial t_{1}^{k}\,\partial t_{2}^{l}} \tag{17}$$

if the appropriate m.s.-derivatives exist.

Let us assume that $R_{XX}(t_{1},t_{2})$ has an infinite number of derivatives on $I\times I$. Let

$$X_{n}(t;\beta)=X(0;\beta)+\frac{t}{1!}X'(0;\beta)+\cdots+\frac{t^{n}}{n!}X^{(n)}(0;\beta)$$

A simple transformation of the expression $E^{\beta}\big[(X(t;\beta)-X_{n}(t;\beta))^{2}\big]$ leads to the following.

Theorem 1.17 : A second-order s.p. $X(t;\beta)$, $t\in I$, is m.s.-analytic iff its covariance function $R_{XX}(t_{1},t_{2})$ is analytic at each diagonal point $(t,t)\in I\times I$.

Remark 1.18 : As in the case of m.s.-continuity, it should be emphasized that m.s.-differentiability of a s.p. $X(t;\beta)$, $t\in I$, does not imply differentiability of the realizations of the stochastic process. For the conditions required for differentiability of the realizations see Section 2.

Remark 1.19 : It follows directly from Theorem 1.13 that a weakly stationary s.p. $X(t;\beta)$, $t\in I$, is m.s.-differentiable at $t_{0}\in I$ iff its covariance function $R_{XX}(\tau)$, $\tau=t_{2}-t_{1}$, has a second-order derivative at $\tau=0$. In this case formula (17) reduces to the following one:

$$R_{X^{(k)}X^{(l)}}(t_{1},t_{2})=E^{\beta}\big(X^{(k)}(t_{1};\beta)\,X^{(l)}(t_{2};\beta)\big)=(-1)^{k}\,\frac{d^{\,k+l}R_{XX}(\tau)}{d\tau^{\,k+l}} \tag{18}$$

4.1.5. M.S. Integration I – Integration over an Independent Stochastic Process

Let $X(t;\beta)$, $t\in I$, and $\Psi(t;\beta)$, $t\in I$, be second-order stochastic processes. In what follows we will consider, for various cases of the above s.p.'s, the meaning of the integral

$$\mathcal I(\beta)=\int_{I}X(t;\beta)\,d\Psi(t;\beta) \tag{19}$$

One very important case, which yields many of the known integrals of practical importance, is when the s.p.'s $X(t;\beta)$, $t\in I$, and $\Psi(t;\beta)$, $t\in I$, are mutually independent, i.e.

$$E^{\beta}\big(X(t;\beta)\cdot\Psi(t;\beta)\big)=E^{\beta}\big(X(t;\beta)\big)\cdot E^{\beta}\big(\Psi(t;\beta)\big)$$

For this case we have the following.

Theorem 1.20 : Let $X(t;\beta)$, $t\in I$, and $\Psi(t;\beta)$, $t\in I$, be orthogonal second-order stochastic processes. Then the s.p. $X(t;\beta)$ is integrable with respect to $\Psi(t;\beta)$ iff

1. $R_{XX}(t_{1},t_{2})$ is integrable on $I\times I$;
2. $R_{\Psi\Psi}(t_{1},t_{2})$ has bounded variation on $I\times I$.

In this case we have

$$E^{\beta}\left[\left(\int_{I}X(t;\beta)\,d\Psi(t;\beta)\right)^{2}\right]=\iint_{I\times I}R_{XX}(t_{1},t_{2})\,d^{2}R_{\Psi\Psi}(t_{1},t_{2}) \tag{20}$$

Proof : Let $[a,b]\subset I$ be a finite interval and let the points $t_0,t_1,\dots,t_n$ define a partition of $[a,b]$ such that $a=t_0<t_1<\dots<t_n=b$. Let $\xi_i\in[t_i,t_{i+1}]$. We form the random variable

$$\mathcal{I}_n(\beta)=\sum_{i=1}^{n-1} X(\xi_i;\beta)\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right] \tag{21}$$

By virtue of Theorem 1.6, $\mathcal{I}_n(\beta)\xrightarrow{\text{m.s.}}\mathcal{I}(\beta)$ iff $E^{\beta}\{\mathcal{I}_n(\beta)\,\mathcal{I}_m(\beta)\}$ converges to a finite limit as $n,m\to\infty$. Thus we have from eq. (21),

$$E^{\beta}\left\{\mathcal{I}_n(\beta)\,\mathcal{I}_m(\beta)\right\}=\sum_{j=1}^{m-1}\sum_{i=1}^{n-1} E^{\beta}\left\{X(\xi_i^n;\beta)X(\xi_j^m;\beta)\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_j^m;\beta)\right]\left[\Psi(t_{i+1}^n;\beta)-\Psi(t_i^n;\beta)\right]\right\} \tag{22}$$

Since $X(t;\beta)$ and $\Psi(t;\beta)$ are orthogonal we will have

$$E^{\beta}\left\{\mathcal{I}_n(\beta)\,\mathcal{I}_m(\beta)\right\}=\sum_{j=1}^{m-1}\sum_{i=1}^{n-1} R_{XX}(\xi_i^n,\xi_j^m)\left[R_{\Psi\Psi}(t_{j+1}^m,t_{i+1}^n)-R_{\Psi\Psi}(t_{j+1}^m,t_i^n)-R_{\Psi\Psi}(t_j^m,t_{i+1}^n)+R_{\Psi\Psi}(t_j^m,t_i^n)\right] \tag{23}$$

It can be shown that the above sum converges iff the conditions mentioned above hold.

Some basic properties of the integral of a s.p. over an independent s.p. are as follows,

Theorem 1.21 : Let $X_i(t;\beta)$, $t\in I$ and $\Psi_i(t;\beta)$, $t\in I$ be orthogonal second-order stochastic processes satisfying the assumptions of Theorem 1.20. Then

a) $\displaystyle\int_I \left[a_1X_1(t;\beta)+a_2X_2(t;\beta)\right]d\Psi(t;\beta)=a_1\int_I X_1(t;\beta)\,d\Psi(t;\beta)+a_2\int_I X_2(t;\beta)\,d\Psi(t;\beta)$

b) $\displaystyle\int_I X(t;\beta)\,d\left[a_1\Psi_1(t;\beta)+a_2\Psi_2(t;\beta)\right]=a_1\int_I X(t;\beta)\,d\Psi_1(t;\beta)+a_2\int_I X(t;\beta)\,d\Psi_2(t;\beta)$

c) $\displaystyle E^{\beta}\left\{\int_I X(t;\beta)\,d\Psi(t;\beta)\right\}=\int_I m_X(t)\,dm_{\Psi}(t)$

d) Moreover, if $X(t;\beta)$ is differentiable with derivative $X'(t;\beta)$, the following integration-by-parts formula holds

$$\int_I X(t;\beta)\,d\Psi(t;\beta)=\Big[X(t;\beta)\,\Psi(t;\beta)\Big]_a^b-\int_I X'(t;\beta)\,\Psi(t;\beta)\,dt$$

Proof : Assertions a), b) and c) follow directly, by considering the sequence of eq. (21). For the proof of d) we refer to SOBCZYK, K., (Stochastic Differential Equations, p. 111).
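In the deterministic special case $\Psi(t;\beta)\equiv\psi(t)$ (treated as case I below), property d) reduces to ordinary Riemann–Stieltjes integration by parts and can be checked numerically. The following sketch is only an illustration: the choices $X(t)=t^2$, $\psi(t)=\sin t$ and the grid sizes are hypothetical, not taken from the text.

```python
# Numerical illustration of the integration-by-parts property d) in the
# deterministic case X(t) = t^2, psi(t) = sin(t) on I = [0, 1]:
#   int_a^b X d(psi) = [X * psi]_a^b - int_a^b X'(t) psi(t) dt
import math

def riemann_stieltjes(x, psi, a, b, n=200_000):
    """Left-endpoint Riemann-Stieltjes sum of int_a^b x(t) d psi(t)."""
    h = (b - a) / n
    return sum(x(a + i * h) * (psi(a + (i + 1) * h) - psi(a + i * h))
               for i in range(n))

a, b = 0.0, 1.0
x = lambda t: t * t          # X(t), illustrative choice
dx = lambda t: 2.0 * t       # X'(t)
psi = math.sin               # psi(t), illustrative choice

lhs = riemann_stieltjes(x, psi, a, b)

# Ordinary Riemann integral of X'(t) * psi(t), midpoint rule.
n = 200_000
h = (b - a) / n
integral = sum(dx(a + (i + 0.5) * h) * psi(a + (i + 0.5) * h)
               for i in range(n)) * h
rhs = x(b) * psi(b) - x(a) * psi(a) - integral

assert abs(lhs - rhs) < 1e-4   # both sides agree up to discretisation error
```

Both sides approximate $\int_0^1 t^2\cos t\,dt$, so the agreement confirms the formula pathwise for smooth integrators.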

We will now study specific integrals that arise from different cases of orthogonal s.p.'s $X(t;\beta)$, $t\in I$ and $\Psi(t;\beta)$, $t\in I$.

I. Mean Square Integral of the s.p. $X(t;\beta)$ with respect to the function $\psi\in C^0(I,\mathbb{R})$

By setting $\Psi(t;\beta)\equiv\psi(t)$, $t\in I$, where $\psi\in C^0(I,\mathbb{R})$, the integral (19) becomes

$$\mathcal{I}(\beta)=\int_I X(t;\beta)\,d\psi(t) \tag{24}$$

By virtue of Theorem 1.20 the above integral exists iff

• $R_{XX}(t_1,t_2)$ is integrable on $I\times I$
• $\psi(t)$ has bounded variation on $I$

In this case eq. (20) takes the form,

$$E^{\beta}\left\{\left|\int_I X(t;\beta)\,d\psi(t)\right|^2\right\}=\iint_{I\times I} R_{XX}(t_1,t_2)\,d\psi(t_1)\,d\psi(t_2) \tag{25}$$

II. Riemann–Stieltjes Integral of the function $f\in C^0(I,\mathbb{R})$ with respect to the s.p. $\Psi(t;\beta)$

By setting $X(t;\beta)\equiv f(t)$, $t\in I$, where $f\in C^0(I,\mathbb{R})$, the integral (19) becomes

$$\mathcal{I}(\beta)=\int_I f(t)\,d\Psi(t;\beta) \tag{26}$$

By virtue of Theorem 1.20 the above integral exists iff

• $f(t)$ is integrable on $I$
• $R_{\Psi\Psi}(t_1,t_2)$ has bounded variation on $I\times I$

In this case eq. (20) takes the form,

$$E^{\beta}\left\{\left|\int_I f(t)\,d\Psi(t;\beta)\right|^2\right\}=\iint_{I\times I} f(t_1)\,f(t_2)\,d^2R_{\Psi\Psi}(t_1,t_2) \tag{27}$$

III. Riemann–Stieltjes Integral of the s.p. $X(t;\beta)$ with respect to the martingale $\Psi(t;\beta)$ (Orthogonal-Increment Process)

We will study the behaviour of the quantity $\Delta^2_{h,w}R_{\Psi\Psi}(t_1,t_2)$ as $h,w\to 0$. We have

$$\Delta^2_{h,w}R_{\Psi\Psi}(t_1,t_2)=E^{\beta}\left\{\left[\Psi(t_1+h;\beta)-\Psi(t_1;\beta)\right]\cdot\left[\Psi(t_2+w;\beta)-\Psi(t_2;\beta)\right]\right\}$$

For every $t_1\neq t_2$ we can find appropriately small $h,w$ such that

$$E^{\beta}\left\{\left[\Psi(t_1+h;\beta)-\Psi(t_1;\beta)\right]\cdot\left[\Psi(t_2+w;\beta)-\Psi(t_2;\beta)\right]\right\}=0$$

On the other hand, for $t_1=t_2=t$ we will have

$$\Delta^2_{h,w}R_{\Psi\Psi}(t,t)=E^{\beta}\left\{\left[\Psi(t+h;\beta)-\Psi(t;\beta)\right]\cdot\left[\Psi(t+w;\beta)-\Psi(t;\beta)\right]\right\}=E^{\beta}\left\{\left[\Psi(t+\min(h,w);\beta)-\Psi(t;\beta)\right]^2\right\}=F(t+\min(h,w))-F(t)=\Delta_{\min(h,w)}F(t)$$

where the function $F$ is defined in Chapter 2/Section 2.2.1.c. So, the conditions for the existence of the integral are fulfilled iff

• $R_{XX}(t,t)$ is integrable on $I$
• $F(t)$ has bounded variation on $I$

In this case eq. (20) takes the form,

$$E^{\beta}\left\{\left|\int_I X(t;\beta)\,d\Psi(t;\beta)\right|^2\right\}=\int_I R_{XX}(t,t)\,dF(t) \tag{28}$$

4.1.6. M.S. Integration II – Integration over Martingales with Non-anticipative Dependence

Another important case of stochastic integral is when the stochastic process is integrated over a martingale. One special case with various applications in Mechanics is the Ito Integral which, as we shall see, is the integral of a s.p. over a Wiener process. The essential feature of the construction of this integral is that the dependence of the stochastic process on the martingale must be non-anticipative: the stochastic process may depend, at most, on the present and past values of the martingale process.

Let $\Psi(t;\beta)$ be a second-order, $\mathcal{U}_t(\Omega)$-measurable martingale. For every $t,s\in I$ we define the stochastic process $\eta(t,s;\beta)=\Psi(t;\beta)-\Psi(s;\beta)$. Using Propositions 2.39–2.40/Chapter 2, we clearly have

a) $E^{\beta}\{\eta(t,s;\beta)\}=0$, for every $t,s\in I$

b) $E^{\beta}\{[\eta(t,s;\beta)]^2\}=F(t)-F(s)<\infty$, for every $s<t$ with $t,s\in I$

c) $E^{\beta}\{\eta(t,s;\beta)\,|\,\mathcal{U}_\tau(\Omega)\}=0$, for every $t,s,\tau\in I$ with $\tau<\min(t,s)$.

Let us denote by $H_2(I)$ the class of stochastic processes $X(t;\beta)$, $t\in I$ satisfying the following conditions,

a) $X(t;\beta)$ is $\mathcal{U}_t(\Omega)$-measurable,

b) the integral $\int_I [X(t;\beta)]^2\,dt$ is finite with probability one.

Remark 1.22 : The last two conditions, together with Proposition 2.40d/Chapter 2, imply that for every $\tau\in I$ the process $X(\tau;\beta)$ is independent of the increments $\eta(t,s;\beta)$ for all $t,s\in I$ such that $\tau<\min(t,s)$.

Theorem 1.23 : Let $X(\cdot;\cdot)\in H_2(I)$. Then there is a sequence of simple random functions $X_n(\cdot;\cdot)\in H_2(I)$ such that

$$\lim_{n\to\infty}\int_I \left[X(t;\beta)-X_n(t;\beta)\right]^2 dt=0 \tag{29}$$

Proof : We will now introduce the stochastic integral over a martingale process in two stages. First we define it for a simple function. Let $X(\cdot;\cdot)\in H_2(I)$ be a simple function, i.e. $X(t;\beta)=X(t_i;\beta)$ for $t\in(t_i,t_{i+1}]$, where $\{t_i\}_{i=1}^n$ is a partition of $I$. In this case the above integral is defined as follows:

$$\int_I X(t;\beta)\,d\Psi(t;\beta)=\sum_{i=1}^{n-1} X(t_i;\beta)\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right] \tag{30}$$

Since for every function $X(\cdot;\cdot)\in H_2(I)$ there is a convergent sequence of simple functions $X_n(\cdot;\cdot)\in H_2(I)$, we form the sequence

$$\mathcal{I}_n=\int_I X_n(t;\beta)\,d\Psi(t;\beta)=\sum_{i=1}^{n-1} X_n(t_i;\beta)\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right] \tag{31}$$

Definition 1.24 : Let the stochastic process $X(\cdot;\cdot)\in H_2(I)$ and let $X_n(\cdot;\cdot)\in H_2(I)$ be a sequence of simple functions. Let also $\mathcal{I}_n$ be the sequence of random variables of eq. (31). If the limit

$$\mathcal{I}_M=\underset{\substack{n\to\infty\\ \Delta_n\to 0}}{\mathrm{l.i.m.}}\;\mathcal{I}_n \tag{32}$$

exists, the random variable $\mathcal{I}_M$ is called the integral of the stochastic process $X(\cdot;\cdot)\in H_2(I)$ over the martingale $\Psi(t;\beta)$.

Remark 1.25 : We note that for the above integral we use the values of the stochastic process at the left endpoints $t_i$. The value of the m.s. integral of a stochastic process over a martingale process depends on the choice of the intermediate points $\xi_i$. To illustrate this dependence, let us consider the two integrals on the interval $I=[a,b]$

$$\mathcal{I}_{M,0}=\underset{\substack{n\to\infty\\ \Delta_n\to 0}}{\mathrm{l.i.m.}}\sum_{i=1}^{n-1}\Psi(t_i;\beta)\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right]$$

$$\mathcal{I}_{M,1}=\underset{\substack{n\to\infty\\ \Delta_n\to 0}}{\mathrm{l.i.m.}}\sum_{i=1}^{n-1}\Psi(t_{i+1};\beta)\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right]$$

Taking the difference of the two above integrals, we have

$$\mathcal{I}_{M,1}-\mathcal{I}_{M,0}=\underset{\substack{n\to\infty\\ \Delta_n\to 0}}{\mathrm{l.i.m.}}\sum_{i=1}^{n-1}\left[\Psi(t_{i+1};\beta)-\Psi(t_i;\beta)\right]^2\xrightarrow{\text{L\'evy Property}}F(b)-F(a)$$

Remark 1.26 : An alternative approach, also used by some authors, is based on the convergence of the above limits in probability instead of in the mean-square sense (SOIZE, C., The Fokker-Planck Equation and its Explicit Steady State Solutions).
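The endpoint dependence described in Remark 1.25 can be observed directly by simulation. The sketch below uses the standard Wiener process on $[a,b]=[0,1]$, for which $F(t)=t$, so the left- and right-endpoint sums should differ on average by $F(b)-F(a)=1$. The step and path counts are illustrative choices, not prescribed by the text.

```python
# Monte Carlo illustration of Remark 1.25: for a simulated Wiener path on
# [0, 1], the left-endpoint sum I_M0 and right-endpoint sum I_M1 differ by
# the sum of squared increments, whose mean tends to F(b) - F(a) = 1.
import random

random.seed(0)
n_steps, n_paths = 1000, 500
dt = 1.0 / n_steps

diffs = []
for _ in range(n_paths):
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + random.gauss(0.0, dt ** 0.5))
    i_m0 = sum(w[i] * (w[i + 1] - w[i]) for i in range(n_steps))       # xi_i = t_i
    i_m1 = sum(w[i + 1] * (w[i + 1] - w[i]) for i in range(n_steps))   # xi_i = t_{i+1}
    diffs.append(i_m1 - i_m0)   # equals the sum of squared increments

mean_diff = sum(diffs) / n_paths
assert abs(mean_diff - 1.0) < 0.05   # F(b) - F(a) = b - a = 1
```

The two Riemann-type sums converge to different random variables (the Ito and the "backward" integral), which is why the non-anticipative left-endpoint choice must be fixed in Definition 1.24.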

Theorem 1.27 : Let $X(\cdot;\cdot)\in H_2(I)$, $I=[a,b]$ and $\Psi(t;\beta)$ a second-order, $\mathcal{U}_t(\Omega)$-measurable martingale with continuous characteristic $F(t)$. Then the integral $\mathcal{I}_M$ as defined above exists and is unique in the mean-square sense. Moreover

$$E^{\beta}\left\{\left|\int_I X(t;\beta)\,d\Psi(t;\beta)\right|^2\right\}=\int_I R_{XX}(t,t)\,dF(t)$$

Proof : By virtue of Theorem 1.6, $\mathcal{I}_n\xrightarrow{\text{m.s.}}\mathcal{I}$ iff $E^{\beta}\{\mathcal{I}_n\mathcal{I}_m\}$ converges to a finite limit as $n,m\to\infty$. Thus we have from eq. (31),

$$E^{\beta}\left\{\mathcal{I}_n\,\mathcal{I}_m\right\}=\sum_{j=1}^{m-1}\sum_{i=1}^{n-1}E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_j^m;\beta)\right]\left[\Psi(t_{i+1}^n;\beta)-\Psi(t_i^n;\beta)\right]\right\} \tag{33}$$

It is to be evaluated over the region $[a,b]\times[a,b]$, as indicated in the figure below. Each term in the sum is to be evaluated over a rectangle such as $R_1$ or $R_2$. In evaluating these terms, we distinguish two types of rectangles: those containing a segment of the diagonal line $t^n=t^m$, such as $R_1$, and those containing at most one point of the diagonal line, such as $R_2$.

[Figure: the partition of $[a,b]\times[a,b]$ by the grids $\{t_i^n\}$ and $\{t_j^m\}$, showing a rectangle $R_1$ crossing the diagonal and an off-diagonal rectangle $R_2$.]

For the second type, a typical situation is one where $t_j^m<t_{j+1}^m\leq t_i^n<t_{i+1}^n$. This implies that $\left[\Psi(t_{i+1}^n;\beta)-\Psi(t_i^n;\beta)\right]$ is independent of $X(\xi_i^n;\beta)$, $X(\xi_j^m;\beta)$ and $\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_j^m;\beta)\right]$. Hence

$$E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_j^m;\beta)\right]\left[\Psi(t_{i+1}^n;\beta)-\Psi(t_i^n;\beta)\right]\right\}=0$$

Hence, the contributions of all the terms in eq. (33) over rectangles of the second type are zero. Turning now to rectangles of the first type, consider a typical situation where $t_j^m<t_i^n<t_{j+1}^m<t_{i+1}^n$. It is clear that we have

$$E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_j^m;\beta)\right]\left[\Psi(t_{i+1}^n;\beta)-\Psi(t_i^n;\beta)\right]\right\}=E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\left[\Psi(t_{j+1}^m;\beta)-\Psi(t_i^n;\beta)\right]^2\right\}=E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\right\}\cdot\left[F(t_{j+1}^m)-F(t_i^n)\right]$$

Thus,

$$E^{\beta}\left\{\mathcal{I}_n\,\mathcal{I}_m\right\}=\sum_{j=1}^{m-1}\sum_{i=1}^{n-1}E^{\beta}\left\{X(t_i^n;\beta)X(t_j^m;\beta)\right\}\cdot\left[F(t_{j+1}^m)-F(t_i^n)\right]$$

So, we have the same summation as in case III of the previous paragraph. Since $F(t)$ is continuous, it has finite variation on $I$. Moreover $X(\cdot;\cdot)\in H_2(I)$, so $R_{XX}(t,t)$ is integrable on $I$.

Some basic properties of the integral of a s.p. over a martingale are as follows,

Theorem 1.28 : Let $X_i(\cdot;\cdot)\in H_2(I)$ and let $\Psi_i(t;\beta)$, $t\in I$, $I=[a,b]$, be a second-order, $\mathcal{U}_t(\Omega)$-measurable martingale with continuous characteristic $F(t)$. Then

a) $\displaystyle E^{\beta}\left\{\int_I X(t;\beta)\,d\Psi(t;\beta)\right\}=0$

b) $\displaystyle\int_I\left[a_1X_1(t;\beta)+a_2X_2(t;\beta)\right]d\Psi(t;\beta)=a_1\int_I X_1(t;\beta)\,d\Psi(t;\beta)+a_2\int_I X_2(t;\beta)\,d\Psi(t;\beta)$

c) $\displaystyle\int_I X(t;\beta)\,d\left[a_1\Psi_1(t;\beta)+a_2\Psi_2(t;\beta)\right]=a_1\int_I X(t;\beta)\,d\Psi_1(t;\beta)+a_2\int_I X(t;\beta)\,d\Psi_2(t;\beta)$

d) $\displaystyle\int_I X(t;\beta)\,d\Psi(t;\beta)=\Big[X(t;\beta)\,\Psi(t;\beta)\Big]_a^b-\int_I X'(t;\beta)\,\Psi(t;\beta)\,dt$, if $X(\cdot;\cdot)$ is m.s. differentiable.

e) $\displaystyle\int_\alpha^\tau \chi_{[a,b]}(t)\,d\Psi(t;\beta)=\Psi(\tau;\beta)-\Psi(\alpha;\beta)$

f) The integral $\mathcal{I}_M$ of $X(t;\beta)$ over $\Psi(t;\beta)$ is also a martingale, with characteristic

$$G(\tau)=\int_\alpha^\tau R_{XX}(t,t)\,dF(t)$$

g) The sample functions of the s.p. $\mathcal{I}_M(\tau;\beta)=\displaystyle\int_\alpha^\tau X(t;\beta)\,d\Psi(t;\beta)$ are continuous with probability one.

Proof : Assertions a), b) and c) follow directly, by considering eq. (31). The proofs of d) and e) can be found in SOBCZYK, K., (Stochastic Differential Equations, pp. 111 and 108 respectively). Proofs of f) and g) can be found in GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, pp. 54 and 57 respectively). An alternative proof of the last assertion could be based on direct application of the Kolmogorov Theorem together with the mean value theorem for integrals.

Remark 1.29 : As we have mentioned at the beginning, a very important case of stochastic integral for applications is the Ito Integral. For this case the martingale is a standard Wiener process, $\mathcal{U}_t(\Omega)$-measurable. The Ito Integral can be defined for every process $X(\cdot;\cdot)\in H_2(I)$, since the characteristic of the standard Wiener process, $F(t)=t$, $t\in I$, has bounded variation on $I$. Of course, all the properties derived above for the general case of a martingale process hold for the Ito Integral.

4.1.6.a. A Formula for Changing Variables (Ito Formula)

In this section analogs and corollaries of the chain rule of differentiation of composite functions are studied in the case when differentiation is interpreted as the inverse operation of stochastic integration. The results obtained are of importance in what follows.

Theorem 1.30 [Ito Formula] : Let

1. $\psi_0(\beta)$ be a second-order random variable,
2. $a(t;\beta)$, $t\in I$ be a second-order stochastic process whose correlation function $R_{aa}(t_1,t_2)$ has bounded variation on $I\times I$,
3. $\mu(t;\beta)$, $t\in I$ be a second-order, $\mathcal{U}_t(\Omega)$-measurable martingale with continuous characteristic $F(t)$, $t\in I$,
4. $\Psi(t;\beta)=\psi_0(\beta)+a(t;\beta)+\mu(t;\beta)$, $t\in I$.

We also assume that $f(x)$, $x\in\mathbb{R}$, is a twice continuously differentiable function. Then

$$f(\Psi(t;\beta))=f(\psi_0(\beta))+\int_I\frac{df}{dx}(\Psi(s;\beta))\,d\Psi(s;\beta)+\frac{1}{2}\int_I\frac{d^2f}{dx^2}(\Psi(s;\beta))\,dF(s) \tag{34}$$

Proof : For the proof we refer to GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 70).
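For the Wiener case ($a\equiv 0$, $\mu=W$, $F(t)=t$) the Ito formula can be checked numerically for $f(x)=x^2$, where it reads $W(T)^2 = 2\int_0^T W\,dW + T$. The sketch below is only an illustration; the step count and path count are hypothetical discretisation choices.

```python
# Pathwise check of the Ito formula for f(x) = x^2 and a simulated standard
# Wiener path on [0, T] with F(t) = t:
#   f(W(T)) = f(0) + int_0^T 2 W dW + (1/2) int_0^T 2 ds
# so the residual  W(T)^2 - 2 * (Ito sum of W dW) - T  should vanish on average.
import random

random.seed(1)
T, n, n_paths = 1.0, 1000, 500
dt = T / n

residuals = []
for _ in range(n_paths):
    w = 0.0
    ito_sum = 0.0
    for _ in range(n):
        dw = random.gauss(0.0, dt ** 0.5)
        ito_sum += w * dw          # left endpoint: non-anticipative evaluation
        w += dw
    residuals.append(w * w - 2.0 * ito_sum - T)

mean_res = sum(residuals) / n_paths
assert abs(mean_res) < 0.05
```

The second-derivative term $\tfrac12\int f''\,dF = T$ is exactly the correction that ordinary calculus misses; dropping the `- T` above makes the residual concentrate near $T$ instead of $0$.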

Corollary 1.31 : If $f(x,t)$, $x\in\mathbb{R}$, $t\in I$, is continuously differentiable with respect to $t\in I$ and twice continuously differentiable with respect to $x\in\mathbb{R}$, then

$$f(\Psi(t;\beta),t)-f(\psi_0(\beta),t_0)=a(t;\beta)+b(t;\beta) \tag{35}$$

where $t_0$ denotes the initial point of $I$,

$$a(t;\beta)=\int_I\frac{\partial f}{\partial t}(\Psi(s;\beta),s)\,ds+\int_I\frac{\partial f}{\partial x}(\Psi(s;\beta),s)\,da(s;\beta)+\frac{1}{2}\int_I\frac{\partial^2 f}{\partial x^2}(\Psi(s;\beta),s)\,dF(s)$$

and

$$b(t;\beta)=\int_I\frac{\partial f}{\partial x}(\Psi(s;\beta),s)\,d\mu(s;\beta)$$

Proof : For the proof we refer to GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 71).

Concerning the special case of the Ito Integral, i.e. integration over a standard Wiener process, we have the following result

Corollary 1.32 : Let $W(t;\beta)$, $t\in I$ be a standard Wiener process, $\mathcal{U}_t(\Omega)$-measurable, and let $f(x)$, $x\in\mathbb{R}$, be a twice continuously differentiable function. Then

$$f(W(t;\beta))=f(W(0;\beta))+\int_I\frac{df}{dx}(W(s;\beta))\,dW(s;\beta)+\frac{1}{2}\int_I\frac{d^2f}{dx^2}(W(s;\beta))\,ds \tag{36}$$

Proof : Direct application of Theorem 1.30.

4.1.6.b. Stochastic Differentials

Definition 1.33 : Let

1. $a(t;\beta)$, $t\in I=[a,b]$ be a second-order stochastic process whose correlation function $R_{aa}(t_1,t_2)$ has bounded variation on $I\times I$,
2. $\mu(t;\beta)$, $t\in I$ be a second-order, $\mathcal{U}_t(\Omega)$-measurable martingale with continuous characteristic $F(t)$, $t\in I$.


We say that a stochastic process $X(t;\beta)$, $t\in I$ possesses a stochastic differential (of a continuous type)

$$dX(t;\beta)=\varphi\,da+\psi\,d\mu=\varphi(t;\beta)\,da(t;\beta)+\psi(t;\beta)\,d\mu(t;\beta),\qquad t\in I \tag{37}$$

where $\psi(\cdot;\cdot)\in H_2(I)$ and the process $\varphi(t;\beta)$ is progressively measurable with respect to $\mathcal{U}_t(\Omega)$ with realizations that are (with probability one) bounded functions, iff

$$X(t;\beta)=X(a;\beta)+\int_a^t\varphi(s;\beta)\,da(s;\beta)+\int_a^t\psi(s;\beta)\,d\mu(s;\beta) \tag{38}$$

for each and every $t\in I$.

Remark 1.34 : We recall that a stochastic process $\varphi(t;\beta)$, $t\in I=[a,b]$ is progressively measurable iff the restriction of the function $\varphi(t;\beta)$ to the set $[a,s]\times\Omega$ is $\mathcal{U}_s(I)\times\mathcal{U}_s(\Omega)$-measurable, i.e. iff

$$\left\{(t,\beta):\varphi(t;\beta)\in B,\ t\in[a,s]\right\}\in\mathcal{U}_s(I)\times\mathcal{U}_s(\Omega) \tag{39}$$

for every $B\in\mathcal{U}(D(T))$ and $s\in T$.

Clearly a process possessing a stochastic differential of continuous type has a continuous modification. In what follows we shall consider just this kind of modification of the process $\Psi(t;\beta)$, $t\in I$. Theorem 1.30 can be formulated using the concept of a stochastic differential in the following manner.

If the process $\Psi(t;\beta)$, $t\in I$ possesses the stochastic differential $d\Psi(t;\beta)=da(t;\beta)+d\mu(t;\beta)$, and the function $f(x,t)$, $x\in\mathbb{R}$, $t\in I$, is continuously differentiable with respect to $t\in I$ and twice continuously differentiable with respect to $x\in\mathbb{R}$, then the process $X(t;\beta)=f(\Psi(t;\beta),t)$, $t\in I$ also possesses a stochastic differential and, moreover, for every $t\in I$,

$$dX(t;\beta)=\frac{\partial f}{\partial t}(\Psi(t;\beta),t)\,dt+\frac{\partial f}{\partial x}(\Psi(t;\beta),t)\,da(t;\beta)+\frac{1}{2}\frac{\partial^2 f}{\partial x^2}(\Psi(t;\beta),t)\,dF(t)+\frac{\partial f}{\partial x}(\Psi(t;\beta),t)\,d\mu(t;\beta)=\frac{\partial f}{\partial t}(\Psi(t;\beta),t)\,dt+\frac{\partial f}{\partial x}(\Psi(t;\beta),t)\,d\Psi(t;\beta)+\frac{1}{2}\frac{\partial^2 f}{\partial x^2}(\Psi(t;\beta),t)\,dF(t) \tag{40}$$

This implies

Theorem 1.35 : If the stochastic processes $\Psi_i(t;\beta)$, $t\in I$ (we write $\Psi_i$ for simplicity) possess stochastic differentials $d\Psi_i(t;\beta)=da_i(t;\beta)+d\mu_i(t;\beta)$, then

$$d\left(\Psi_1+\Psi_2\right)=d\Psi_1+d\Psi_2 \tag{41}$$

$$d\left(\Psi_1\cdot\Psi_2\right)=\Psi_1\,d\Psi_2+\Psi_2\,d\Psi_1+dF_{12}(t) \tag{42}$$

$$d\!\left(\frac{\Psi_1}{\Psi_2}\right)=\frac{\Psi_2\,d\Psi_1-\Psi_1\,d\Psi_2}{\Psi_2^2}+\frac{\Psi_1\,dF_2(t)-\Psi_2\,dF_{12}(t)}{\Psi_2^3} \tag{43}$$

$$d\!\left(e^{u\Psi}\right)=u\,e^{u\Psi}\,d\Psi+\frac{1}{2}u^2 e^{u\Psi}\,dF(t) \tag{44}$$

where $F_{12}(t)$ is the joint characteristic of $\mu_1(t;\beta)$ and $\mu_2(t;\beta)$, i.e.

$$F_{12}(t)=\frac{1}{2}\left[F_S^{(1,2)}(t)-F_1(t)-F_2(t)\right]$$

with $F_S^{(1,2)}(t)$ denoting the characteristic of the sum $\mu_1(t;\beta)+\mu_2(t;\beta)$. The formula for $d(\Psi_1/\Psi_2)$ is applicable provided $\Psi_2(t;\beta)\geq\delta>0$.

Proof : The formulas follow directly from Theorem 1.30 when applied successively to the functions $x_1+x_2$, $x_1\cdot x_2$, $x_1/x_2$ and $e^{ux}$.


4.2. Analytical Properties of Sample Functions

Second-order properties of a stochastic process, formulated by use of m.s.-convergence in terms of correlation functions, do not provide, in general, sufficient information about the behavior of the realizations of the process. This fact can be illustrated by the Poisson process which – as we have seen in Section 2.2.1.c/Chapter 2 – is m.s.-continuous, although its sample functions are discontinuous. It is therefore of interest to throw some light on the analysis and the results associated with sample function properties. As we mentioned at the beginning of the chapter, we assume the existence of a probability space $(\Omega,\mathcal{U}(\Omega),\mathcal{P})$ and a stochastic process $X:\Omega\to D(T)$ defined on it, with induced probability space $(D(T),\mathcal{U}(D(T)),\mathcal{P}_{D(T)})$.

4.2.1. Sample Function Integration

In the theory and applications we often deal with the ordinary Lebesgue integral of the realizations of a stochastic process. We now recall the definition of a measurable stochastic process.

Definition 2.1 : A stochastic process $X(t;\beta)$, $t\in I$ is a measurable stochastic process iff it is measurable with respect to the product measure $\mathcal{P}\times\ell_I$, where $\ell_I$ is the Lebesgue measure on $I$.

The mathematical basis for sample function integration is given by a theorem following directly from the well-known Fubini Theorem.

Theorem 2.2 : Let $X(t;\beta)$, $t\in I$ be a measurable stochastic process. Then almost all realizations of $X(t;\beta)$ are measurable with respect to the Lebesgue measure $\ell_I$ on $I$. If $E^{\beta}\{X(t;\beta)\}$ exists for $t\in I$, then this average defines an $\ell_I$-measurable function of $t\in I$. If $A$ is a measurable set of $I$ and

$$\int_A E^{\beta}\left\{\left|X(t;\beta)\right|\right\}dt<\infty \tag{1}$$

then almost all realizations of $X(t;\beta)$ are Lebesgue integrable on $A$ and

$$E^{\beta}\left\{\int_A X(t;\beta)\,dt\right\}=\int_A E^{\beta}\left\{X(t;\beta)\right\}dt \tag{2}$$

Proof : According to the assumption, $X(t;\beta)$, $t\in I$ is a measurable stochastic process. So the Fubini Theorem implies that the section $X(\cdot;\beta)$ is, for almost all $\beta$, a measurable function of $t\in I$. The Fubini Theorem also implies that if $E^{\beta}\{X(t;\beta)\}$ exists, then it is a measurable function of $t$. Since the average of $X(t;\beta)$ is the integral

$$E^{\beta}\left\{X(t;\beta)\right\}=\int_\Omega X(t;\beta)\,d\mathcal{P}(\beta)$$

assumption (1) asserts that the double integral (first with respect to $\beta$, then with respect to $t$) of $X(t;\beta)$ is finite. This double integral taken in the opposite order is also finite; therefore the integral

$$\int_A X(t;\beta)\,dt$$

is finite for almost all $\beta\in\Omega$. This means that almost all realizations of $X(t;\beta)$ are Lebesgue integrable on $A$. The Fubini Theorem implies that the value of a double integral is independent of the order of integration; so we obtain (2).

Let us note that the basic assumption of the above theorem requires the process $X(t;\beta)$ to be measurable. The following theorem gives a condition for an arbitrary stochastic process $X(t;\beta)$ to be equivalent to a measurable process $\tilde{X}(t;\beta)$.
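The interchange of averaging and time integration in eq. (2) can be made concrete on a finite sample space, where both sides are computed exactly. The process $X(t;\beta)=\beta t^2$ with $\beta$ uniform on $\{1,\dots,6\}$ below is a hypothetical illustration, not an example from the text.

```python
# Finite-sample-space illustration of Theorem 2.2, eq. (2): for
# X(t; beta) = beta * t^2 with beta uniform on {1, ..., 6} and A = [0, 1],
# the order of averaging and time integration can be exchanged.
from fractions import Fraction

omega = [Fraction(k) for k in range(1, 7)]   # elementary events beta
prob = Fraction(1, 6)                        # uniform probability

def time_integral(beta):
    # Exact: int_0^1 beta * t^2 dt = beta / 3
    return beta / 3

# Left side of (2):  E^beta { int_A X(t; beta) dt }
lhs = sum(prob * time_integral(b) for b in omega)

# Right side of (2): int_A E^beta { X(t; beta) } dt = E[beta] * int_0^1 t^2 dt
mean_beta = sum(prob * b for b in omega)     # = 7/2
rhs = mean_beta / 3

assert lhs == rhs == Fraction(7, 6)
```

Exact rational arithmetic makes the equality of the two orders of integration an identity rather than a numerical approximation.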

Theorem 2.3 : If for almost all $t$ (with respect to the measure $\ell_I$) a stochastic process $X(t;\beta)$ is continuous in probability, then there exists a measurable and separable stochastic process $\tilde{X}(t;\beta)$ which is stochastically equivalent to $X(t;\beta)$.

Proof : See DOOB, J.L., 1953, Stochastic Processes. John Wiley & Sons.

Remark 2.4 : The condition of this theorem (continuity of $X(t;\beta)$ in probability) is not very restrictive. It is satisfied, for instance, by all processes whose correlation function is continuous.

Remark 2.5 : If $f(t)$ is a non-random Lebesgue measurable function defined on $I$ with values in $\mathbb{R}$ and $X(t;\beta)$ is a measurable stochastic process, then the sample function integral

$$\int_I f(t)\,X(t;\beta)\,dt \tag{3}$$

exists, and by virtue of the Fubini Theorem

$$E^{\beta}\left\{\int_I f(t)X(t;\beta)\,dt\int_I f(s)X(s;\beta)\,ds\right\}=\int_I\int_I f(t)f(s)\,E^{\beta}\left\{X(s;\beta)X(t;\beta)\right\}dt\,ds=\int_I\int_I f(t)f(s)\,R_{XX}(t,s)\,dt\,ds \tag{4}$$

4.2.2. Sample Function Continuity

We shall now study the conditions that a s.p. must satisfy so that its realizations are continuous. The following theorem gives a sufficient condition for a s.p. to have continuous realizations.

Theorem 2.6 : Let $X(t;\beta)$, $t\in I$ be a separable stochastic process. If for all $t,t+h$ in the interval $I$

$$\mathcal{P}\left(\left|X(t+h;\beta)-X(t;\beta)\right|\geq g(h)\right)\leq q(h) \tag{5}$$

where $g(h)$ and $q(h)$ are even functions of $h$, non-increasing as $h\to 0$, and such that

$$\sum_{n=1}^{\infty}g(2^{-n})<\infty\quad\text{and}\quad\sum_{n=1}^{\infty}2^n q(2^{-n})<\infty \tag{6}$$

then almost all realizations of the process $X(t;\beta)$ are continuous on $I$.

Proof : See LOÈVE, M., (Probability Theory II).

The above theorem and the Tchebycheff Inequality

$$\mathcal{P}\left(\left|X(\beta)\right|\geq k\right)\leq\frac{E^{\beta}\left\{\left|X(\beta)\right|^n\right\}}{k^n}$$

imply the following corollary, which constitutes a useful sufficient condition for the validity of the theorem.

Corollary 2.7 [Kolmogorov Theorem] : Let $X(t;\beta)$, $t\in I$ be a separable stochastic process such that for a certain number $p>0$ and for all $t,t+h$ in the interval $I$

$$E^{\beta}\left\{\left|X(t+h;\beta)-X(t;\beta)\right|^p\right\}\leq\rho(h) \tag{7}$$

where either

a) $\rho(h)=c\left|h\right|^{1+r}$, $c>0$, $r>0$, or

b) $\rho(h)=\dfrac{c\left|h\right|}{\left|\log\left|h\right|\right|^{1+r}}$, $r>p$,

then almost all realizations of the process $X(t;\beta)$ are continuous on $I$.

Proof : After application of the Tchebycheff Inequality, condition (5) will be satisfied with

$$q(h)=\frac{\rho(h)}{\left[g(h)\right]^p}$$

Taking in case a) $g(h)=\left|h\right|^a$, $0<a<r/p$, one obtains $q(h)=c\left|h\right|^{1+r-ap}$. Taking in case b) $g(h)=\left|\log\left|h\right|\right|^{-\beta}$, $1<\beta<r/p$, one gets $q(h)=c\left|h\right|\left|\log\left|h\right|\right|^{\beta p-1-r}$. These expressions for $q(h)$ agree with the requirements of the theorem.

Example 2.8 : Simple calculations show that for the Wiener process

$$\mathcal{P}\left(W(t;\beta)-W(s;\beta)<z\right)=\frac{1}{\sqrt{2\pi\left(t-s\right)}}\int_{-\infty}^{z}\exp\left\{-\frac{\xi^2}{2\left(t-s\right)}\right\}d\xi$$

and

$$E^{\beta}\left\{\left|W(t;\beta)-W(s;\beta)\right|^4\right\}=\frac{1}{\sqrt{2\pi\left(t-s\right)}}\int_{-\infty}^{+\infty}\xi^4\exp\left\{-\frac{\xi^2}{2\left(t-s\right)}\right\}d\xi=3\left(t-s\right)^2$$

which means that,

$$E^{\beta}\left\{\left|W(t+h;\beta)-W(t;\beta)\right|^4\right\}=3h^2$$

Therefore Kolmogorov condition a) is satisfied with constants $c=3$, $p=4$, $r=1$. This proves that almost all realizations of the Wiener process are continuous on each finite interval.
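The moment computation above is easy to confirm by simulation: a Wiener increment over a step $h$ is $N(0,h)$-distributed, so its fourth moment should be close to $3h^2$. The step $h$ and sample size below are illustrative choices.

```python
# Monte Carlo check of Example 2.8: for a Wiener increment over a step h,
# E |W(t+h) - W(t)|^4 = 3 h^2, i.e. Kolmogorov condition a) with
# c = 3, p = 4, r = 1.
import random

random.seed(2)
h = 0.25
n_samples = 200_000

# Increments W(t+h) - W(t) are N(0, h); estimate the fourth moment.
m4 = sum(random.gauss(0.0, h ** 0.5) ** 4 for _ in range(n_samples)) / n_samples

assert abs(m4 - 3 * h * h) < 0.02   # 3 h^2 = 0.1875
```

Since the bound scales like $h^{1+r}$ with $r=1>0$, the Kolmogorov criterion applies even though Wiener sample paths are nowhere differentiable.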

If sample functions of a s.p. are discontinuous it is of interest to know whether the discontinuities are of the first kind only (finite jumps). The following theorem holds.

Theorem 2.9 [Chentsov] : Let $X(t;\beta)$, $t\in I$ be a separable stochastic process. If for $t_1<t_2<t_3$ with $t_1,t_2,t_3\in I$ and $t_3-t_1=h$

$$E^{\beta}\left\{\left|\left[X(t_3;\beta)-X(t_2;\beta)\right]\left[X(t_2;\beta)-X(t_1;\beta)\right]\right|^p\right\}\leq c\,h^{1+r} \tag{8}$$

where $p,r,c$ are positive constants, then almost all realizations of $X(t;\beta)$ do not have discontinuities of the second kind.

Proof : See CRAMÉR, H. & LEADBETTER, M.R., Stationary and Related Stochastic Processes. Wiley, NY.

Remark 2.10 : The conditions of the above theorem are, in particular, satisfied for a process with independent increments such that for any $t,t+h\in I$,

$$E^{\beta}\left\{\left|X(t+h;\beta)-X(t;\beta)\right|^2\right\}\leq A\left|h\right| \tag{9}$$

where $A$ is a constant. It can be verified that the Poisson process satisfies this condition.

4.2.3. Sample Function Differentiation

We shall now state a general theorem giving a sufficient condition in order that almost all realizations of a process be continuously differentiable.

Theorem 2.11 : Let $X(t;\beta)$, $t\in I$ be a separable stochastic process. If for all $t,t+h$ in the interval $I$ the conditions of Theorem 2.6 are satisfied, and for all $t-h,t,t+h$ in the interval $I$

$$\mathcal{P}\left(\left|X(t+h;\beta)+X(t-h;\beta)-2X(t;\beta)\right|\geq g_1(h)\right)\leq q_1(h) \tag{10}$$

where $g_1(h)$ and $q_1(h)$ are even functions of $h$, non-increasing as $h\to 0$, and such that

$$\sum_{n=1}^{\infty}2^n g_1(2^{-n})<\infty\quad\text{and}\quad\sum_{n=1}^{\infty}2^n q_1(2^{-n})<\infty \tag{11}$$

then almost all realizations of the process $X(t;\beta)$ have continuous derivatives on $I$.

Proof : See LOÈVE, M., (Probability Theory II).

4.2.4. Relation to Second-Order Properties

In the sequel we assume that $X(t;\beta)$, $t\in I$ is a second-order stochastic process. We shall now study the mutual connection between the analytical properties of the sample functions of a process and its second-order properties.

Let us first consider integration of the process $X(t;\beta)$, $t\in I$. In general it may happen that the realizations of a m.s. integrable s.p. are non-integrable. But even if the realizations are integrable, one should still distinguish between the integrals

$$\mathcal{I}=\text{m.s.}\int_I X(t;\beta)\,dt \tag{12}$$

$$\mathcal{I}^{*}=\int_I X(t;\beta)\,dt \tag{13}$$

where $\mathcal{I}$ is the m.s. Riemann integral of $X(t;\beta)$, that is, a random variable defined as the m.s. limit of appropriate approximating integral sums, and $\mathcal{I}^{*}$ is a function making a correspondence between the elementary event $\beta\in\Omega$ and the Lebesgue integral of the corresponding realization. In general, without an assumption on the measurability of the process, the integral $\mathcal{I}^{*}$ need not be a random variable. The following theorem shows the relation between the two integrals.

Theorem 2.12 : Let $X(t;\beta)$, $t\in I$ be a measurable and m.s.-continuous stochastic process. Then the integrals $\mathcal{I}$ and $\mathcal{I}^{*}$ exist and they are equal with probability one.

Proof : The existence of the integrals $\mathcal{I}$ and $\mathcal{I}^{*}$ follows directly from the assumptions. To show the equality with probability one it is sufficient to show that

$$E^{\beta}\left\{\left(\mathcal{I}_n-\mathcal{I}^{*}\right)^2\right\}\xrightarrow{n\to\infty}0 \tag{14}$$

where $\mathcal{I}_n$ denotes an approximating sum of $\mathcal{I}$. Of course,

$$E^{\beta}\left\{\left(\mathcal{I}_n-\mathcal{I}^{*}\right)^2\right\}=E^{\beta}\left\{\mathcal{I}_n^2\right\}+E^{\beta}\left\{\left(\mathcal{I}^{*}\right)^2\right\}-2E^{\beta}\left\{\mathcal{I}_n\,\mathcal{I}^{*}\right\}$$

By virtue of eq. (1.25),

$$E^{\beta}\left\{\mathcal{I}_n^2\right\}\to\iint_{I\times I}R_{XX}(t_1,t_2)\,dt_1\,dt_2$$

Making use of (4) one has

$$E^{\beta}\left\{\left(\mathcal{I}^{*}\right)^2\right\}=\iint_{I\times I}R_{XX}(t_1,t_2)\,dt_1\,dt_2$$

It can easily be shown that $E^{\beta}\{\mathcal{I}_n\,\mathcal{I}^{*}\}$ approaches the same quantity, equal to $E^{\beta}\{(\mathcal{I}^{*})^2\}$, as $n\to\infty$.

Remark 2.13 : It suffices to assume that the process $X(t;\beta)$, $t\in I$ is m.s.-continuous. Then, by virtue of Theorem 2.3, the process $X(t;\beta)$, $t\in I$ has a stochastically equivalent measurable representation.

Let us now consider the questions associated with the continuity of a stochastic process $X(t;\beta)$, $t\in I$.

Theorem 2.14 : Let $X(t;\beta)$, $t\in I$ be a separable second-order stochastic process, and let there exist constants $C>0$ and $r>0$ such that, for every $t_1,t_2\in I$

$$R_{XX}(t_2,t_2)+R_{XX}(t_1,t_1)-2R_{XX}(t_1,t_2)\leq C\left|t_2-t_1\right|^{1+r} \tag{15}$$

Then almost all realizations of $X(t;\beta)$, $t\in I$ are continuous on $I$.

Proof : The proof is a direct consequence of the Kolmogorov Theorem with $p=2$.

Remark 2.15 : Actually it is sufficient for the correlation function to have a second-order mixed derivative $\dfrac{\partial^2 R_{XX}(t_1,t_2)}{\partial t_1\,\partial t_2}$. Using the mean value theorem, this condition can easily be shown to be equivalent to condition (15).

For the specific case of weakly stationary stochastic processes, there is the following

Theorem 2.16 : Let $X(t;\beta)$, $t\in I$ be a weakly stationary, m.s.-differentiable process. Then a stochastically equivalent representation with continuous realizations exists.

Proof : Since $X(t;\beta)$, $t\in I$ is a weakly stationary, m.s.-differentiable stochastic process, we have

$$R_{XX}(0)-R_{XX}(\tau)\leq C\tau^2$$

Thus the Kolmogorov condition a) takes the form ($p=2$)

$$E^{\beta}\left\{\left|X(t+\tau;\beta)-X(t;\beta)\right|^2\right\}\leq 2C\tau^2$$

So we have proved the assertion.

We finish this section by presenting analogous results for differentiation of realizations of a stochastic process ( );X t β , t I∈ .

Theorem 2.13 : Let $X(t;\beta)$, $t \in I$ be a separable second-order stochastic process, and suppose there exists a finite derivative

$$\frac{\partial^{2n+2} R_{XX}(t_1, t_2)}{\partial t_1^{\,n+1}\, \partial t_2^{\,n+1}}$$

Then almost all realizations of $X(t;\beta)$ are $n$-times differentiable.

Proof : See SOBCZYK, K., (Stochastic Differential Equations).


116 CHAPTER 4 STOCHASTIC CALCULUS – PRINCIPLES AND RESULTS

4.3. References

ASH, R.B., 2000, Probability and Measure Theory. Academic Press.
ATHANASSOULIS, G.A., 2002, Stochastic Modeling and Forecasting of Ship Systems. Lecture Notes, NTUA.
CRAMÉR, H. & LEADBETTER, M.R., 1967, Stationary and Related Stochastic Processes. John Wiley & Sons, NY.
DOOB, J.L., 1953, Stochastic Processes. John Wiley & Sons.
FRISTEDT, B. & GRAY, L., 1997, A Modern Approach to Probability Theory. Birkhäuser.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1996, Theory of Random Processes. Dover Publications.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1974, The Theory of Stochastic Processes I. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes II. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes III. Springer.
KLIMOV, G., 1986, Probability Theory and Mathematical Statistics. MIR Publishers.
LOÈVE, M., 1977, Probability Theory II. Springer.
PROHOROV, YU.V. & ROZANOV, YU.A., 1969, Probability Theory. Springer.
PUGACHEV, V.S. & SINITSYN, 1987, Stochastic Differential Systems. John Wiley & Sons.
SOBCZYK, K., 1991, Stochastic Differential Equations. Kluwer Academic Publishers.
SOIZE, C., 1994, The Fokker-Planck Equation and its Explicit Steady State Solutions. World Scientific.
SOONG, T.T., 1973, Random Differential Equations. Academic Press.
VULIKH, B.Z., 1976, A Brief Course in the Theory of Functions of a Real Variable. MIR Publishers.
YEH, J., 1995, Martingales and Stochastic Analysis. World Scientific.



Chapter 5

Probabilistic Analysis of the Responses of Dynamical Systems under Stochastic Excitation

As we saw in Chapter 3, the characteristic functional gives a complete description of a stochastic process. In this chapter we shall see how we can derive functional differential equations for the characteristic functional of a continuous-time stochastic process whose sample functions satisfy a given ordinary differential equation. This fact is of great importance, since we can formulate the full statistical problem as a single equation. We must also note that, from this equation, all the statistical information of the problem can be recovered. Hence, from the functional differential equation we can, for example, derive the moment equations associated with the problem.

The above formulation, although it contains the whole statistical information, is not yet of practical importance. The difficulty comes from the functional differential equation itself, since up to the present time very little has been done to solve this kind of equation. On the other hand, very few characteristic functions can be generalized to infinite-dimensional Hilbert spaces so as to give representations of the characteristic functional.

In Section 5.1 we introduce the notion of a stochastic differential equation and prove some general theorems concerning the existence and uniqueness of solutions for equations of this kind. Both existence and uniqueness are proved in the sense of probability measure. Moreover, we study conditions for the boundedness of all moments of the probability measure that describes the system (ODE) response. Additionally, we investigate the dependence of the solution of the system on a parameter appearing in the ODE. The analysis follows the steps of GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III).

Section 5.2 deals with the formulation of functional differential equations for the joint characteristic functional that describes the joint probability structure of the excitation and the response. The formulation concerns stochastic systems described by nonlinear ODEs with polynomial nonlinearities. We distinguish between the case of an m.s.-continuous stochastic excitation and that of an orthogonal-increment excitation. Before the general results we present two specific systems, the Duffing oscillator and the Van der Pol oscillator.

In the next Section 5.3 we show how the general functional differential equation describing the probability structure of a system can be reduced to partial differential equations for the characteristic functions of various orders. Special cases of this reduction are the general form of the Fokker-Planck equation as well as the Liouville equation. Of great importance are the partial differential equations proved in Section 5.3.4, which describe the probability structure of the system for the case of m.s.-continuous excitation.

In the last Section 5.5 we introduce the notion of the kernel characteristic functional, which generalizes the corresponding idea of kernel density functions to infinite-dimensional spaces. We prove that the Gaussian characteristic functional has the kernel property, based on existing



theorems for extreme values of stochastic processes. In Section 5.4.4 we present how kernel characteristic functionals can be used efficiently for the solution of functional differential equations. The basic idea is localization in the physical phase-space domain (of the probability measure), expressed in terms of the characteristic functional. We must note that Gaussian measures are of great importance for these constructions, since the analytical calculations concerning the infinite-dimensional integrals involved can be carried out explicitly for Gaussian measures. In this way we derive a set of partial differential equations that governs the functional parameters of the kernel characteristic functionals. Finally, in Section 5.4.5 we make special assumptions in order to derive a more simplified set of equations. Moreover, a simple nonlinear ODE is studied and numerical results are presented.



5.1. Stochastic Differential Equations – General Theory

In the present section we introduce the notion of a stochastic differential equation and prove some general theorems concerning the existence and uniqueness of solutions of these equations. For this purpose it is necessary to generalize the notion of a stochastic integral introduced above. The following analysis is based on the approach of GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III).

5.1.1. General Problems of the Theory of Stochastic Differential Equations

Assume that we are dealing with the motion of a system $\Sigma$ in the phase space $\mathbb{R}^N$ and let $\mathbf{x}(t) = \big(x_1(t), x_2(t), \ldots, x_N(t)\big)$ denote the location of this system in $\mathbb{R}^N$ at time $t$. Assume also that a displacement of the system $\Sigma$ located at the point $\mathbf{x}$ during the time interval $(t, t+\Delta t)$ can be represented in the form

$$\mathbf{x}(t+\Delta t) - \mathbf{x}(t) = \mathbf{A}(\mathbf{x}, t+\Delta t) - \mathbf{A}(\mathbf{x}, t) + \boldsymbol{\delta} \qquad (1)$$

Here $\mathbf{A}(\mathbf{x},t)$ is in general a random function; the difference $\mathbf{A}(\mathbf{x}, t+\Delta t) - \mathbf{A}(\mathbf{x}, t)$ characterizes the action of an "external field of forces" at the point $\mathbf{x}$ on $\Sigma$ during the time period $(t, t+\Delta t)$, and $\boldsymbol{\delta}$ is a quantity which is of a higher order of smallness, in a certain sense, than this difference. If $\mathbf{A}(\mathbf{x},t)$ as a function of $t$ is absolutely continuous, then relation (1) can be replaced by the ordinary differential equation

$$\frac{d\mathbf{x}(t)}{dt} = \mathbf{A}'_t\big(\mathbf{x}(t), t\big) \qquad (2)$$

Equation (2) defines the motion of $\Sigma$ in $\mathbb{R}^N$ for $t > t_0$ under the initial condition $\mathbf{x}(t_0) = \mathbf{x}_0$, while $\mathbf{A}'_t(\mathbf{x}, t)$ determines the "velocity field" in the phase space at time $t$.

It is obvious that equation (2) cannot describe motions such as Brownian motion, i.e. motions which do not possess a finite velocity in the phase space, or motions which possess discontinuities in the phase space. To obtain an equation which will describe the motion of systems of this kind it is expedient to replace relation (1) with an equation of integral type. For this purpose we visualize that the time interval $[t_0, t]$ is subdivided into subintervals by the subdividing points $t_1, t_2, \ldots, t_n = t$. It then follows from (1) that

$$\mathbf{x}(t) - \mathbf{x}(t_0) = \sum_{i=0}^{n-1}\Big[\mathbf{A}\big(\mathbf{x}(t_i), t_{i+1}\big) - \mathbf{A}\big(\mathbf{x}(t_i), t_i\big)\Big] + \sum_{i=0}^{n-1}\boldsymbol{\delta}_i$$

Since the $\boldsymbol{\delta}_i$ are of a small order, it is natural to assume that $\sum_{i=0}^{n-1}\boldsymbol{\delta}_i \to 0$ as $n \to \infty$. In this case the last equality formally becomes

$$\mathbf{x}(t) - \mathbf{x}(t_0) = \int_{t_0}^{t} \mathbf{A}\big(\mathbf{x}(s), ds\big) \qquad (3)$$



and the expression

$$\int_{t_0}^{t} \mathbf{A}\big(\mathbf{x}(s), ds\big)$$

can be called a stochastic integral in the random field $\mathbf{A}(\mathbf{x}, t)$ along the random curve $\mathbf{x}(s)$, $s \in [t_0, t]$; the integral should be interpreted as the limit, in a certain sense to be defined more precisely, of sums of the form

$$\sum_{i=0}^{n-1}\Big[\mathbf{A}\big(\mathbf{x}(t_i), t_{i+1}\big) - \mathbf{A}\big(\mathbf{x}(t_i), t_i\big)\Big]$$

Relation (3) is called a Stochastic Differential Equation and is written in the form

$$d\mathbf{x}(t) = \mathbf{A}\big(\mathbf{x}(t), dt\big), \qquad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0$$
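The integral sums above translate directly into an Euler-type recursion: over each subinterval one adds the increment of the field evaluated at the current state. The following sketch is an illustration added here, with an assumed concrete field not taken from the text: $\mathbf{A}(x,t) = -x\,t + w(t)$ in the scalar case, i.e. the Itô equation $dx = -x\,dt + dw$, whose second moment is known in closed form and serves as a check.

```python
import numpy as np

# Euler-type approximation of x(t) - x(t0) = ∫ A(x(s), ds) built from
# the integral sums A(x(t_i), t_{i+1}) - A(x(t_i), t_i).  Assumed
# (hypothetical) field: A(x, t) = -x*t + w(t), i.e. dx = -x dt + dw,
# with w a standard Wiener process and x(t0) = 1.
rng = np.random.default_rng(1)
n_paths, n_steps, dt = 20000, 100, 0.01
x = np.ones(n_paths)                       # x(t0) = 1
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + (-x) * dt + dw                 # A-increment over [t_i, t_{i+1}]

t = n_steps * dt                           # t = 1
exact = np.exp(-2 * t) + 0.5 * (1 - np.exp(-2 * t))  # E[x(1)^2], closed form
print(exact, (x ** 2).mean())
```

The Monte Carlo second moment matches the closed-form value $e^{-2} + \tfrac{1}{2}(1-e^{-2})$ up to discretization and sampling error.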

Under sufficiently general assumptions, for example if $\mathbf{A}(\mathbf{x},t)$ is a martingale for each $\mathbf{x} \in \mathbb{R}^N$, one can assume that

$$\mathbf{A}(\mathbf{x},t) = \mathbf{a}(\mathbf{x},t) + \boldsymbol{\beta}(\mathbf{x},t) \qquad (4)$$

where $\boldsymbol{\beta}(\mathbf{x},t)$ as a function of $t$ is a martingale and the process $\mathbf{a}(\mathbf{x},t)$ is representable as the difference of two monotonically nondecreasing natural processes. In this connection it makes sense to suppose that the right-hand side of equation (3) can be represented according to formula (4) and to introduce further restrictions on the functions $\mathbf{a}(\mathbf{x},t)$ and $\boldsymbol{\beta}(\mathbf{x},t)$ in various ways. For example, we may assume that the function $\mathbf{a}(\mathbf{x},t)$ appearing in expression (4) is an absolutely continuous function of $t$, while $\boldsymbol{\beta}(\mathbf{x},t)$ as a function of $t$ is a locally square integrable martingale. (Some more general assumptions concerning $\boldsymbol{\beta}(\mathbf{x},t)$ are considered below.) In what follows, equation (3) will be written in the form

$$\mathbf{x}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} \mathbf{a}\big(\mathbf{x}(s), s\big)\, ds + \int_{t_0}^{t} \boldsymbol{\beta}\big(\mathbf{x}(s), ds\big) \qquad (5)$$

or,

$$d\mathbf{x}(t) = \mathbf{a}\big(\mathbf{x}(t), t\big)\, dt + \boldsymbol{\beta}\big(\mathbf{x}(t), dt\big), \qquad \mathbf{x}(t_0) = \mathbf{x}_0$$

In the case when $\boldsymbol{\beta}\big(\mathbf{x}(t), dt\big) = 0$, equation (5) is called an ordinary differential equation (with a random right-hand side). Often fields $\boldsymbol{\beta}(\mathbf{x},t) = \big(\beta_1(\mathbf{x},t), \beta_2(\mathbf{x},t), \ldots, \beta_N(\mathbf{x},t)\big)$ of the form

$$\beta_n(\mathbf{x},t) = \sum_{m=1}^{M} \int_{0}^{t} \gamma_{nm}(\mathbf{x},s)\, d\mu_m(s), \qquad n = 1, \ldots, N \qquad (6)$$

are considered. Here $\mu_m(s)$, $m = 1, \ldots, M$ are local, mutually orthogonal, square integrable martingales and $\gamma_{nm}(\mathbf{x},s)$ are random functions satisfying conditions which assure the



existence of the corresponding integrals. In this case the second integral in equation (5) may be defined as the vector-valued integral with components

$$\int_{t_0}^{t} \beta_n\big(\mathbf{x}(s), ds\big) = \sum_{m=1}^{M} \int_{t_0}^{t} \gamma_{nm}\big(\mathbf{x}(s), s\big)\, d\mu_m(s), \qquad n = 1, \ldots, N$$

and the theory of stochastic integrals described in Section 1 of Chapter 4 can be utilized. However, if we confine ourselves to functions $\boldsymbol{\beta}(\mathbf{x},t)$ of type (6), a substantial amount of generality is lost. This can be seen from the fact that the joint characteristic of the processes $\beta_n(\mathbf{x},t)$ and $\beta_n(\mathbf{y},t)$ defined by formula (6) is of the form

$$F_{\beta_n \beta_n}(\mathbf{x}, \mathbf{y}, t) \equiv E^{\beta}\big[\beta_n(\mathbf{x},t)\,\beta_n(\mathbf{y},t)\big] = \sum_{m=1}^{M} \int_{0}^{t} \gamma_{nm}(\mathbf{x},s)\,\gamma_{nm}(\mathbf{y},s)\, dF_{\mu_m \mu_m}(s)$$

while in the general case it is given by a function $\Gamma_n(\mathbf{x}, \mathbf{y}, t)$ which, for fixed $t$, is an arbitrary nonnegative-definite kernel of the arguments $\mathbf{x}$ and $\mathbf{y}$:

$$\sum_{i,j=1}^{I} \Gamma_n(\mathbf{x}_i, \mathbf{x}_j, t)\, z_i\, z_j \;\ge\; 0 \quad \text{for all } z_i, \qquad n = 1, 2, \ldots, N, \quad I = 1, 2, \ldots$$

For example (for simplicity we consider here the one-dimensional case), let the functions $\gamma_{nm}(\mathbf{x},t) = c_{nm}(\mathbf{x},t)$, $n = 1, \ldots, N$, $m = 1, \ldots, M$ be nonrandom and let the $\mu_m(t) = w_m(t)$ be independent Wiener processes. In this case the correlation function $R_n(\mathbf{x}, \mathbf{y}, t)$ of the field

$$\beta_n(\mathbf{x},t) = \sum_{m=1}^{M} \int_{0}^{t} c_{nm}(\mathbf{x},s)\, dw_m(s), \qquad n = 1, \ldots, N$$

equals

$$R_n(\mathbf{x}, \mathbf{y}, t) = E^{\beta}\big[\beta_n(\mathbf{x},t)\,\beta_n(\mathbf{y},t)\big] = \sum_{m=1}^{M} \int_{0}^{t} c_{nm}(\mathbf{x},s)\, c_{nm}(\mathbf{y},s)\, ds, \qquad n = 1, \ldots, N$$

On the other hand, if we set $\beta_n(\mathbf{x},t) = w_n(\mathbf{x},t)$, where $w_n(\mathbf{x},t)$ is an arbitrary Gaussian field with independent increments in $t$, its correlation function $R_n(\mathbf{x}, \mathbf{y}, t) = E^{\beta}\big[w_n(\mathbf{x},t)\, w_n(\mathbf{y},t)\big]$ is then an arbitrary nonnegative-definite kernel (for fixed $t$). Thus the restrictions imposed by fields of type (6), when considering stochastic integrals along a process $\mathbf{x}(t)$, lead to a substantial narrowing of the class of problems under consideration. Therefore it is expedient to introduce a direct definition and investigate the properties of the stochastic integral

$$\int_{t_0}^{t} \boldsymbol{\beta}\big(\mathbf{x}(s), ds\big)$$

by interpreting it, in the simplest case, as the limit in probability of the sums

$$\boldsymbol{\sigma} = \sum_{k=1}^{n} \Big[\boldsymbol{\beta}\big(\mathbf{x}(s_{k-1}), s_k\big) - \boldsymbol{\beta}\big(\mathbf{x}(s_{k-1}), s_{k-1}\big)\Big]$$

The sums $\boldsymbol{\sigma}$ are called integral sums.



It is appropriate to observe that the remarks about the insufficient generality of the random fields given by relation (6) are not fully justified in the case when the stochastic processes are represented by equation (5), with $\mathbf{a}(\mathbf{x},t) = \boldsymbol{\alpha}(\mathbf{x},t)$ a nonrandom function and $\boldsymbol{\beta}(\mathbf{x},t)$ a function with independent increments in $t$. Indeed, the increment $\Delta\mathbf{x}(t)$ of a solution of equation (5) at each time $t$ depends on $\mathbf{x}(t)$ and on the value of the field $\boldsymbol{\beta}(\mathbf{x},t)$ at the point $\mathbf{x} = \mathbf{x}(t)$, and is independent of the nature of the relationship between $\boldsymbol{\beta}(\mathbf{x},t)$ and $\boldsymbol{\beta}(\mathbf{y},t)$ at points $\mathbf{y} \ne \mathbf{x}(t)$ (provided the probabilistic characteristics of the field $\boldsymbol{\beta}(\mathbf{x},t)$ as a function of $\mathbf{x}$ are sufficiently smooth). Therefore one may expect that the solutions of equation (5) will be stochastically equivalent for any two fields $\boldsymbol{\beta}(\mathbf{x},t) = \boldsymbol{\beta}_1(\mathbf{x},t)$ and $\boldsymbol{\beta}(\mathbf{x},t) = \boldsymbol{\beta}_2(\mathbf{x},t)$ under the condition that the joint distributions of the sequences of vectors

$$\big(\boldsymbol{\beta}_i(\mathbf{x}_1,t),\, \boldsymbol{\beta}_i(\mathbf{x}_2,t),\, \ldots,\, \boldsymbol{\beta}_i(\mathbf{x}_n,t)\big), \qquad \forall\, \mathbf{x}_k \in \mathbb{R}^N, \ \forall\, n \in \mathbb{N}^{+}$$

coincide for $i = 1$ and $i = 2$, and that the fields $\boldsymbol{\beta}_i(\mathbf{x},t)$ possess independent increments in $t$.

For example, let $\mathbf{w}(\mathbf{x},t)$ be an arbitrary Gaussian field possessing independent increments in $t$, with

$$B_{kj}(\mathbf{x},t) = E^{\beta}\big[w_k(\mathbf{x},t)\, w_j(\mathbf{x},t)\big], \qquad B(\mathbf{x},t) = \big(B_{kj}(\mathbf{x},t)\big)$$

and let the functions $B_{kj}(\mathbf{x},t)$ be differentiable with respect to $t$, $b_{kj}(\mathbf{x},t) = \dfrac{d}{dt} B_{kj}(\mathbf{x},t)$. Denote by $\sigma(\mathbf{x},t)$ a symmetric matrix such that $\sigma^{2}(\mathbf{x},t) = b(\mathbf{x},t)$ and introduce independent Wiener processes $w_j(t)$, $j = 1, \ldots, N$. Set

$$\boldsymbol{\beta}_1(\mathbf{x},t) = \int_{0}^{t} \sigma(\mathbf{x},s)\, d\mathbf{w}(s), \qquad \mathbf{w}(t) = \big(w_1(t), w_2(t), \ldots, w_N(t)\big)$$

Then

$$E^{\beta}\big[\boldsymbol{\beta}_1(\mathbf{x},t) \otimes \boldsymbol{\beta}_1(\mathbf{x},t)\big] = \int_{0}^{t} \sigma(\mathbf{x},s)\,\sigma(\mathbf{x},s)\, ds = \int_{0}^{t} b(\mathbf{x},s)\, ds$$

and one can expect that the solutions of the differential equations

$$d\mathbf{x}(t) = \mathbf{a}\big(\mathbf{x}(t), t\big)\, dt + \mathbf{w}\big(\mathbf{x}(t), dt\big),$$

$$d\mathbf{x}(t) = \mathbf{a}\big(\mathbf{x}(t), t\big)\, dt + \sigma\big(\mathbf{x}(t), t\big)\, d\mathbf{w}(t)$$

will be stochastically equivalent, although the fields $\mathbf{w}(\mathbf{x},t)$ and $\boldsymbol{\beta}_1(\mathbf{x},t)$ in general are not.

Analogous observations can also be made in the case when $\boldsymbol{\beta}_1(\mathbf{x},t)$ is an arbitrary field with independent increments in $t$ and finite moments of the second order. Assume that the increment $\boldsymbol{\beta}(\mathbf{x},t+\Delta t) - \boldsymbol{\beta}(\mathbf{x},t)$ possesses the characteristic function

$$E^{\beta}\Big[\exp\Big(i\,\mathbf{z}\cdot\big(\boldsymbol{\beta}(\mathbf{x},t+\Delta t) - \boldsymbol{\beta}(\mathbf{x},t)\big)\Big)\Big] = \exp\left(-\frac{1}{2}\int_{t}^{t+\Delta t} \mathbf{z}^{T}\, b(\mathbf{x},s)\,\mathbf{z}\, ds \;+\; \int_{t}^{t+\Delta t}\!\int_{\mathbb{R}^N}\Big[e^{\,i\,\mathbf{z}\cdot\mathbf{c}(\mathbf{x},s,\mathbf{u})} - 1 - i\,\mathbf{z}\cdot\mathbf{c}(\mathbf{x},s,\mathbf{u})\Big]\,\Pi(ds, d\mathbf{u})\right)$$

(if $\boldsymbol{\beta}(\mathbf{x},t)$ possesses finite moments of the second order and is absolutely continuous in $t$, one can reduce an arbitrary characteristic function of a process with independent increments to this form). In this case one can expect, for sufficiently smooth functions $\mathbf{a}(\mathbf{x},t)$, $b(\mathbf{x},t)$ and $\mathbf{c}(\mathbf{x},t,\mathbf{u})$, that the solutions of the stochastic equations

$$d\mathbf{x}(t) = \mathbf{a}\big(\mathbf{x}(t), t\big)\, dt + \boldsymbol{\beta}\big(\mathbf{x}(t), dt\big)$$

and

$$d\mathbf{x}(t) = \mathbf{a}\big(\mathbf{x}(t), t\big)\, dt + \sigma\big(\mathbf{x}(t), t\big)\, d\mathbf{w}(t) + \int_{\mathbb{R}^N} \mathbf{c}\big(\mathbf{x}(t), t, \mathbf{u}\big)\, \nu(dt, d\mathbf{u})$$

will be stochastically equivalent. Here $\sigma(\mathbf{x},t)$ is a symmetric matrix, $\sigma^{2}(\mathbf{x},t) = b(\mathbf{x},t)$, and $\nu(t, A)$ is a centered Poisson measure with $\operatorname{Var}\big[\nu(t, A)\big] = \int_{0}^{t} \Pi(s, A)\, ds$.

It is also clear that if the increments in $t$ of the field $\boldsymbol{\beta}(\mathbf{x},t)$ are dependent, then the above remarks concerning the possibility of replacing the field $\boldsymbol{\beta}(\mathbf{x},t)$ in equation (5) by a simpler field, without restricting the class of obtained solutions, are no longer valid.

It is expedient to extend the preceding outline of a definition of a stochastic differential equation in yet another direction. At present, "feedback" systems play an important part in a number of scientific-engineering problems. For such systems the "exterior field of forces" acting on the system at a given time depends not only on the instantaneous location of the system in the phase space but also on its phase trajectory in "the past":

$$\mathbf{x}(t+\Delta t) - \mathbf{x}(t) = \boldsymbol{\alpha}\Big(\mathbf{x}\big|_{t_0}^{\,t+\Delta t},\; t+\Delta t\Big) - \boldsymbol{\alpha}\Big(\mathbf{x}\big|_{t_0}^{\,t},\; t\Big) + \boldsymbol{\delta} \qquad (7)$$

where $\boldsymbol{\alpha}\Big(\mathbf{x}\big|_{t_0}^{\,s},\, s\Big)$, $s \ge t > t_0$, is a family of random functionals with values in $\mathbb{R}^N$, defined on a certain space of functions $\mathbf{x}(s)$, $s \in [t_0, t]$, with values in $\mathbb{R}^N$. We now introduce the space $\mathbf{D}_t^N$ of functions $\mathbf{x}(s)$ defined on $(-\infty, t]$, continuous from the right, with values in $\mathbb{R}^N$, possessing at each point of the domain of definition right-hand and left-hand limits, and also a limit as $s \to -\infty$. Let $\mathbf{D}^N = \mathbf{D}_0^N$ and denote by $\theta_t$ $(t \le T)$ the mapping of $\mathbf{D}_t^N$ into $\mathbf{D}^N$ defined by the relation

$$(\theta_t \mathbf{x})(s) = \mathbf{x}(t+s), \qquad s \le 0$$

Next let $\boldsymbol{\alpha}(\mathbf{x}, t) = \boldsymbol{\alpha}(\mathbf{x}, t, \omega)$ be a random function defined on $\mathbf{D}^N \times [0,T] \times \Omega$. Relation (7) can be written as follows:

$$\mathbf{x}(t+\Delta t) - \mathbf{x}(t) = \boldsymbol{\alpha}\big(\theta_{t+\Delta t}\mathbf{x},\; t+\Delta t\big) - \boldsymbol{\alpha}\big(\theta_t \mathbf{x},\; t\big) + \boldsymbol{\delta}$$

and equation (5) can be represented by the equation

$$\mathbf{x}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} \boldsymbol{\alpha}\big(\theta_s \mathbf{x}, s\big)\, ds + \int_{t_0}^{t} \boldsymbol{\beta}\big(\theta_s \mathbf{x}, ds\big), \qquad t > t_0 \qquad (8)$$

Here it becomes necessary to define the process $\mathbf{x}(t)$ over the whole "past", i.e., up to the time $t_0$. In this connection one should adjoin to equation (8) the relation

$$\mathbf{x}(t) = \mathbf{x}_{past}(t), \qquad t \le t_0 \qquad (9)$$



which from now on will be called the initial condition for the stochastic differential equation (8).

5.1.2. The Existence and Uniqueness Theorems of Stochastic Differential Equations

Let $\big(\Omega, \mathcal{U}(\Omega), P\big)$ be a probability space equipped with a current of $\sigma$-algebras $\mathcal{U}_t(\Omega)$, $t \in [0,T]$. Let also the random functions $\mathbf{a}(\boldsymbol{\varphi}, t)$ and $\boldsymbol{\beta}(\boldsymbol{\varphi}, t)$, with values in $\mathbb{R}^N$, adapted to $\mathcal{U}_t(\Omega)$, with $\boldsymbol{\varphi} \in \mathbf{D}^N$, $t \in [0,T]$, be given. In what follows we will use the concept of a random time, so we have the following

Definition 1.1 : Let $\mathcal{U}_t(\Omega)$, $t \in [0,T]$ be a current of $\sigma$-algebras. A function $\tau = f(\omega)$, $\omega \in \Omega_\tau \subset \Omega$, with values in $[0,T]$ is called a random time on $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, or a $\mathcal{U}_t$-random time, if $\{\tau \le t\} \in \mathcal{U}_t$.

Hence we can now give the definition of a solution of a stochastic differential equation.

Definition 1.2 : A random time $\tau$, $0 < \tau \le T$, on $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, and a random process $\mathbf{x}(t)$, $t \in [0,T]$, which is $\mathcal{U} \times \mathcal{U}([0,T])$-measurable and satisfies with probability 1 the relations

$$\mathbf{x}(t) = \mathbf{x}_{past}(t), \qquad t \le t_0 \qquad (10)$$

$$\mathbf{x}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} \boldsymbol{\alpha}\big(\theta_s \mathbf{x}, s\big)\, ds + \int_{t_0}^{t} \boldsymbol{\beta}\big(\theta_s \mathbf{x}, ds\big), \qquad t > t_0 \qquad (11)$$

for each $t \le \tau$, is called a solution of the stochastic differential equation

$$d\mathbf{x}(t) = \boldsymbol{\alpha}\big(\theta_t \mathbf{x}, t\big)\, dt + \boldsymbol{\beta}\big(\theta_t \mathbf{x}, dt\big), \qquad t > t_0 \qquad (12)$$

satisfying the initial condition

$$\mathbf{x}(t) = \mathbf{x}_{past}(t), \qquad t \le t_0 \qquad (13)$$

It is assumed that the integrals on the r.h.s. of (11) are well defined, the first as a Lebesgue integral and the second as a stochastic integral. The random variable $\tau$ is called a lifetime of the process $\mathbf{x}(t)$ (i.e., of a solution of the stochastic differential equation). Equation (12) is called regular on $[0,T]$ provided it possesses a unique solution on the whole time interval $[0,T]$ (i.e., if a unique solution of equation (12) with $\tau = T$ exists).

We now introduce some general assumptions on the functions $\mathbf{a}(\boldsymbol{\varphi}, t)$ and $\boldsymbol{\beta}(\boldsymbol{\varphi}, t)$ under which the r.h.s. of equality (11) will be well defined for a sufficiently wide class of processes $\mathbf{x}(t)$. Observe that there is no point in considering the most general classes of processes $\mathbf{x}(t)$, since the r.h.s. of equation (11) represents a process which possesses a continuous modification or a modification with sample functions in $\mathbf{D}_T^N$, and hence the process $\mathbf{x}(t)$ must also possess these properties. We shall first discuss the function $\mathbf{a}(\boldsymbol{\varphi}, t)$. We introduce two sets of assumptions.



α1 a) The function $\mathbf{a}(\boldsymbol{\varphi}, s) = \mathbf{a}(\boldsymbol{\varphi}, s, \omega)$ is defined on $\mathbf{D}^N \times [0,T] \times \Omega$ and is $\mathcal{U}(\mathbf{D}^N) \times \mathcal{U}([0,T]) \times \mathcal{U}_t(\Omega)$-measurable for $s \in [0,T]$.

α1 b) The function $\mathbf{a}(\boldsymbol{\varphi}, s) = \mathbf{a}(\boldsymbol{\varphi}, s, \omega)$ is defined on $\mathbf{D}^N \times [0,T] \times \Omega$ and is $\mathcal{U}(\mathbf{D}^N) \times \mathcal{U}([0,T]) \times \mathcal{U}_t(\Omega)$-measurable for $s \in [0,T]$.

α1 c) For fixed $\omega$ the family of functions $\mathbf{a}(\cdot, t)$, $t \in [0,T]$, of the argument $\boldsymbol{\varphi}$ is uniformly continuous on $\mathbf{D}^N$ relative to the metric $d_{\mathbf{D}^N}$.

α2 The function $\mathbf{a}(\boldsymbol{\varphi}, s)$ satisfies assumptions α1 provided $\mathbf{D}^N$, $\mathcal{U}(\mathbf{D}^N)$ and $\mathbf{D}_T^N$ are replaced by $\mathbf{C}^N$, $\mathcal{U}(\mathbf{C}^N)$ and $\mathbf{C}_T^N$ respectively.

We say that the function $\mathbf{a}(\boldsymbol{\varphi}, t)$ is linearly bounded (relative to the uniform norm, or to the seminorm if in the succeeding inequalities the norm $\|\cdot\|$ is replaced by the seminorm $\|\cdot\|_{*}$) provided a continuous, monotonically nondecreasing process $\lambda_0(t)$ exists, adapted to $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, such that $\lambda_0(t) < \infty$ with probability 1 and

$$\int_{a}^{b} \big\|\mathbf{a}(\boldsymbol{\varphi}, t)\big\|\, dt \;\le\; \int_{a}^{b} \big(1 + \|\boldsymbol{\varphi}\|\big)\, \lambda_0(t)\, dt \qquad (14)$$

We say that the function $\mathbf{a}(\boldsymbol{\varphi}, t)$ satisfies the local Lipschitz condition (in the uniform norm or seminorm) if for any $\nu > 0$ there exists a monotonically nondecreasing process $\lambda_\nu(t)$, adapted to $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, such that

$$\int_{a}^{b} \big\|\mathbf{a}(\boldsymbol{\varphi}, t) - \mathbf{a}(\boldsymbol{\psi}, t)\big\|\, dt \;\le\; \int_{a}^{b} \|\boldsymbol{\varphi} - \boldsymbol{\psi}\|\, \lambda_\nu(t)\, dt \qquad (15)$$

for all $\boldsymbol{\varphi}$ and $\boldsymbol{\psi}$ satisfying $\|\boldsymbol{\varphi}\| \le \nu$ and $\|\boldsymbol{\psi}\| \le \nu$. If a process independent of $\nu$ can be chosen in place of $\lambda_\nu(t)$, then we shall refer to the process $\mathbf{a}(\boldsymbol{\varphi}, t)$ as one satisfying the uniform Lipschitz condition.

The class of processes $\mathbf{a}(\boldsymbol{\varphi}, t)$ which satisfy conditions α2, (14) and (15) will be denoted by $\mathcal{S}_a^c(\lambda_0, \lambda_\nu)$. We denote by $\mathcal{S}_a(\lambda_0, \lambda_\nu)$ the class of processes which satisfy α1 and the conditions obtained from (14) and (15) when the uniform norm $\|\cdot\|$ is replaced by $\|\cdot\|_{*}$ in the corresponding inequalities. In the case when we are dealing with random functions $\mathbf{a}(\boldsymbol{\varphi}, t)$ satisfying only one of the inequalities (14) or (15), for instance (14), we shall write $\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a^c(\lambda_0, \cdot)$, and analogously in the other cases.

Note that $\boldsymbol{\varphi}_t = \theta_t \boldsymbol{\psi}$ ($\boldsymbol{\psi} \in \mathbf{D}^N$, $t \in [0,T]$), with values in $\mathbf{D}^N$, is a Borel function of the argument $t$. Indeed, if $B$ is a cylinder in $\mathbf{D}^N$ with the basis $B = \prod_{i=1}^{n} B_i$ in the coordinates $s_1, s_2, \ldots, s_n$ ($s_k \le 0$), then

$$\{t : \boldsymbol{\varphi}_t \in B\} = \bigcap_{i=1}^{n} \big\{t : \boldsymbol{\psi}(t + s_i) \in B_i\big\}$$

Since the sets $\{z : \boldsymbol{\psi}(z) \in B_i\} = Z_i$ are Borel sets provided the $B_i \subset \mathbb{R}^N$ are such, the set $\{t : \boldsymbol{\varphi}_t \in B\} = \bigcap_{i=1}^{n} (Z_i - s_i)$, where $Z - s$ denotes the set $\{z : z + s \in Z\}$, will also be a Borel set. Thus, if $g(\boldsymbol{\varphi}, t)$ is a $\mathcal{U}(\mathbf{D}^N) \times \mathcal{U}([0,T])$-measurable function of the arguments $(\boldsymbol{\varphi}, t)$, where $\mathcal{U}(\mathbf{D}^N)$ is the minimal $\sigma$-algebra generated by the cylinders in $\mathbf{D}^N$, then $g(\theta_t \boldsymbol{\varphi}, t)$ will be a Borel function of the argument $t$. Consequently, if the field $\mathbf{a}(\boldsymbol{\varphi}, t)$ satisfies condition α1 or α2, then the integral

$$\int_{a}^{b} \mathbf{a}(\theta_t \boldsymbol{\psi}, t)\, dt$$

exists with probability 1.

Next, if $\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a(\lambda_0, \cdot)$, then we have, for $0 \le a < b \le T$,

$$\int_{a}^{b} \big\|\mathbf{a}(\theta_t \boldsymbol{\psi}, t)\big\|\, dt \;\le\; \int_{a}^{b} \big(1 + \|\theta_t \boldsymbol{\psi}\|_{*}\big)\, \lambda_0(t)\, dt \qquad (16)$$

The proof follows easily from the fact that $\theta_t \boldsymbol{\psi}$, $t \in [0,T]$, is a continuous function with values in $\mathbf{D}^N$ with respect to the metric $d_{\mathbf{D}^N}$ in $\mathbf{D}^N$, and $\mathbf{a}(\boldsymbol{\varphi}, t)$ is a continuous function of the argument $\boldsymbol{\varphi}$ (uniformly in $t$). Analogously, if $\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a^c(\lambda_0, \cdot)$, then

$$\int_{a}^{b} \big\|\mathbf{a}(\theta_t \boldsymbol{\psi}, t)\big\|\, dt \;\le\; \int_{a}^{b} \big(1 + \|\theta_t \boldsymbol{\psi}\|\big)\, \lambda_0(t)\, dt \qquad (17)$$

If, however, $\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a(\cdot, \lambda_\nu)$ and $\|\boldsymbol{\psi}_1\| \vee \|\boldsymbol{\psi}_2\| < \nu$, then

$$\left\| \int_{a}^{b} \mathbf{a}(\theta_t \boldsymbol{\psi}_1, t)\, dt - \int_{a}^{b} \mathbf{a}(\theta_t \boldsymbol{\psi}_2, t)\, dt \right\| \;\le\; \int_{a}^{b} \big\|\theta_t \boldsymbol{\psi}_1 - \theta_t \boldsymbol{\psi}_2\big\|_{*}\, \lambda_\nu(t)\, dt \qquad (18)$$

and an analogous inequality is valid for the case $\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a^c(\cdot, \lambda_\nu)$.

As far as the integral $\displaystyle\int_{a}^{b} \boldsymbol{\beta}(\theta_s \mathbf{x}, ds)$ is concerned, conditions for its existence and its properties were discussed in the preceding chapter. We introduce notation for the classes of fields $\boldsymbol{\beta}(\boldsymbol{\varphi}, t)$ analogous to the notation for the classes of functions $\mathbf{a}(\boldsymbol{\varphi}, t)$ introduced above. Namely, we write $\boldsymbol{\beta}(\boldsymbol{\varphi}, t) \in \mathcal{S}_\beta(\lambda_0, \lambda_\nu)$ (or $\boldsymbol{\beta}(\boldsymbol{\varphi}, t) \in \mathcal{S}_\beta^c(\lambda_0, \lambda_\nu)$) provided the field $\boldsymbol{\beta}(\boldsymbol{\varphi}, t)$ satisfies conditions β1 and β2, is linearly bounded, satisfies the local Lipschitz condition relative to the seminorm (norm), and the dominating processes $\Lambda_0(t)$ and $\Lambda_\nu(t)$ are absolutely continuous with $\lambda_0(t) = \Lambda_0'(t)$ and $\lambda_\nu(t) = \Lambda_\nu'(t)$.



Set

$$\mathbf{A}(\boldsymbol{\varphi}, t) = \int_{t_0}^{t} \mathbf{a}(\boldsymbol{\varphi}, s)\, ds + \boldsymbol{\beta}(\boldsymbol{\varphi}, t)$$

and write $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}(\lambda_0, \lambda_\nu)$ (respectively $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}^c(\lambda_0, \lambda_\nu)$) provided

$$\mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a(\lambda_0, \lambda_\nu) \ \text{ and } \ \boldsymbol{\beta}(\boldsymbol{\varphi}, t) \in \mathcal{S}_\beta(\lambda_0, \lambda_\nu)$$

$$\big(\text{respectively } \ \mathbf{a}(\boldsymbol{\varphi}, t) \in \mathcal{S}_a^c(\lambda_0, \lambda_\nu) \ \text{ and } \ \boldsymbol{\beta}(\boldsymbol{\varphi}, t) \in \mathcal{S}_\beta^c(\lambda_0, \lambda_\nu)\big)$$

Let the process $\mathbf{x}(t)$, $t \in [0,T]$, be adapted to the current of $\sigma$-algebras $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, with sample functions belonging with probability 1 to $\mathbf{D}_T^N$. We complete the definition of $\mathbf{x}(t)$ for $t \le t_0$ by setting $\mathbf{x}(t) = \boldsymbol{\varphi}(t)$ for $t \le t_0$, where $\boldsymbol{\varphi}(t)$ is a given function in $\mathbf{D}^N$. Assume that $\mathbf{a}(\boldsymbol{\varphi}, t)$ satisfies condition α1 a) and that $\boldsymbol{\beta}(\boldsymbol{\varphi}, t) \in \mathcal{S}_\beta(\cdot, \lambda_\nu)$. In what follows these conditions will always be assumed to be valid. We now formulate the first basic theorems on the existence and uniqueness of a solution in accordance with the operator $\mathbf{A}(\boldsymbol{\varphi}, t)$.

Theorem 1.3 : Assume that $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}(\lambda_0, \lambda_\nu)$ (or that $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}^c(\lambda_0, \lambda_\nu)$). Then the stochastic differential equation (12) possesses in $H^{*}$ ($H^{c}$) a unique solution which is defined for all $t \in [0,T]$.

Proof : See GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 141).

Theorem 1.4 : If $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}(\cdot, \lambda_\nu)$ (or $\mathbf{A}(\boldsymbol{\varphi}, t) \in \mathcal{S}^c(\cdot, \lambda_\nu)$), then there exist a random time $\tau_\infty$ on $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, and a random process $\mathbf{x}(t)$ adapted to $\mathcal{U}_t(\Omega)$, $t \in [0,T]$, defined for $t < \tau_\infty$, such that $\mathbf{x}(t)$ satisfies (12) for all $t < \tau_\infty$, and the sample functions of $\mathbf{x}(t)$ possess with probability 1 left-hand limits and are continuous from the right for all $t < \tau_\infty$. Moreover, $P(\tau_\infty > 0) = 1$.

Proof : See GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 142).

Theorem 1.5 : If under the conditions of Theorem 1.3 one can set $\lambda_0 = C$, then the solution of equation (12) belongs to $H_2^{*}$ ($H_2^{c}$) and

$$E^{\beta}\Big[\sup_{t \in [0,T]} \big\|\mathbf{x}(t)\big\|^{2}\Big] \;\le\; B\,\big(1 + \|\boldsymbol{\varphi}\|^{2}\big) \qquad (19)$$

$$E^{\beta}\Big[\sup_{s \in [t,\, t+h]} \big\|\mathbf{x}(s) - \mathbf{x}(t)\big\|^{2}\Big] \;\le\; B\, h\,\big(1 + \|\boldsymbol{\varphi}\|^{2}\big) \qquad (20)$$

where $B$ is a constant which depends on $C$, $K$ and $T$ only.

Proof : See GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 142).
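A bound of type (19) can be seen explicitly in the simplest linear case. The sketch below is an illustration added here, not part of the text: for the assumed equation $dx = -x\,dt + dw$ with constant initial condition $x(0) = \varphi$, the second moment is known in closed form and is dominated by $B(1 + \varphi^2)$ already with $B = 1$, uniformly in $t$.

```python
import numpy as np

# Closed-form illustration of a bound of type (19) for the assumed
# (hypothetical) linear equation dx = -x dt + dw, x(0) = phi:
# E[x(t)^2] = phi^2 exp(-2t) + (1 - exp(-2t))/2 <= 1 * (1 + phi^2).
t = np.linspace(0.0, 5.0, 501)
for phi in (0.0, 0.5, -2.0, 10.0):
    second_moment = phi ** 2 * np.exp(-2 * t) + 0.5 * (1 - np.exp(-2 * t))
    assert second_moment.max() <= 1.0 * (1 + phi ** 2)
print("bound of type (19) holds with B = 1 for the sampled phi")
```

The supremum over $t$ is attained either at $t = 0$ (value $\varphi^2$) or approached at the stationary level $1/2$, both of which are below $1 + \varphi^2$.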



5.1.3. Bounds on the Moments of Solutions of Stochastic Differential Equations

A very important issue for a solution of a stochastic differential equation is whether or not it possesses finite moments. Based on theorems for the existence of moments, we can solve the moment equations and obtain numerical results for the stochastic differential equation. The theorem presented below guarantees the existence of all moments for a very wide class of systems, including Ito equations as well as equations with a continuous excitation process.

Consider equations (12) - (13),

$$d\mathbf{x}(t) = \boldsymbol{\alpha}\big(\theta_t \mathbf{x}, t\big)\, dt + \boldsymbol{\beta}\big(\theta_t \mathbf{x}, dt\big), \qquad t > t_0$$

$$\mathbf{x}(t) = \mathbf{x}_{past}(t), \qquad t \le t_0$$

satisfying the conditions of Theorem 1.3. Assume first that, for fixed $\boldsymbol{\varphi}$, $\boldsymbol{\beta}(\boldsymbol{\varphi}, t)$ is a martingale process with sample functions continuous with probability 1 (GIKHMAN, I.I. & SKOROKHOD, A.V., The Theory of Stochastic Processes III, pp. 37, 144). Furthermore, let the characteristic $F_k(t)$ of the process $\beta_k(\boldsymbol{\varphi}, t)$ be absolutely continuous with respect to the Lebesgue measure,

$$F_k(t) = \int_{0}^{t} \bar{\beta}_k(\boldsymbol{\varphi}, s)\, ds \qquad (21)$$

and

$$\bar{\beta}_k(\boldsymbol{\varphi}, s) \;\le\; \lambda(s)\,\big(1 + \|\boldsymbol{\varphi}\|^{2}\big) \qquad (22)$$

For the existence of moments of the solution of an SDE we have the following

Theorem 1.6 : A process $\mathbf{x}(t)$ satisfying equations (12) - (13) and conditions (21) - (22) for $t \le \tau_N = \inf\{t : \lambda(t) > N\}$ possesses moments of all orders.

Proof : See GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 144).

Corollary 1.7 : If

$$\big\|\mathbf{a}_k(\boldsymbol{\varphi}, t)\big\| \;\le\; C\,\big(1 + \|\boldsymbol{\varphi}\|\big) \qquad (23)$$

$$E^{\beta}\Big[\big(\Delta_t \beta_k(\boldsymbol{\varphi}, t)\big)^{2} \,\Big|\, \mathcal{U}_t(\Omega)\Big] \;\le\; C\,\big(1 + \|\boldsymbol{\varphi}\|^{2}\big)\, \Delta t \qquad (24)$$

where $C$ is an absolute constant, then the solution of equations (12) - (13) possesses finite moments of all orders for $t \in [t_0, T]$.

More extended results concerning the existence of all, or of a finite number of, moments and the corresponding conditions can be found in GIKHMAN, I.I. & SKOROKHOD, A.V., (The Theory of Stochastic Processes III, p. 145). Moreover, we must emphasize the issue of moment stability, although we will not study it in the present work. For a detailed introductory discussion we refer to SOONG, T.T., (Random Differential Equations, p. 256). Some interesting results for special cases of stochastic differential equations, as well as stochastic partial differential equations, can be found in the paper of KOTULSKI, Z. & SOBCZYK, K., (On the Moment Stability of Vibratory Systems with Random Parametric Impulsive Excitation). Another approach, based on the Fourier transform of the moment equations with respect to time, has also been studied. The transformed moments have been studied extensively in homogeneous turbulence theory, and we refer to BATCHELOR, G.K., (Theory of Homogeneous Turbulence) for a thorough discussion of the single-time equation.

5.1.4. Continuous Dependence on a Parameter of Solutions of Stochastic Equations

Consider the equation of the form

$$\boldsymbol{\eta}_u(t) = \mathbf{x}_u(t) + \int_{t_0}^{t}\mathbf{A}_u\!\left(\theta_s\boldsymbol{\eta}_u,\ ds\right), \qquad t \ge t_0 \qquad (25)$$

$$\boldsymbol{\eta}_u(t) = \mathbf{x}_{past}(t), \qquad t < t_0 \qquad (26)$$

where $u$ is a scalar parameter, $u \in [0, u_0]$; the field

$$\mathbf{A}_u(\boldsymbol{\psi}, t) = \int_{t_0}^{t}\mathbf{a}_u(\boldsymbol{\psi}, s)\, ds + \boldsymbol{\beta}_u(\boldsymbol{\psi}, t)$$

and the function $\mathbf{x}_u(t)$ both depend on the parameter $u$, while the initial condition $\mathbf{x}_{past}(t)$, $t < t_0$, does not depend on $u$.

Theorem 1.8 : Assume that $\mathbf{A}_u(\mathbf{x}, t) \in C(C, \mathcal{S})$ and, moreover, let

a) $\displaystyle\sup_{t_0 \le t \le T} E\left\|\mathbf{x}_u(t)\right\|^2 \le C$

b) $\displaystyle\lim_{u \to 0}\ \sup_{t_0 \le t \le T} E\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\|^2 = 0$

c) $\displaystyle E\left\{\left\|\Delta\boldsymbol{\beta}_u(\boldsymbol{\psi}, t)\right\|^2 \,\middle|\, \mathfrak{U}_t\right\} \le E\left\{\int_t^{t+\Delta}\gamma_u(\boldsymbol{\psi}, s)\, ds \,\middle|\, \mathfrak{U}_t\right\}$

and, for arbitrary $N > 0$, $t \in [t_0, T]$ and $\varepsilon > 0$, let

$$\lim_{u \to 0}\, \mathbf{c}\left\{\sup_{\left\|\boldsymbol{\psi}\right\| \le N}\left[\left\|\mathbf{a}_u(\boldsymbol{\psi}, t) - \mathbf{a}_0(\boldsymbol{\psi}, t)\right\| + \gamma_u(\boldsymbol{\psi}, t)\right] > \varepsilon\right\} = 0$$

Then

$$\lim_{u \to 0}\ \sup_{t_0 \le t \le T} E\left\|\boldsymbol{\eta}_u(t) - \boldsymbol{\eta}_0(t)\right\|^2 = 0$$

Proof : The variables $\boldsymbol{\eta}_u(t)$ possess finite moments of second order, since equations (25)-(26) satisfy the conditions of Theorem 1.6. We represent the difference $\boldsymbol{\eta}_u(t) - \boldsymbol{\eta}_0(t)$ in the form

$$\boldsymbol{\eta}_u(t) - \boldsymbol{\eta}_0(t) = \boldsymbol{\sigma}_u(t) + \int_{t_0}^{t}\mathbf{A}_u\!\left(\theta_s\boldsymbol{\eta}_u,\ ds\right) - \int_{t_0}^{t}\mathbf{A}_u\!\left(\theta_s\boldsymbol{\eta}_0,\ ds\right),$$

where

$$\boldsymbol{\sigma}_u(t) = \mathbf{x}_u(t) - \mathbf{x}_0(t) + \int_{t_0}^{t}\mathbf{A}_u\!\left(\theta_s\boldsymbol{\eta}_0,\ ds\right) - \int_{t_0}^{t}\mathbf{A}_0\!\left(\theta_s\boldsymbol{\eta}_0,\ ds\right)$$


It is easy to verify that

$$E\left\|\boldsymbol{\eta}_u(t) - \boldsymbol{\eta}_0(t)\right\|^2 \le 3E\left\|\boldsymbol{\sigma}_u(t)\right\|^2 + 3\left(1 + C^{*}T\right)\int_{t_0}^{t} E\left\|\theta_s\boldsymbol{\eta}_u - \theta_s\boldsymbol{\eta}_0\right\|^2 ds =$$
$$= 3E\left\|\boldsymbol{\sigma}_u(t)\right\|^2 + C'\int_{t_0}^{t}\int_{t_0 - s}^{0} E\left\|\boldsymbol{\eta}_u(s + u') - \boldsymbol{\eta}_0(s + u')\right\|^2 K(du')\, ds$$

Set $\upsilon_u(t) = \sup_{t_0 \le s \le t} E\left\|\boldsymbol{\eta}_u(s) - \boldsymbol{\eta}_0(s)\right\|^2$. It follows from the last inequality that

$$\upsilon_u(t) \le 3E\left\|\boldsymbol{\sigma}_u(t)\right\|^2 + C'\int_{t_0}^{t}\upsilon_u(s)\, ds$$

Then it can easily be proved (by Gronwall's lemma) that

$$\upsilon_u(t) \le C''\sup_{t_0 \le s \le t} E\left\|\boldsymbol{\sigma}_u(s)\right\|^2$$

where $C''$ does not depend on $u$. Furthermore,

$$\sup_{t_0 \le s \le t} E\left\|\boldsymbol{\sigma}_u(s)\right\|^2 \le 3\left\{\sup_{t_0 \le t \le T} E\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\|^2 + E\left(\int_{t_0}^{T}\left\|\mathbf{a}_u(\theta_s\boldsymbol{\eta}_0, s) - \mathbf{a}_0(\theta_s\boldsymbol{\eta}_0, s)\right\| ds\right)^2 \right.$$
$$\left. + \sup_{t_0 \le t \le T} E\left\|\int_{t_0}^{t}\boldsymbol{\beta}_u(\theta_s\boldsymbol{\eta}_0, ds) - \int_{t_0}^{t}\boldsymbol{\beta}_0(\theta_s\boldsymbol{\eta}_0, ds)\right\|^2\right\} = 3\left(I_1 + I_2 + I_3\right)$$

By condition b), the quantity $I_1 \to 0$. Next,

$$I_2 \le T\, E\int_{t_0}^{T}\left\|\mathbf{a}_u(\theta_s\boldsymbol{\eta}_0, s) - \mathbf{a}_0(\theta_s\boldsymbol{\eta}_0, s)\right\|^2 ds$$

where the integrand is dominated by the quantity

$$4C^2\left(1 + \int_{-\infty}^{0}\left\|\boldsymbol{\eta}_0(s + u')\right\|^2 K(du')\right)$$

which is independent of $u$ and is integrable with respect to the measure $\mathbf{c}\times ds$. On the other hand,

$$\left\|\mathbf{a}_u(\theta_s\boldsymbol{\eta}_0, s) - \mathbf{a}_0(\theta_s\boldsymbol{\eta}_0, s)\right\| \to 0$$

in probability for each $s$, and hence in measure $\mathbf{c}\times ds$. Therefore $I_2 \to 0$ as $u \to 0$. Finally,

$$I_3 \le 4E\left\|\int_{t_0}^{t}\boldsymbol{\beta}_u(\theta_s\boldsymbol{\eta}_0, ds) - \int_{t_0}^{t}\boldsymbol{\beta}_0(\theta_s\boldsymbol{\eta}_0, ds)\right\|^2 = 4E\int_{t_0}^{t}\gamma_u(\theta_s\boldsymbol{\eta}_0, s)\, ds$$

and, as in the case of the quantity $I_2$, it is easy to verify that $I_3 \to 0$ as $u \to 0$.

Remark 1.9 : We shall strengthen the assumptions of Theorem 1.8 and assume that, in addition to the conditions stipulated in the theorem,

$$\lim_{u \to 0} E\left(\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\|^2\right) = 0$$

In this case

$$\lim_{u \to 0} E\left(\sup_{t_0 \le t \le T}\left\|\boldsymbol{\eta}_u(t) - \boldsymbol{\eta}_0(t)\right\|^2\right) = 0 \qquad (27)$$

The proof of this assertion is analogous to the proof of Theorem 1.8.
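The qualitative content of Theorem 1.8 — solutions depend continuously on the parameter $u$, uniformly on $[t_0, T]$ — can be illustrated with a deliberately simple deterministic sketch (an assumed scalar ODE family $\dot{x} = -(1+u)x$, not from the thesis), where $\sup_t |x_u(t) - x_0(t)|$ shrinks as $u \to 0$:

```python
import numpy as np

def sup_distance(u, x0=1.0, T=2.0, dt=1e-3):
    """sup_t |x_u(t) - x_0(t)| for dx/dt = -(1+u) x, explicit Euler on [0, T]."""
    xu, xref, worst = x0, x0, 0.0
    for _ in range(int(T / dt)):
        xu += -(1.0 + u) * xu * dt
        xref += -xref * dt
        worst = max(worst, abs(xu - xref))
    return worst

ds = [sup_distance(u) for u in (0.4, 0.2, 0.1, 0.05)]
print(ds)  # shrinks monotonically as the parameter u tends to 0
```

In the stochastic setting of the theorem the same statement holds with the sup-distance replaced by the mean-square deviation.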

Theorem 1.10 : Consider the stochastic equation

$$d\mathbf{x}_u(t) = \mathbf{A}_u\!\left(\theta_s\mathbf{x}_u,\ dt\right), \qquad \mathbf{x}_u(t) = \mathbf{x}_{past}(t),\ t \le t_0, \qquad u \in [0, u_0]$$

satisfying the conditions of Theorem 1.3, and for all $N > 0$ let

$$\lim_{p \to \infty}\ \sup_{0 \le u \le u_0}\left[\mathbf{c}\left\{\sup_{t_0 \le t \le T}\lambda_u^{0}(t) \ge p\right\} + \mathbf{c}\left\{\sup_{t_0 \le t \le T}\lambda_u^{N}(t) \ge p\right\}\right] = 0$$

and also let condition c) of Theorem 1.8 be satisfied. Then, for any $\varepsilon > 0$,

$$\mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\| > \varepsilon\right\} \to 0 \quad \text{as } u \to 0.$$

Proof : Let

$$\tau_p = \inf\left\{t :\ \lambda_u^{0}(t) \ge p,\ \text{or}\ \lambda_u^{N}(t) \ge p,\ \text{or}\ \left\|\mathbf{x}_u(t)\right\| \ge N\right\}, \qquad \inf\varnothing = T$$

and set

$$\mathbf{a}_u^{p}(\boldsymbol{\varphi}, t) = \mathbf{a}_u(\boldsymbol{\varphi}, t)\ \text{ for } t < \tau_p, \qquad \mathbf{a}_u^{p}(\boldsymbol{\varphi}, t) = 0\ \text{ for } t \ge \tau_p, \qquad \boldsymbol{\beta}_u^{p}(\boldsymbol{\varphi}, t) = \boldsymbol{\beta}_u(\boldsymbol{\varphi}, t\wedge\tau_p)$$

$$\mathbf{A}_u^{p}(\boldsymbol{\varphi}, t) = \int_{t_0}^{t}\mathbf{a}_u^{p}(\boldsymbol{\varphi}, s)\, ds + \boldsymbol{\beta}_u^{p}(\boldsymbol{\varphi}, t)$$

Theorem 1.8 (or the corresponding remark to this theorem) is applicable to the equation

$$d\mathbf{x}_u^{p}(t) = \mathbf{A}_u^{p}\!\left(\theta_s\mathbf{x}_u^{p},\ dt\right), \qquad \mathbf{x}_u^{p}(t) = \mathbf{x}_{past}(t),\ t \le t_0$$

On the other hand, in view of Theorem 1.3, $\mathbf{x}_u^{p}(t) = \mathbf{x}_u(t)$ with probability 1 for all $t < \tau_p$. Therefore, for any $\varepsilon > 0$,

$$\mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u^{p}(t) - \mathbf{x}_u(t)\right\| > \varepsilon\right\} \le \mathbf{c}\left\{\tau_p < T\right\}$$

Furthermore,

$$\mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\| > \varepsilon\right\} \le \mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u(t) - \mathbf{x}_u^{p}(t)\right\| > \frac{\varepsilon}{3}\right\} + \mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u^{p}(t) - \mathbf{x}_0^{p}(t)\right\| > \frac{\varepsilon}{3}\right\} +$$
$$+ \mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_0^{p}(t) - \mathbf{x}_0(t)\right\| > \frac{\varepsilon}{3}\right\} \le \mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u^{p}(t) - \mathbf{x}_0^{p}(t)\right\| > \frac{\varepsilon}{3}\right\} + 2\,\mathbf{c}\left\{\tau_p < T\right\}.$$

The uniform stochastic boundedness of the processes $\lambda_u^{0}(t)$ and $\lambda_u^{N}(t)$ and Theorem 1.8 imply that one can first choose sufficiently large values of $p$ and $N$ such that $\mathbf{c}\left\{\tau_p < T\right\} < \varepsilon/3$, and then choose a $\delta > 0$ such that, for $u \in [0, \delta]$,

$$\mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u^{p}(t) - \mathbf{x}_0^{p}(t)\right\| > \frac{\varepsilon}{3}\right\} < \frac{\varepsilon}{3}$$

so that

$$\mathbf{c}\left\{\sup_{t_0 \le t \le T}\left\|\mathbf{x}_u(t) - \mathbf{x}_0(t)\right\| > \varepsilon\right\} < \varepsilon.$$


5.2. Hopf-type Functional Differential Equation (FDE) for the Characteristic Functional of Dynamical Systems

In this section we will derive the Functional Differential Equation for a wide class of autonomous stochastic dynamical systems with external forcing. To begin our discussion, let us consider the special cases of a Duffing oscillator and a van der Pol oscillator. Let $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ be a probability space and $C_c^{\infty}(I)$ be the separable, real, inner product ($\mathcal{L}^2$ inner product) space of infinitely differentiable functions with values in $U \subset \mathbb{R}$. $\mathcal{U}\left(C_c^{\infty}(I)\right)$ will denote the $\sigma$-field generated by the open subsets of $C_c^{\infty}(I)$.

5.2.1. Duffing oscillator

The Duffing oscillator has the following equation of motion

$$m\ddot{x}(t;\beta) + c\dot{x}(t;\beta) + k\,x(t;\beta) + a\,x^3(t;\beta) = y(t;\beta) \qquad (1)$$

$$\left[x(t_0;\beta),\ \dot{x}(t_0;\beta)\right] = \left[x_0(\beta),\ \dot{x}_0(\beta)\right] \quad \text{a.s.} \qquad (2)$$

We assume that $y(t;\beta)$ is a stochastic process defined on the probability space $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ with the associated probability space $\left(C_c^{\infty}(I), \mathcal{U}(C_c^{\infty}(I)), \mathbf{c}_y\right)$, whose characteristic functional is $\mathcal{Y}_y(v)$. Using the results of the previous section, we can conclude that there is a unique solution of the SDE (1)-(2). We wish to calculate the joint characteristic functional $\mathcal{Y}_{xy}(u,v)$. We first note that
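Before working at the level of the characteristic functional, the sample-path problem (1) can be integrated directly. The following sketch (assumed parameter values, zero forcing; not part of the thesis) integrates the Duffing equation with a classical RK4 scheme and checks that, in the unforced, undamped case, it conserves the energy $H = \dot{x}^2/2 + k x^2/2 + a x^4/4$:

```python
import numpy as np

def duffing_rk4(x0, v0, m=1.0, c=0.0, k=1.0, a=0.5,
                y=lambda t: 0.0, T=10.0, dt=1e-3):
    """Classical RK4 for m x'' + c x' + k x + a x^3 = y(t)."""
    n = int(T / dt)
    xs, vs = np.empty(n + 1), np.empty(n + 1)
    xs[0], vs[0] = x0, v0
    def f(t, x, v):
        return v, (y(t) - c * v - k * x - a * x**3) / m
    for i in range(n):
        t, x, v = i * dt, xs[i], vs[i]
        k1 = f(t, x, v)
        k2 = f(t + dt / 2, x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = f(t + dt / 2, x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = f(t + dt, x + dt * k3[0], v + dt * k3[1])
        xs[i + 1] = x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        vs[i + 1] = v + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return xs, vs

xs, vs = duffing_rk4(1.0, 0.0)
H = 0.5 * vs**2 + 0.5 * xs**2 + 0.125 * xs**4   # energy, conserved when c = y = 0
print(H.max() - H.min())
```

Driving the same integrator with an ensemble of sample excitations $y(t;\beta)$ produces the response ensemble whose statistics the characteristic functional encodes.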

$$\mathcal{Y}_{xy}(u,v) = \int_{C_c^{\infty}}\int_{C_c^{\infty}} \exp\left(i\left\langle x, u\right\rangle + i\left\langle y, v\right\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (3)$$

where the integrals are over the infinite-dimensional space $C_c^{\infty}$. The bracket $\langle\cdot,\cdot\rangle$ denotes the inner product of the Hilbert space $\mathcal{L}^2$, i.e. $\langle x, y\rangle = \int_I x(t)\,y(t)\, dt$. The Volterra derivative of $\mathcal{Y}_{xy}(u,v)$ with respect to $u(\xi)$ will be denoted as $\dfrac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u(\xi)}$, or simply $\dfrac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}}$. Direct calculation shows

$$\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} = \int_{C_c^{\infty}}\int_{C_c^{\infty}} i\,x(\xi)\,\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (4)$$
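Relation (4) has a transparent finite-dimensional analogue: discretizing $\langle x, u\rangle \approx \sum_j x(t_j)\,u(t_j)\,\Delta t$, the Volterra derivative at $\xi$ becomes $E\left[i\,x(\xi)\,e^{i\langle x,u\rangle}\right]$, which can be compared against a finite-difference perturbation of $u$ by a discrete delta at $\xi$. The sketch below uses a toy ensemble of random sinusoids (an assumption for illustration, not an actual Duffing response):

```python
import numpy as np

def char_functional(paths, u, dt):
    """Monte Carlo estimate of Y[u] = E exp(i<x,u>), <x,u> ≈ sum_j x_j u_j dt."""
    return np.mean(np.exp(1j * (paths * u).sum(axis=1) * dt))

rng = np.random.default_rng(1)
dt, n_t, n_mc = 0.05, 40, 4000
t = np.arange(n_t) * dt
# toy ensemble of smooth random paths (NOT an actual oscillator response)
paths = (rng.standard_normal((n_mc, 1)) * np.cos(t)
         + rng.standard_normal((n_mc, 1)) * np.sin(t))

u, xi = np.exp(-t), 10          # a fixed test argument u(t) and a grid index ξ

# under-the-integral form of the Volterra derivative, in the spirit of (4)
lhs = np.mean(1j * paths[:, xi] * np.exp(1j * (paths * u).sum(axis=1) * dt))

# finite-difference derivative: perturb u by a discrete delta of weight eps at ξ
eps = 1e-4
du = np.zeros(n_t); du[xi] = eps / dt
rhs = (char_functional(paths, u + du, dt) - char_functional(paths, u, dt)) / eps
print(abs(lhs - rhs))
```

The two estimates agree to within the finite-difference truncation error, which is the discrete counterpart of differentiating under the integral sign in (4).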

Taking the $n$th ordinary derivative of $\dfrac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}}$ with respect to $\xi$ we will have

$$\frac{d^n}{d\xi^n}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} = \int_{C_c^{\infty}}\int_{C_c^{\infty}} i\,\frac{d^n x(\xi)}{d\xi^n}\,\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (5)$$

Additionally, we note that the third Volterra derivative of $\mathcal{Y}_{xy}(u,v)$ with respect to $u(\xi_1), u(\xi_2), u(\xi_3)$ is

$$\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi_1}\,\delta u_{\xi_2}\,\delta u_{\xi_3}} = \int_{C_c^{\infty}}\int_{C_c^{\infty}} i\,x(\xi_1)\ i\,x(\xi_2)\ i\,x(\xi_3)\,\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (6)$$

Setting $\xi_1 = \xi_2 = \xi_3 = \xi$ we will have

$$\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^3} = \int_{C_c^{\infty}}\int_{C_c^{\infty}} \left[i\,x(\xi)\right]^3\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (7)$$

Thus, we have

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{a}{i^3}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^3} =$$
$$= \int_{C_c^{\infty}}\int_{C_c^{\infty}}\left\{m\frac{d^2x(\xi)}{d\xi^2} + c\frac{dx(\xi)}{d\xi} + k\,x(\xi) + a\,x^3(\xi)\right\}\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (8)$$

Using the equation (1) governing the system we will have

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{a}{i^3}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^3} =$$
$$= \int_{C_c^{\infty}}\int_{C_c^{\infty}} y(\xi)\,\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy) \qquad (9)$$

Hence, the Functional Differential Equation for the Duffing oscillator will have the form

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{a}{i^3}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^3} = \frac{1}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta v_{\xi}} \qquad (10)$$

Concerning the initial conditions of the problem, we note that

$$\mathcal{Y}_{xy}\left(\upsilon_1\delta_{t_0} + \upsilon_2\delta'_{t_0},\ 0\right) = \int_{C_c^{\infty}}\int_{C_c^{\infty}}\exp\left(i\left\langle x,\ \upsilon_1\delta_{t_0} + \upsilon_2\delta'_{t_0}\right\rangle\right)\, \mathbf{c}_{xy}(dx, dy) =$$
$$= \int_{C_c^{\infty}}\exp\left(i\left[\upsilon_1\,x(t_0) + \upsilon_2\,\dot{x}(t_0)\right]\right)\, \mathbf{c}_{x}(dx) = \mathcal{Y}_0\left(\upsilon_1, \upsilon_2\right) \qquad (11)$$


5.2.2. Van der Pol oscillator

The van der Pol oscillator has the following equation of motion

$$m\ddot{x}(t;\beta) + c\left(x^2(t;\beta) - 1\right)\dot{x}(t;\beta) + k\,x(t;\beta) = y(t;\beta) \qquad (12)$$

$$x(t_0;\beta) = x_0(\beta)\ \text{ and }\ \dot{x}(t_0;\beta) = \dot{x}_0(\beta) \quad \text{a.s.} \qquad (13)$$

Assuming that $y(t;\beta)$ is a stochastic process defined on the probability space $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ with the associated probability space $\left(C_c^{\infty}, \mathcal{U}(C_c^{\infty}), \mathbf{c}_y\right)$, there is a unique joint characteristic functional

$$\mathcal{Y}_{xy}(u,v) = \int_{C_c^{\infty}}\int_{C_c^{\infty}}\exp\left(i\langle x, u\rangle + i\langle y, v\rangle\right)\, \mathbf{c}_{xy}(dx, dy)$$
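A well-known feature of (12) with $m = k = 1$ and small positive $c$ is a stable limit cycle of amplitude close to 2. The following sketch (assumed coefficients, no stochastic forcing; for illustration only) integrates the deterministic van der Pol equation and checks this classical fact:

```python
import numpy as np

def van_der_pol(mu=0.3, x0=0.5, v0=0.0, T=80.0, dt=1e-3):
    """RK4 for x'' + mu (x^2 - 1) x' + x = 0 (m = k = 1, c = mu, no forcing)."""
    n = int(T / dt)
    xs = np.empty(n + 1)
    xs[0], x, v = x0, x0, v0
    def f(x, v):
        return v, -mu * (x**2 - 1.0) * v - x
    for i in range(n):
        k1 = f(x, v)
        k2 = f(x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = f(x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = f(x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xs[i + 1] = x
    return xs

xs = van_der_pol()
amp = np.abs(xs[-20000:]).max()   # amplitude over the last 20 time units
print(amp)                        # classical result: close to 2
```

Under stochastic excitation the response wanders around this limit cycle, and it is exactly this ensemble behaviour that the joint characteristic functional describes.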

Similar calculations on the characteristic functional, as before, show that

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i^3}\left.\frac{d}{dt}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^2\,\delta u_{t}}\right|_{t=\xi} - \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} =$$
$$= \int_{C_c^{\infty}}\int_{C_c^{\infty}}\left\{m\frac{d^2x(\xi)}{d\xi^2} + c\left(x^2(\xi) - 1\right)\frac{dx(\xi)}{d\xi} + k\,x(\xi)\right\} e^{i\langle x, u\rangle + i\langle y, v\rangle}\, \mathbf{c}_{xy}(dx, dy) \qquad (14)$$

By using the equation (12) we will have

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i^3}\left.\frac{d}{dt}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^2\,\delta u_{t}}\right|_{t=\xi} - \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} =$$
$$= \int_{C_c^{\infty}}\int_{C_c^{\infty}} y(\xi)\, e^{i\langle x, u\rangle + i\langle y, v\rangle}\, \mathbf{c}_{xy}(dx, dy) \qquad (15)$$

Hence, the Functional Differential Equation for the van der Pol oscillator will have the form

$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{c}{i^3}\left.\frac{d}{dt}\frac{\delta^3\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}^2\,\delta u_{t}}\right|_{t=\xi} - \frac{c}{i}\frac{d}{d\xi}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} + \frac{k}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u_{\xi}} = \frac{1}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta v_{\xi}} \qquad (16)$$

Concerning the initial conditions of the problem, we note that

$$\mathcal{Y}_{xy}\left(\upsilon_1\delta_{t_0} + \upsilon_2\delta'_{t_0},\ 0\right) = \int_{C_c^{\infty}}\int_{C_c^{\infty}}\exp\left(i\left\langle x,\ \upsilon_1\delta_{t_0} + \upsilon_2\delta'_{t_0}\right\rangle\right)\, \mathbf{c}_{xy}(dx, dy) =$$
$$= \int_{C_c^{\infty}}\exp\left(i\left[\upsilon_1\,x(t_0) + \upsilon_2\,\dot{x}(t_0)\right]\right)\, \mathbf{c}_{x}(dx) = \mathcal{Y}_0\left(\upsilon_1, \upsilon_2\right) \qquad (17)$$


5.2.3. Non-Autonomous Dynamical Systems under Stochastic Excitation I – Case of m.s. Continuous Excitation

Let $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ be a probability space and $C_c^{\infty,N}(I)$, $I = [t_0, \infty)$, be the separable, real, inner product space of infinitely differentiable functions with values in $U \subset \mathbb{R}^N$. $\mathcal{U}\left(C_c^{\infty,N}\right)$ will denote the $\sigma$-field generated by the open subsets of $C_c^{\infty,N}(I)$. Let the dynamical system

$$\dot{\mathbf{x}}(t;\beta) = \mathbf{F}\left(t, \mathbf{x}(t;\beta), \mathbf{y}(t;\beta)\right) \quad \text{a.s.} \qquad (18)$$

$$\mathbf{x}(t_0;\beta) = \mathbf{x}_0(\beta) \quad \text{a.s.} \qquad (19)$$

where

a) $\mathbf{y}(\cdot\,;\cdot) : I\times\Omega \to \mathbb{R}^M$ is a mean square continuous stochastic process defined on the probability space $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ with the associated probability space $\left(C_c^{\infty,M}, \mathcal{U}(C_c^{\infty,M}), \mathbf{c}_y\right)$, and with characteristic functional $\mathcal{Y}_y$.

b) $\mathbf{x}_0(\cdot) : \Omega \to \mathbb{R}^N$ is a random variable with the associated probability space $\left(\mathbb{R}^N, \mathcal{U}(\mathbb{R}^N), \mathbf{c}_{x_0}\right)$ with characteristic function $\mathcal{Y}_0$.

c) $\mathbf{F}(\cdot,\cdot,\cdot) : I\times\mathbb{R}^N\times\mathbb{R}^M \to \mathbb{R}^N$ is a measurable, deterministic function that can be expanded into a series of the form

$$\mathbf{F}\left(t, \mathbf{x}, \mathbf{y}\right) = \sum_{k=0}^{K}\sum_{l=0}^{L}\left\{\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, \mathbf{x}_0, \mathbf{y}_0\right)\right\}\odot\left(\mathbf{y}-\mathbf{y}_0\right)^{(k)}\otimes\left(\mathbf{x}-\mathbf{x}_0\right)^{(l)} \qquad (20)$$

where $\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}$ is the tensor product $\underbrace{\nabla_{\mathbf{y}}\otimes\cdots\otimes\nabla_{\mathbf{y}}}_{k}\otimes\underbrace{\nabla_{\mathbf{x}}\otimes\cdots\otimes\nabla_{\mathbf{x}}}_{l}$, $\left(\mathbf{y}-\mathbf{y}_0\right)^{(k)}\otimes\left(\mathbf{x}-\mathbf{x}_0\right)^{(l)}$ is the tensor product $\underbrace{\left(\mathbf{y}-\mathbf{y}_0\right)\otimes\cdots\otimes\left(\mathbf{y}-\mathbf{y}_0\right)}_{k}\otimes\underbrace{\left(\mathbf{x}-\mathbf{x}_0\right)\otimes\cdots\otimes\left(\mathbf{x}-\mathbf{x}_0\right)}_{l}$, and the symbol $\odot$ denotes the inner product between the two tensors. In the sequel we assume, for simplicity, that $\mathbf{x}_0 = 0$ and $\mathbf{y}_0 = 0$.

Theorem 2.1 : The problem Σ1 is equivalent to the functional differential equation

$$\frac{1}{i}\frac{d}{dt}\left[\nabla_{u(t)}\mathcal{Y}_{xy}(u,v)\right] = \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{k+l}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, 0, 0\right)\odot\nabla_{u(t)}^{l}\nabla_{v(t)}^{k}\,\mathcal{Y}_{xy}(u,v) \qquad (21)$$

$$\mathcal{Y}_{xy}\left(\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot),\ 0\right) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N \qquad (22)$$

for every $\mathbf{u}\in C_c^{\infty,N}(I)$, $\mathbf{v}\in C_c^{\infty,M}(I)$ and $t\in I$, $I=[t_0,\infty)$, where $\mathcal{Y}_{xy}$ is the joint characteristic functional of the s.p.'s $\mathbf{x}(t;\beta)$, $\mathbf{y}(t;\beta)$, $t\in I$,

$$\nabla_{u(\xi)}\mathcal{Y}_{xy} = \left(\frac{\delta\mathcal{Y}_{xy}}{\delta u_1(\xi)}, \frac{\delta\mathcal{Y}_{xy}}{\delta u_2(\xi)}, \ldots, \frac{\delta\mathcal{Y}_{xy}}{\delta u_N(\xi)}\right) \quad \text{and} \quad \nabla_{v(\xi)}\mathcal{Y}_{xy} = \left(\frac{\delta\mathcal{Y}_{xy}}{\delta v_1(\xi)}, \frac{\delta\mathcal{Y}_{xy}}{\delta v_2(\xi)}, \ldots, \frac{\delta\mathcal{Y}_{xy}}{\delta v_M(\xi)}\right)$$

( Σ1 )


are the Volterra gradients of the functional $\mathcal{Y}_{xy}$ with respect to each argument $u, v$, and $\nabla_{u(t)}^{l}\nabla_{v(t)}^{k}$ is the operator defined by the $(l,k)$-tensor product of the operators $\nabla_{u(\xi)}$ and $\nabla_{v(\xi)}$, i.e.

$$\nabla_{u(t)}^{l}\nabla_{v(t)}^{k} = \underbrace{\nabla_{u(t)}\otimes\cdots\otimes\nabla_{u(t)}}_{l}\otimes\underbrace{\nabla_{v(t)}\otimes\cdots\otimes\nabla_{v(t)}}_{k}$$

With $\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot) = \left(\upsilon_1, \upsilon_2, \ldots, \upsilon_N\right)\delta(\cdot - t_0)$ we denote the vector-valued Dirac delta function centred at $t_0$.

Proof : Let $\mathbf{c}_{xy}$ be the joint probability measure. Multiplying every equation

$$\dot{x}_n(t;\beta) = F_n\left(t, \mathbf{x}(t;\beta), \mathbf{y}(t;\beta)\right), \qquad n = 1, 2, \ldots, N$$

with $\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)$ and integrating over $C_c^{\infty,N}\times C_c^{\infty,M}$ with respect to the probability measure $\mathbf{c}_{xy}$, we will have, for every $n = 1, 2, \ldots, N$,

$$\int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}} \dot{x}_n(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) = \int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}} F_n\left(t,\mathbf{x},\mathbf{y}\right)\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) \qquad (23)$$

For the l.h.s. of the above equation we can easily notice that

$$\int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}} \dot{x}_n(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) = \frac{1}{i}\frac{d}{dt}\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta u_n(t)} \qquad (24)$$

For the r.h.s. we have, using (20),

$$\int\!\!\int F_n\left(t,\mathbf{x},\mathbf{y}\right)\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \int\!\!\int \sum_{k=0}^{K}\sum_{l=0}^{L}\left\{\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}F_n\left(t,0,0\right)\right\}\odot\mathbf{y}^{(k)}\otimes\mathbf{x}^{(l)}\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \sum_{k=0}^{K}\sum_{l=0}^{L}\ \sum_{\substack{\mathbf{i}\in I^N,\ \mathbf{j}\in I^M \\ |\mathbf{i}| = l\ \wedge\ |\mathbf{j}| = k}} \frac{\partial^{k+l}F_n\left(t,0,0\right)}{\partial\mathbf{x}^{\mathbf{i}}\,\partial\mathbf{y}^{\mathbf{j}}}\int\!\!\int \mathbf{x}^{\mathbf{i}}\,\mathbf{y}^{\mathbf{j}}\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \sum_{k=0}^{K}\sum_{l=0}^{L}\ \sum_{\substack{\mathbf{i}\in I^N,\ \mathbf{j}\in I^M \\ |\mathbf{i}| = l\ \wedge\ |\mathbf{j}| = k}} \frac{\partial^{k+l}F_n\left(t,0,0\right)}{\partial\mathbf{x}^{\mathbf{i}}\,\partial\mathbf{y}^{\mathbf{j}}}\,\frac{1}{i^{k+l}}\,\frac{\delta^{k+l}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u}^{\mathbf{i}}(t)\,\delta\mathbf{v}^{\mathbf{j}}(t)} =$$
$$= \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{k+l}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}F_n\left(t,0,0\right)\odot\nabla_{u(t)}^{l}\nabla_{v(t)}^{k}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})$$

where we have used the multi-index notation

$$\partial\mathbf{x}^{\mathbf{i}} = \partial x_1^{i_1}\,\partial x_2^{i_2}\cdots\partial x_N^{i_N}, \qquad \mathbf{x}^{\mathbf{i}} = x_1^{i_1}\, x_2^{i_2}\cdots x_N^{i_N}$$


Hence, we have proved the equality

$$\frac{1}{i}\frac{d}{dt}\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta u_n(t)} = \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{k+l}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}F_n\left(t,0,0\right)\odot\nabla_{u(t)}^{l}\nabla_{v(t)}^{k}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v}) \qquad (25)$$

or equivalently,

$$\frac{1}{i}\frac{d}{dt}\left[\nabla_{u(t)}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})\right] = \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{k+l}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t,0,0\right)\odot\nabla_{u(t)}^{l}\nabla_{v(t)}^{k}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v}) \qquad (26)$$

Concerning the initial conditions, we have, by direct calculation of the l.h.s. of equation (22),

$$\mathcal{Y}_{xy}\left(\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot),\ 0\right) = \int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}}\exp\left(i\left\langle\mathbf{x},\ \boldsymbol{\upsilon}\,\delta_{t_0}(\cdot)\right\rangle\right)\mathbf{c}_{xy}(dx,dy) = \int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}}\exp\left(i\sum_{n=1}^{N}\upsilon_n\, x_n(t_0)\right)\mathbf{c}_{xy}(dx,dy) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right) \qquad (27)$$
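The identity driving the proof — derivatives of a characteristic function(al) evaluated at the origin reproduce moments, with a factor $i$ per derivative — can be sanity-checked in finite dimensions. A sketch with an assumed correlated Gaussian pair standing in for $(x, y)$, comparing a mixed finite difference of the sampled characteristic function with $i^2 E[xy]$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.standard_normal(n)
y = 0.6 * x + 0.8 * rng.standard_normal(n)   # correlated pair, E[xy] = 0.6

def phi(u, v):
    """sampled joint characteristic function E exp(i(ux + vy))"""
    return np.mean(np.exp(1j * (u * x + v * y)))

h = 0.05
# central mixed difference approximating d^2 phi / du dv at (0, 0)
d2 = (phi(h, h) - phi(h, -h) - phi(-h, h) + phi(-h, -h)) / (4 * h * h)
print(d2.real, -(x * y).mean())   # both should be close to i^2 E[xy] = -0.6
```

In the functional setting, the same bookkeeping of powers of $i$ gives the factors $1/i^{k+l}$ appearing in (25)-(26).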

Example 2.2 [One-Dimensional Non-Linear Oscillator] : To illustrate the above theorem we will study the special case of the one-dimensional non-linear oscillator, i.e. the system

$$m\ddot{x}(t;\beta) + f\left(x(t;\beta), \dot{x}(t;\beta)\right) = y(t;\beta) \quad \text{a.s.} \qquad (28)$$

$$\left[x(t_0;\beta),\ \dot{x}(t_0;\beta)\right] = \left[x_0(\beta),\ \dot{x}_0(\beta)\right] \quad \text{a.s.} \qquad (29)$$

where we have made the same hypotheses as in Theorem 2.1. We also assume that $f(\cdot,\cdot)$ can be expanded in the form

$$f\left(x_1, x_2\right) = \sum_{i=0}^{N}\sum_{j=0}^{M} a_{ij}\, x_1^{i}\, x_2^{j} \qquad (30)$$

Transforming the above equation into state-space form we have

$$\dot{x}_1(t;\beta) = x_2(t;\beta)$$
$$m\dot{x}_2(t;\beta) + f\left(x_1(t;\beta), x_2(t;\beta)\right) = y(t;\beta)$$
$$\left[x_1(t_0;\beta),\ x_2(t_0;\beta)\right] = \left[x_0(\beta),\ \dot{x}_0(\beta)\right]$$
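The expansion (30) is simply a bivariate polynomial in the state variables; a minimal sketch, with a hypothetical coefficient array $a_{ij}$ chosen to reproduce a Duffing-type restoring and damping force (the specific values are assumptions for illustration):

```python
import numpy as np

def f_poly(x1, x2, a):
    """f(x1, x2) = sum_{i,j} a[i, j] x1^i x2^j, the expansion (30)."""
    N, M = a.shape
    return sum(a[i, j] * x1**i * x2**j for i in range(N) for j in range(M))

# hypothetical coefficients: f(x, x') = k x + a3 x^3 + c x' (Duffing-type)
a = np.zeros((4, 2))
a[1, 0] = 1.0   # k x
a[3, 0] = 0.5   # a3 x^3
a[0, 1] = 0.2   # c x'
val = f_poly(2.0, 3.0, a)
print(val)      # 1*2 + 0.5*8 + 0.2*3 = 6.6
```

Each monomial $a_{nm}\,x_1^n x_2^m$ of this expansion is what contributes one mixed Volterra-derivative term to the functional differential equations below.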

By direct application of Theorem 2.1 we will have the two functional differential equations

$$\frac{d}{dt}\frac{\delta\mathcal{Y}_{xy}\left(u_1, u_2, v\right)}{\delta u_1(t)} = \frac{\delta\mathcal{Y}_{xy}\left(u_1, u_2, v\right)}{\delta u_2(t)}$$

$$\frac{m}{i}\frac{d}{dt}\frac{\delta\mathcal{Y}_{xy}\left(u_1, u_2, v\right)}{\delta u_2(t)} + \sum_{n=0}^{N}\sum_{m=0}^{M}\frac{a_{nm}}{i^{n+m}}\left.\frac{\delta^{n+m}\mathcal{Y}_{xy}\left(u_1, u_2, v\right)}{\delta u_1(\xi_1)\cdots\delta u_1(\xi_n)\,\delta u_2(\xi_{n+1})\cdots\delta u_2(\xi_{n+m})}\right|_{\xi_1=\cdots=\xi_{n+m}=t} = \frac{1}{i}\frac{\delta\mathcal{Y}_{xy}\left(u_1, u_2, v\right)}{\delta v(t)}$$

Substituting the first functional equation into the second we have

( Σ1a )


$$\frac{m}{i}\frac{d^2}{d\xi^2}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta u(\xi)} + \sum_{n=0}^{N}\sum_{m=0}^{M}\frac{a_{nm}}{i^{n+m}}\left.\frac{d}{d\xi_{n+1}}\cdots\frac{d}{d\xi_{n+m}}\frac{\delta^{n+m}\mathcal{Y}_{xy}(u,v)}{\delta u(\xi_1)\cdots\delta u(\xi_{n+m})}\right|_{\xi_1=\cdots=\xi_{n+m}=\xi} = \frac{1}{i}\frac{\delta\mathcal{Y}_{xy}(u,v)}{\delta v(\xi)}$$

for every $u\in C_c^{\infty}(I)$, $v\in C_c^{\infty}(I)$ and $t\in I$, with

$$\mathcal{Y}_{xy}\left(\upsilon_1\delta_{t_0} + \upsilon_2\delta'_{t_0},\ 0\right) = \mathcal{Y}_0\left(\upsilon_1, \upsilon_2\right), \qquad \left(\upsilon_1, \upsilon_2\right)\in\mathbb{R}^2$$


5.2.4. Non-Autonomous Dynamical Systems under Stochastic Excitation II – Case of Independent Increment Excitation

Let $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ be a probability space and $C_c^{\infty,N}(I)$ be the separable, real, inner product space of square integrable functions with values in $U\subset\mathbb{R}^N$. $\mathcal{U}\left(C_c^{\infty,N}(I)\right)$ will denote the $\sigma$-field generated by the open subsets of $C_c^{\infty,N}(I)$. Let the dynamical system

$$d\mathbf{x}(t;\beta) = \mathbf{F}\left(t, \mathbf{x}(t;\beta)\right)dt + \mathbf{G}\left(t, \mathbf{x}(t;\beta)\right)d\mathbf{y}(t;\beta) \quad \text{a.s.} \qquad (31)$$

$$\mathbf{x}(t_0;\beta) = \mathbf{x}_0(\beta) \quad \text{a.s.} \qquad (32)$$

where

a) $\mathbf{y}(\cdot\,;\cdot) : I\times\Omega\to\mathbb{R}^M$ is a stochastic process with independent increments defined on the probability space $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ with the associated probability space $\left(C_c^{\infty,M}, \mathcal{U}(C_c^{\infty,M}), \mathbf{c}_y\right)$, and with characteristic functional $\mathcal{Y}_y$.

b) $\mathbf{x}_0(\cdot) : \Omega\to\mathbb{R}^N$ is a random variable with the associated probability space $\left(\mathbb{R}^N, \mathcal{U}(\mathbb{R}^N), \mathbf{c}_{x_0}\right)$ with characteristic function $\mathcal{Y}_0$.

c) $\mathbf{F}(\cdot,\cdot) : I\times\mathbb{R}^N\to\mathbb{R}^N$ is a measurable, deterministic function that can be expanded into a series of the form

$$\mathbf{F}\left(t, \mathbf{x}\right) = \sum_{l=0}^{L}\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, \mathbf{x}_0\right)\odot\left(\mathbf{x}-\mathbf{x}_0\right)^{(l)} \qquad (33)$$

d) $\mathbf{G}(\cdot,\cdot) : I\times\mathbb{R}^N\to\mathbb{R}^{N\times M}$ is a measurable, deterministic function that can be expanded into a series of the form

$$\mathbf{G}\left(t, \mathbf{x}\right) = \sum_{k=0}^{K}\nabla_{\mathbf{x}}^{k}\,\mathbf{G}\left(t, \mathbf{x}_0\right)\odot\left(\mathbf{x}-\mathbf{x}_0\right)^{(k)} \qquad (34)$$

where $\nabla_{\mathbf{x}}^{l}$ is the tensor product $\underbrace{\nabla_{\mathbf{x}}\otimes\cdots\otimes\nabla_{\mathbf{x}}}_{l}$, $\left(\mathbf{x}-\mathbf{x}_0\right)^{(l)}$ is the tensor product $\underbrace{\left(\mathbf{x}-\mathbf{x}_0\right)\otimes\cdots\otimes\left(\mathbf{x}-\mathbf{x}_0\right)}_{l}$, and the symbol $\odot$ denotes the inner product between the two tensors. In the sequel we assume, for simplicity, that $\mathbf{x}_0 = 0$ and $\mathbf{y}_0 = 0$.

Remark 2.3 : We must note that, since $\mathbf{y}(\cdot\,;\cdot) : I\times\Omega\to\mathbb{R}^M$ is a stochastic process with independent increments, it has nowhere differentiable sample functions. This explains the fact that the equation describing the dynamical system is written in terms of differentials.

( Σ2 )
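Remark 2.3 can be illustrated numerically for the Wiener process (the prototypical independent-increment process): as the time step is refined, the difference quotients $\Delta W/\Delta t$ blow up roughly like $\Delta t^{-1/2}$, so no derivative exists in the limit. A small sketch (illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)

def max_difference_quotient(dt, T=1.0):
    """Simulate Wiener increments on step dt and return max |dW/dt|."""
    dW = np.sqrt(dt) * rng.standard_normal(int(T / dt))
    return np.abs(dW / dt).max()

qs = [max_difference_quotient(dt) for dt in (1e-2, 1e-3, 1e-4)]
print(qs)   # grows roughly like dt^(-1/2): no derivative exists in the limit
```

This divergence is why (31) is stated with differentials $d\mathbf{x}$ and $d\mathbf{y}$ rather than with derivatives.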


Theorem 2.4 : The problem Σ2 defined above is equivalent to the functional differential equation

$$\frac{1}{i}\,d_t\!\left[\nabla_{u(t)}\mathcal{Y}_{xy}(u,v)\right] = \sum_{l=0}^{L}\frac{1}{i^{l}}\,\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, 0\right)\odot\nabla_{u(t)}^{l}\,\mathcal{Y}_{xy}(u,v)\, dt + \sum_{k=0}^{K}\frac{1}{i^{k+1}}\,\nabla_{\mathbf{x}}^{k}\,\mathbf{G}\left(t, 0\right)\odot\nabla_{u(t)}^{k}\, d_t\!\left[\nabla_{v(t)}\mathcal{Y}_{xy}(u,v)\right] \qquad (35)$$

$$\mathcal{Y}_{xy}\left(\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot),\ 0\right) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N \qquad (36)$$

for every $\mathbf{u}\in C_c^{\infty,N}(I)$, $\mathbf{v}\in C_c^{\infty,M}(I)$ and $t\in I$, where $\mathcal{Y}_{xy}$ is the joint characteristic functional of the s.p.'s $\mathbf{x}(t;\beta)$, $\mathbf{y}(t;\beta)$, $t\in I$,

$$\nabla_{u(\xi)}\mathcal{Y}_{xy} = \left(\frac{\delta\mathcal{Y}_{xy}}{\delta u_1(\xi)}, \frac{\delta\mathcal{Y}_{xy}}{\delta u_2(\xi)}, \ldots, \frac{\delta\mathcal{Y}_{xy}}{\delta u_N(\xi)}\right) \quad \text{and} \quad \nabla_{v(\xi)}\mathcal{Y}_{xy} = \left(\frac{\delta\mathcal{Y}_{xy}}{\delta v_1(\xi)}, \frac{\delta\mathcal{Y}_{xy}}{\delta v_2(\xi)}, \ldots, \frac{\delta\mathcal{Y}_{xy}}{\delta v_M(\xi)}\right)$$

are the Volterra gradients of the functional $\mathcal{Y}_{xy}$ with respect to each argument $u, v$, and $\nabla_{u(t)}^{l}$ is the operator defined by the $l$-tensor product of the operator $\nabla_{u(\xi)}$, i.e.

$$\nabla_{u(t)}^{l} = \underbrace{\nabla_{u(t)}\otimes\cdots\otimes\nabla_{u(t)}}_{l}$$

With $\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot) = \left(\upsilon_1, \upsilon_2, \ldots, \upsilon_N\right)\delta(\cdot - t_0)$ we denote the vector-valued Dirac delta function centred at $t_0$.

Proof : Let $\mathbf{c}_{xy}$ be the joint probability measure. Multiplying every equation

$$d_t x_n(t;\beta) = F_n\left(t, \mathbf{x}(t;\beta)\right)dt + \sum_{m=1}^{M} G_{nm}\left(t, \mathbf{x}(t;\beta)\right)d_t y_m(t;\beta), \qquad n = 1, 2, \ldots, N$$

with $\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)$ and integrating over $C_c^{\infty,N}\times C_c^{\infty,M}$ with respect to the probability measure $\mathbf{c}_{xy}$, we will have, for every $n = 1, 2, \ldots, N$,

$$\int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}} d_t x_n(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \int\!\!\int F_n\left(t,\mathbf{x}\right)dt\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) + \int\!\!\int \sum_{m=1}^{M} G_{nm}\left(t,\mathbf{x}\right)d_t y_m(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy)$$

For the l.h.s. of the above equation we can easily notice that

$$\int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}} d_t x_n(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) = \frac{1}{i}\,d_t\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta u_n(t)}$$

For the first term of the r.h.s. we have, using (33),

$$\int\!\!\int F_n\left(t,\mathbf{x}\right)\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$


$$= \int\!\!\int \sum_{l=0}^{L}\nabla_{\mathbf{x}}^{l}F_n\left(t,0\right)\odot\mathbf{x}^{(l)}\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \sum_{l=0}^{L}\sum_{|\mathbf{i}| = l}\frac{\partial^{l}F_n\left(t,0\right)}{\partial\mathbf{x}^{\mathbf{i}}}\int\!\!\int \mathbf{x}^{\mathbf{i}}\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \sum_{l=0}^{L}\sum_{|\mathbf{i}| = l}\frac{\partial^{l}F_n\left(t,0\right)}{\partial\mathbf{x}^{\mathbf{i}}}\,\frac{1}{i^{l}}\,\frac{\delta^{l}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u}^{\mathbf{i}}(t)} = \sum_{l=0}^{L}\frac{1}{i^{l}}\,\nabla_{\mathbf{x}}^{l}F_n\left(t,0\right)\odot\nabla_{u(t)}^{l}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})$$

where we have used the multi-index notation

$$\partial\mathbf{x}^{\mathbf{i}} = \partial x_1^{i_1}\,\partial x_2^{i_2}\cdots\partial x_N^{i_N}, \qquad \mathbf{x}^{\mathbf{i}} = x_1^{i_1}\, x_2^{i_2}\cdots x_N^{i_N}$$

For the second term of the r.h.s. we have, using (34),

$$\int\!\!\int \sum_{m=1}^{M} G_{nm}\left(t,\mathbf{x}\right)d_t y_m(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \int\!\!\int \sum_{m=1}^{M}\sum_{k=0}^{K}\nabla_{\mathbf{x}}^{k}G_{nm}\left(t,0\right)\odot\mathbf{x}^{(k)}\, d_t y_m(t;\beta)\,\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle + i\langle\mathbf{y},\mathbf{v}\rangle\right)\mathbf{c}_{xy}(dx,dy) =$$
$$= \sum_{m=1}^{M}\sum_{k=0}^{K}\sum_{|\mathbf{i}| = k}\frac{\partial^{k}G_{nm}\left(t,0\right)}{\partial\mathbf{x}^{\mathbf{i}}}\,\frac{1}{i^{k+1}}\, d_t\!\left[\frac{\delta^{k+1}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u}^{\mathbf{i}}(t)\,\delta v_m(t)}\right] = \sum_{m=1}^{M}\sum_{k=0}^{K}\frac{1}{i^{k+1}}\,\nabla_{\mathbf{x}}^{k}G_{nm}\left(t,0\right)\odot\nabla_{u(t)}^{k}\, d_t\!\left[\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta v_m(t)}\right]$$

Hence we have proved the equality

$$\frac{1}{i}\,d_t\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta u_n(t)} = \sum_{l=0}^{L}\frac{1}{i^{l}}\,\nabla_{\mathbf{x}}^{l}F_n\left(t,0\right)\odot\nabla_{u(t)}^{l}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})\, dt + \sum_{m=1}^{M}\sum_{k=0}^{K}\frac{1}{i^{k+1}}\,\nabla_{\mathbf{x}}^{k}G_{nm}\left(t,0\right)\odot\nabla_{u(t)}^{k}\, d_t\!\left[\frac{\delta\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})}{\delta v_m(t)}\right]$$

or equivalently,


$$\frac{1}{i}\,d_t\!\left[\nabla_{u(t)}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})\right] = \sum_{l=0}^{L}\frac{1}{i^{l}}\,\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, 0\right)\odot\nabla_{u(t)}^{l}\,\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})\, dt + \sum_{k=0}^{K}\frac{1}{i^{k+1}}\,\nabla_{\mathbf{x}}^{k}\,\mathbf{G}\left(t, 0\right)\odot\nabla_{u(t)}^{k}\, d_t\!\left[\nabla_{v(t)}\mathcal{Y}_{xy}(\mathbf{u},\mathbf{v})\right]$$

Concerning the initial conditions, we have, by direct calculation of the l.h.s. of equation (36),

$$\mathcal{Y}_{xy}\left(\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot),\ 0\right) = \int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}}\exp\left(i\left\langle\mathbf{x},\ \boldsymbol{\upsilon}\,\delta_{t_0}(\cdot)\right\rangle\right)\mathbf{c}_{xy}(dx,dy) = \int\!\!\int_{C_c^{\infty,N}\times C_c^{\infty,M}}\exp\left(i\sum_{n=1}^{N}\upsilon_n\, x_n(t_0)\right)\mathbf{c}_{xy}(dx,dy) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right) \qquad (37)$$

Example 2.5 [Linear Filtering System] : Let us consider, as a special case of the above theorem, a linear time-invariant system, i.e. a system of the form

$$d\mathbf{x}(t;\beta) = \mathbf{F}\cdot\mathbf{x}(t;\beta)\, dt + \mathbf{G}\cdot d\mathbf{w}(t;\beta) \quad \text{a.s.} \qquad (38)$$

$$\mathbf{x}(t_0;\beta) = \mathbf{x}_0(\beta) \quad \text{a.s.} \qquad (39)$$

where $\mathbf{w}(\cdot\,;\cdot) : I\times\Omega\to\mathbb{R}^M$ is the standard Wiener process, $\mathbf{F}\in\mathbb{R}^{N\times N}$ and $\mathbf{G}\in\mathbb{R}^{N\times M}$. The above system is essential for linear system theory and linear filter design, since we have an explicit expression for the spectrum of the response in terms of the excitation spectrum (Wiener-Khinchin relations). For a detailed discussion we refer to PUGACHEV, V.S. & SINITSYN, (Stochastic Differential Systems). Now let us derive the associated functional differential equation for the joint characteristic functional of the response and the excitation. By direct application of Theorem 2.4 we have the functional differential equation

$$d_t\!\left[\nabla_{u(t)}\mathcal{Y}_{xy}(u,v)\right] = \mathbf{F}\cdot\nabla_{u(t)}\mathcal{Y}_{xy}(u,v)\, dt + \mathbf{G}\cdot d_t\!\left[\nabla_{v(t)}\mathcal{Y}_{xy}(u,v)\right] \qquad (40)$$

$$\mathcal{Y}_{xy}\left(\boldsymbol{\upsilon}\,\delta(\cdot - t_0),\ 0\right) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N \qquad (41)$$

for every $\mathbf{u}\in C_c^{\infty,N}(I)$, $\mathbf{v}\in C_c^{\infty,M}(I)$ and $t\in I$, where $\mathcal{Y}_{xy}$ is the joint characteristic functional of the s.p.'s $\mathbf{x}(t;\beta)$, $\mathbf{w}(t;\beta)$, $t\in I$.
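For the linear system (38)-(39) the second-order statistics are classical: the stationary covariance $\mathbf{P}$ solves the Lyapunov equation $\mathbf{F}\mathbf{P} + \mathbf{P}\mathbf{F}^{T} + \mathbf{G}\mathbf{G}^{T} = 0$. A hedged Euler-Maruyama sketch (assumed $\mathbf{F}$, $\mathbf{G}$ for a noise-driven linear oscillator; parameters are illustrative) comparing simulated variances with the closed-form stationary values:

```python
import numpy as np

rng = np.random.default_rng(3)
k, c, g = 1.0, 0.5, 0.5                      # assumed oscillator parameters
F = np.array([[0.0, 1.0], [-k, -c]])
G = np.array([0.0, g])
dt, n_steps, n_paths = 2e-3, 20_000, 2000    # integrate to T = 40

X = np.zeros((n_paths, 2))
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X += X @ F.T * dt + np.outer(dW, G)      # Euler-Maruyama step

var_x, var_v = X.var(axis=0)
# Lyapunov (stationary) values: Var[x] = g^2/(2ck) = 0.25, Var[x'] = g^2/(2c) = 0.25
print(var_x, var_v)
```

These second moments are exactly the quantities obtained by differentiating the joint characteristic functional in (40) twice at the origin.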


5.3. Reduction of the Hopf-Type Functional Differential Equation for Special Cases

In this section we will study how the general Functional Differential Equation can be reduced to partial differential equations when the external excitation fulfils special conditions. We will start our discussion with the case of deterministic systems (absence of stochastic excitation) with random initial conditions. Using the Hopf-type FDE we will prove the Liouville equation for the characteristic function of the response.

5.3.1. Systems with Random Initial Conditions – Liouville Equation

Let $\left(\Omega, \mathcal{U}(\Omega), \mathbf{c}\right)$ be a probability space and $C_c^{\infty,N}(I)$ be the separable, real, inner product ($\mathcal{L}^2$ inner product) space of infinitely differentiable functions with values in $U\subset\mathbb{R}^N$. $\mathcal{U}\left(C_c^{\infty,N}\right)$ will denote the $\sigma$-field generated by the open subsets of $C_c^{\infty,N}(I)$. Let the dynamical system

$$\dot{\mathbf{x}}(t;\beta) = \mathbf{F}\left(t, \mathbf{x}(t;\beta)\right) \quad \text{a.s.} \qquad (1)$$

$$\mathbf{x}(t_0;\beta) = \mathbf{x}_0(\beta) \quad \text{a.s.} \qquad (2)$$

where

a) $\mathbf{x}_0(\cdot) : \Omega\to\mathbb{R}^N$ is a random variable with the associated probability space $\left(\mathbb{R}^N, \mathcal{U}(\mathbb{R}^N), \mathbf{c}_{x_0}\right)$ with characteristic function $\mathcal{Y}_0\left(\boldsymbol{\upsilon}\right)$, $\boldsymbol{\upsilon}\in\mathbb{R}^N$.

b) $\mathbf{F}(\cdot,\cdot) : I\times\mathbb{R}^N\to\mathbb{R}^N$ is a measurable, deterministic function that can be expanded into a series of the form

$$\mathbf{F}\left(t, \mathbf{x}\right) = \sum_{l=0}^{L}\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, \mathbf{x}_0\right)\odot\left(\mathbf{x}-\mathbf{x}_0\right)^{(l)} \qquad (3)$$

In the sequel we assume, for simplicity, that $\mathbf{x}_0 = 0$. For the case where an explicit solution of the above system of equations can be found, the solution would have the form

$$\mathbf{x}(t;\beta) = \mathbf{g}\left[t, \mathbf{x}_0(\beta)\right] \qquad (4)$$

The mapping (4) characterizes, at arbitrary time $t$, the state of a system whose initial state is described by the random variable (2). In other words, these functions represent at time $t$ a transformation of the random vector (2). It is clear that the probability distribution of the response can be obtained by use of the theorem on transformations of random vectors. Let us assume that the transformation defined by (4) is continuous with respect to $\mathbf{x}_0(\beta)$ (for each $t$), has continuous partial derivatives with respect to $\mathbf{x}_0(\beta)$, and defines a one-to-one mapping. Then, if the inverse transformation is written as

$$\mathbf{x}_0(\beta) = \mathbf{h}\left[t, \mathbf{x}(t;\beta)\right] \qquad (5)$$

the probability density of the response takes the form

( Σ3 )


$$f_{\mathbf{x}}\left(\mathbf{x}, t\right) = f_0\left(\mathbf{h}\left[\mathbf{x}, t\right]\right)\cdot\left|\frac{\partial\,\mathbf{h}\left[\mathbf{x}, t\right]}{\partial\,\mathbf{x}}\right| \qquad (6)$$
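Formula (6) can be checked on the solvable flow $\dot{x} = -x$, for which $g[t, x_0] = x_0 e^{-t}$ and $h[x, t] = x\,e^{t}$; with $x_0$ standard normal (an assumed initial distribution), the push-forward density must coincide with the exact $N(0, e^{-2t})$ density:

```python
import numpy as np

def f0(x):
    """standard normal density of the initial condition x0"""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f_response(x, t):
    """eq.-(6)-style push-forward for x' = -x:  h[x, t] = x e^t, |dh/dx| = e^t."""
    return f0(x * np.exp(t)) * np.exp(t)

t = 0.7
xs = np.linspace(-2.0, 2.0, 9)
s = np.exp(-t)                                 # x(t) = x0 e^{-t} ~ N(0, s^2)
exact = np.exp(-xs**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
err = np.max(np.abs(f_response(xs, t) - exact))
print(err)
```

The two expressions are algebraically identical, so the discrepancy is at the level of floating-point rounding.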

For the case where no explicit solution of (1) can be found, we return to the Hopf-type FDE associated with the stochastic initial value problem described above. Using Theorem 2.1 we have

$$\frac{1}{i}\frac{d}{dt}\left[\nabla_{u(t)}\mathcal{Y}_{x}(u)\right] = \sum_{l=0}^{L}\frac{1}{i^{l}}\,\nabla_{\mathbf{x}}^{l}\,\mathbf{F}\left(t, 0\right)\odot\nabla_{u(t)}^{l}\,\mathcal{Y}_{x}(u) \qquad (7)$$

$$\mathcal{Y}_{x}\left(\boldsymbol{\upsilon}\,\delta_{t_0}(\cdot)\right) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N \qquad (8)$$

for every $\mathbf{u}\in C_c^{\infty,N}(I)$ and $t\in I$, where $\mathcal{Y}_{x}$ is the characteristic functional of the s.p. $\mathbf{x}(t;\beta)$, $t\in I$. For the case described above the following theorem holds.

Theorem 3.1 [Liouville equation] : Let the dynamical system Σ3 be described by (1)-(2). Then the FDE (7)-(8) which describes the system can be reduced to the equation

$$\frac{\partial\phi\left(\boldsymbol{\upsilon}, t\right)}{\partial t} + \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{\upsilon_n}{i^{l+1}}\,\nabla_{\mathbf{x}}^{l}F_n\left(t, 0\right)\odot\nabla_{\boldsymbol{\upsilon}}^{l}\,\phi\left(\boldsymbol{\upsilon}, t\right) = 0, \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N,\ t\in I \qquad (9)$$

$$\phi\left(\boldsymbol{\upsilon}, t_0\right) = \mathcal{Y}_0\left(\boldsymbol{\upsilon}\right), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^N \qquad (10)$$

where $\phi\left(\boldsymbol{\upsilon}, t\right)$ is the characteristic function of the response.
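Equation (9) can be verified on a solvable special case (an assumption for illustration, not from the thesis): for $F(x) = -x$ only the $l = 1$ term survives, and since $\upsilon/i^{2}\cdot\partial F/\partial x(0) = \upsilon$, the equation reduces to $\phi_t + \upsilon\,\phi_\upsilon = 0$. With $x_0 \sim N(0,1)$ the characteristic function $\phi(\upsilon, t) = \exp\left(-\upsilon^2 e^{-2t}/2\right)$ is known in closed form, so the residual can be checked by finite differences:

```python
import numpy as np

def phi(v, t):
    """char. function of x(t) = x0 e^{-t} with x0 ~ N(0,1)."""
    return np.exp(-v**2 * np.exp(-2 * t) / 2)

# for F(x) = -x only the l = 1 term of (9) survives and the equation
# reduces to phi_t + v phi_v = 0; check the residual by central differences
h = 1e-5
v, t = np.meshgrid(np.linspace(-3, 3, 13), np.linspace(0.0, 1.0, 5))
phi_t = (phi(v, t + h) - phi(v, t - h)) / (2 * h)
phi_v = (phi(v + h, t) - phi(v - h, t)) / (2 * h)
res = np.abs(phi_t + v * phi_v).max()
print(res)
```

The residual vanishes up to finite-difference truncation error, confirming the reduction for this linear drift.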

Proof : Let us consider the component-wise form of the FDE (7), used in the derivation of Theorem 2.1,

$$\frac{1}{i}\frac{d}{dt}\frac{\delta\mathcal{Y}_{x}(u)}{\delta u_n(t)} = \sum_{l=0}^{L}\sum_{|\mathbf{i}| = l}\frac{\partial^{l}F_n\left(t, 0\right)}{\partial\mathbf{x}^{\mathbf{i}}}\,\frac{1}{i^{l}}\,\frac{\delta^{l}\mathcal{Y}_{x}(u)}{\delta\mathbf{u}^{\mathbf{i}}(t)}, \qquad n = 1, 2, \ldots, N$$

Setting $\mathbf{u} = \left(\upsilon_1, \upsilon_2, \ldots, \upsilon_N\right)\delta(\cdot - t)$ in this system of functional differential equations and substituting $\mathcal{Y}_{x}(\mathbf{u}) = \int_{C_c^{\infty,N}}\exp\left(i\langle\mathbf{x},\mathbf{u}\rangle\right)\mathbf{c}_{x}(dx)$, we have

$$\int_{C_c^{\infty,N}} \dot{x}_n(t)\,\exp\left(i\sum_{k=1}^{N}\upsilon_k\, x_k(t)\right)\mathbf{c}_{x}(dx) = \sum_{l=0}^{L}\sum_{|\mathbf{i}| = l}\frac{\partial^{l}F_n\left(t, 0\right)}{\partial\mathbf{x}^{\mathbf{i}}}\int_{C_c^{\infty,N}}\left[\mathbf{x}(t)\right]^{\mathbf{i}}\exp\left(i\sum_{k=1}^{N}\upsilon_k\, x_k(t)\right)\mathbf{c}_{x}(dx)$$

where we have used the multi-index notation

$$\partial\mathbf{x}^{\mathbf{i}} = \partial x_1^{i_1}\,\partial x_2^{i_2}\cdots\partial x_N^{i_N}, \qquad \mathbf{x}^{\mathbf{i}} = x_1^{i_1}\, x_2^{i_2}\cdots x_N^{i_N}$$

Multiplying both sides with niυ and summing up for all 1, 2, ,n N= … , we will have,

( ) ( ) ( ),1 1

expN

c

N N

n n k kn kC

i x t i x t dυ υ∞= =

⎧ ⎫⎪ ⎪⎪ ⎪ =⎨ ⎬⎪ ⎪⎪ ⎪⎩ ⎭∑ ∑∫ x xc


146 CHAPTER 5 PROBABILISTIC ANALYSIS OF STOCHASTIC DYNAMICAL SYSTEMS

\[ \int_{C_{c}^{\infty}} \Big(i\sum_{n=1}^{N}\upsilon_{n}\,\dot{x}_{n}(t)\Big) \exp\Big\{i\sum_{k=1}^{N}\upsilon_{k}\,x_{k}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} i\upsilon_{n}\,F_{n,\mathbf{i}}(t) \int_{C_{c}^{\infty}} \mathbf{x}(t)^{\mathbf{i}}\, \exp\Big\{i\sum_{k=1}^{N}\upsilon_{k}\,x_{k}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) . \]

But

\[ \int_{C_{c}^{\infty}} \Big(i\sum_{n=1}^{N}\upsilon_{n}\,\dot{x}_{n}(t)\Big) \exp\Big\{i\sum_{k=1}^{N}\upsilon_{k}\,x_{k}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \]

and

\[ \sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} i\upsilon_{n}\,F_{n,\mathbf{i}}(t) \int_{C_{c}^{\infty}} \mathbf{x}(t)^{\mathbf{i}}\, \exp\Big\{i\sum_{k=1}^{N}\upsilon_{k}\,x_{k}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} \frac{i\upsilon_{n}}{i^{\,l}}\,F_{n,\mathbf{i}}(t)\, \frac{\partial^{\,l}\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\mathbf{i}}} . \]

Hence, we have proved the equality

\[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} \frac{i\upsilon_{n}}{i^{\,l}}\,F_{n,\mathbf{i}}(t)\, \frac{\partial^{\,l}\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\mathbf{i}}} \qquad (11) \]

or equivalently,

\[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \;+\; \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{\upsilon_{n}}{i^{\,l+1}}\, \mathbf{F}_{n,l}(t)\big[\underbrace{\nabla_{\boldsymbol{\upsilon}},\dots,\nabla_{\boldsymbol{\upsilon}}}_{l\ \text{times}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t) \;=\;0 \qquad (12) \]

Remark 3.2 : Taking the Fourier transform of equation (11) or (12), we find that the equation governing the probability density of the response is

\[ \frac{\partial f_{\mathbf{x}}(\mathbf{x},t)}{\partial t} \;+\; \sum_{n=1}^{N}\frac{\partial}{\partial x_{n}} \Big[F_{n}(\mathbf{x},t)\,f_{\mathbf{x}}(\mathbf{x},t)\Big] \;=\;0, \qquad t\in I,\ \mathbf{x}\in\mathbb{R}^{N} \qquad (13) \]

which is the Liouville equation from the general theory of dynamical systems. It is worth emphasizing that equations of the form (13) express, in general, the conservation of some quantity in time. In continuum mechanics, for example, such an equation is known as the continuity equation, the equation of conservation of mass, or the transport equation. Here, the Liouville equation expresses the conservation of probability during the motion of the dynamical system.
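The conservation property can be verified numerically. The sketch below (an assumed example, not from the thesis: linear drift \( F(x)=-ax \) with a Gaussian initial density) checks that the exactly transported density satisfies the Liouville equation (13) up to discretization error.

```python
import numpy as np

# Assumed example (not from the thesis): for the drift F(x) = -a*x with a
# Gaussian initial density f0, the transported density is
# f(x, t) = exp(a*t) * f0(x*exp(a*t)); it should satisfy the Liouville
# equation (13):  df/dt + d/dx[F(x) f] = 0.
a = 0.5

def f0(x):
    return np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)

def f(x, t):
    return np.exp(a * t) * f0(x * np.exp(a * t))

x = np.linspace(-6.0, 6.0, 4001)
t, dt, dx = 0.4, 1e-5, x[1] - x[0]

df_dt = (f(x, t + dt) - f(x, t - dt)) / (2.0 * dt)   # time derivative
dflux_dx = np.gradient(-a * x * f(x, t), dx)         # d/dx [F(x) f(x,t)]
residual = np.max(np.abs(df_dt + dflux_dx))
print(residual)
```

The residual is zero up to the finite-difference error of the grid, confirming that (13) transports probability without creating or destroying it.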


5.3.2. Moment Equations from the FDE

As we mentioned at the beginning, a very important property of the formulation based on the characteristic functional is that we can recover all the statistical information of the problem. In this section we show how the set of moment equations associated with the stochastic differential equation can be derived from the functional differential equation. Moreover, based on this analysis, we derive a set of integro-differential equations equivalent to the functional differential equation of Theorem 2.1. As we proved in the previous section, the stochastic differential equation Σ1 is equivalent to the FDE

\[ \frac{1}{i}\frac{d}{dt}\,\nabla_{u(t)}\mathcal{Y}_{\mathbf{xy}}([u],[v]) \;=\; \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{\,k+l}}\, \mathbf{F}_{kl}(t)\big[\nabla^{(k)}_{u(t)},\nabla^{(l)}_{v(t)}\big]\, \mathcal{Y}_{\mathbf{xy}}([u],[v]) \qquad (14) \]

\[ \mathcal{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\cdot\delta_{t_{0}},\,0\big) \;=\; \mathcal{Y}_{0}(\boldsymbol{\upsilon}), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^{N} \qquad (15) \]

for every \( u\in C_{c}^{\infty}(I,\mathbb{R}^{N}) \), \( v\in C_{c}^{\infty}(I,\mathbb{R}^{M}) \) and \( t\in I \).

First we recall some important results concerning the polynomial expansion of functionals.

Theorem 3.3 [Taylor's formula with integral remainder] : Let \( \mathcal{Z}: C_{c}^{\infty,N}\times C_{c}^{\infty,M}\to\mathbb{C} \) be a functional of class \( \mathcal{V}^{r+1} \). If the interval \( [\mathbf{a},\mathbf{a}+\mathbf{h}] \) is contained in the domain of \( \mathcal{Z} \), then

\[ \mathcal{Z}(\mathbf{a}+\mathbf{h}) \;=\; \mathcal{Z}(\mathbf{a}) + \delta\mathcal{Z}(\mathbf{a})[\mathbf{h}] + \frac{1}{2!}\,\delta^{2}\mathcal{Z}(\mathbf{a})[\mathbf{h},\mathbf{h}] + \dots + \frac{1}{r!}\,\delta^{r}\mathcal{Z}(\mathbf{a}) [\underbrace{\mathbf{h},\dots,\mathbf{h}}_{r\ \text{times}}] + \int_{0}^{1}\frac{(1-t)^{r}}{r!}\, \delta^{r+1}\mathcal{Z}(\mathbf{a}+t\mathbf{h}) [\underbrace{\mathbf{h},\dots,\mathbf{h}}_{r+1\ \text{times}}]\;dt \]

Proof : For the proof we refer to CARTAN, H., (Differential Calculus, p. 70).

Theorem 3.4 [Taylor's formula with Lagrangian remainder] : Let \( \mathcal{Z}: C_{c}^{\infty,N}\times C_{c}^{\infty,M}\to\mathbb{C} \) be a functional of class \( \mathcal{V}^{r+1} \). If

\[ \big\|\delta^{r+1}\mathcal{Z}(\mathbf{a}+t\mathbf{h})\big\| \;\le\; M \qquad \text{for every } t\in[0,1], \]

then

\[ \Big\| \mathcal{Z}(\mathbf{a}+\mathbf{h}) - \mathcal{Z}(\mathbf{a}) - \delta\mathcal{Z}(\mathbf{a})[\mathbf{h}] - \dots - \frac{1}{r!}\,\delta^{r}\mathcal{Z}(\mathbf{a}) [\underbrace{\mathbf{h},\dots,\mathbf{h}}_{r\ \text{times}}] \Big\| \;\le\; \frac{M\,\|\mathbf{h}\|^{r+1}}{(r+1)!} \]

Proof : For the proof we refer to CARTAN, H., (Differential Calculus. p.70).
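The two Taylor formulas above mirror the familiar finite-dimensional statements. As a sanity check, the following sketch (an assumed finite-dimensional illustration, not functional calculus proper) verifies the Lagrangian-remainder bound for a smooth function on \( \mathbb{R}^{2} \); the bound on the directional differential plays the role of \( M\|\mathbf{h}\|^{r+1} \).

```python
import numpy as np
from math import factorial, exp

# Assumed finite-dimensional illustration of Theorem 3.4: for
# Z(x) = exp(w . x) on R^2, the k-th differential at a applied to
# [h, ..., h] equals (w . h)^k * Z(a); the remainder of the degree-r
# Taylor polynomial is bounded by M / (r+1)!, where M bounds the
# (r+1)-st directional differential along the segment [a, a+h].
w = np.array([1.0, 2.0])
a = np.array([0.1, -0.2])
h = np.array([0.05, 0.03])
r = 3

def Z(x):
    return exp(w @ x)

taylor = sum((w @ h) ** k * Z(a) / factorial(k) for k in range(r + 1))
err = abs(Z(a + h) - taylor)

# bound on the (r+1)-st directional differential over the segment
M = max(abs(w @ h) ** (r + 1) * Z(a + s * h) for s in np.linspace(0.0, 1.0, 101))
bound = M / factorial(r + 1)
print(err, bound)   # err <= bound
```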

Now consider the functional \( \mathcal{Z}: C_{c}^{\infty,N}\times C_{c}^{\infty,M}\to\mathbb{C} \) given by

\[ \mathcal{Z}(u,v;t) \;\equiv\; \frac{1}{i}\frac{d}{dt}\,\nabla_{u(t)} \mathcal{Y}_{\mathbf{xy}}([u],[v]) \;-\; \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{\,k+l}}\, \mathbf{F}_{kl}(t)\big[\nabla^{(k)}_{u(t)},\nabla^{(l)}_{v(t)}\big]\, \mathcal{Y}_{\mathbf{xy}}([u],[v]) \qquad (16) \]

for every \( t\in I \). Let us assume that \( \mathcal{Z} \) is of class \( \mathcal{V}^{\infty} \); then, by Theorem 3.3, it can be expanded in the following form

Page 160: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

148 CHAPTER 5 PROBABILISTIC ANALYSIS OF STOCHASTIC DYNAMICAL SYSTEMS

\[ \mathcal{Z}(u,v;t) \;=\; \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{1}{n!\,m!}\, \delta_{u}^{n}\delta_{v}^{m}\mathcal{Z}(0,0;t) [\underbrace{u,\dots,u}_{n\ \text{times}}, \underbrace{v,\dots,v}_{m\ \text{times}}] \qquad (17) \]

for every \( t\in I \). It is clear that \( \mathcal{Z}(u,v;t)=0 \) iff

\[ \delta_{u}^{n}\delta_{v}^{m}\mathcal{Z}(0,0;t) [\underbrace{u,\dots,u}_{n\ \text{times}}, \underbrace{v,\dots,v}_{m\ \text{times}}] \;=\;0, \qquad \text{for every } (n,m)\in\mathbb{N}^{2} \text{ and every } t\in I . \]

Clearly, every term \( \delta_{u}^{n}\delta_{v}^{m}\mathcal{Z}(0,0;t)[u,\dots,u,v,\dots,v] \), for \( (n,m)\in\mathbb{N}^{2} \), of the sum (17) is a homogeneous functional polynomial; i.e. there exists an \( (n,m) \)-multilinear mapping \( \mathcal{K}: \big(C_{c}^{\infty,N}\big)^{n}\times\big(C_{c}^{\infty,M}\big)^{m}\to\mathbb{C} \) with the symmetry property

\[ \mathcal{K}\big[u_{1},\dots,u_{n},\,v_{1},\dots,v_{m}\big] \;=\; \mathcal{K}\big[u_{i_{1}},\dots,u_{i_{n}},\,v_{j_{1}},\dots,v_{j_{m}}\big] \]

for every permutation \( (i_{1},\dots,i_{n}) \) and \( (j_{1},\dots,j_{m}) \), such that

\[ \delta_{u}^{n}\delta_{v}^{m}\mathcal{Z}(0,0;t) [\underbrace{u,\dots,u}_{n\ \text{times}}, \underbrace{v,\dots,v}_{m\ \text{times}}] \;=\; \mathcal{K}[\underbrace{u,\dots,u}_{n\ \text{times}}, \underbrace{v,\dots,v}_{m\ \text{times}}] . \]

For homogeneous functional polynomials the following important property holds.

Proposition 3.5 : Let \( \Phi_{n}: C_{c}^{\infty,N}\to\mathbb{C} \) be a homogeneous functional polynomial of degree \( n \), described by the corresponding \( n \)-multilinear, symmetric function \( \mathcal{K}:\big(C_{c}^{\infty,N}\big)^{n}\to\mathbb{C} \). Then \( \Phi_{n} \) is the zero polynomial iff

\[ \mathcal{K}\big[\delta_{\mathbf{t}_{1}},\delta_{\mathbf{t}_{2}}, \dots,\delta_{\mathbf{t}_{n}}\big] \;=\;0, \qquad \text{for every } (\mathbf{t}_{1},\mathbf{t}_{2},\dots,\mathbf{t}_{n}) \text{ with } \mathbf{t}_{i}\in \underbrace{I\times\dots\times I}_{N\ \text{times}} . \]

Proof : Clearly, the above assertion for a functional polynomial of degree \( n \) on \( C_{c}^{\infty,N} \) is equivalent to the same assertion for a functional polynomial of degree \( n\cdot N \) on \( C_{c}^{\infty} \). Hence, it suffices to prove the proposition for the case \( N=1 \). The general form of a homogeneous functional polynomial with \( N=1 \) is

\[ \Phi_{n}[u] \;=\; \mathcal{K}[u,\dots,u] \;=\; \int\!\!\int\!\cdots\!\int K(s_{1},s_{2},\dots,s_{n})\, u(s_{1})\,u(s_{2})\cdots u(s_{n})\;ds_{1}\cdots ds_{n} . \]

If

\[ \mathcal{K}\big[\delta_{t_{1}},\delta_{t_{2}},\dots,\delta_{t_{n}}\big]=0 \qquad \text{for every } (t_{1},t_{2},\dots,t_{n}) \text{ with } t_{i}\in I, \]

then \( K(s_{1},s_{2},\dots,s_{n})=0 \) for every \( (s_{1},s_{2},\dots,s_{n}) \) with \( s_{i}\in I \); hence \( \Phi_{n} \) is the zero polynomial. Conversely, if \( \Phi_{n} \) is the zero polynomial, then, by the polarization identity, the symmetric multilinear form \( \mathcal{K} \) vanishes for all arguments \( u_{1},\dots,u_{n} \), and an application of Schwarz's inequality shows that the kernel satisfies \( K(s_{1},\dots,s_{n})=0 \) for every \( (s_{1},\dots,s_{n}) \); in particular \( \mathcal{K}[\delta_{t_{1}},\dots,\delta_{t_{n}}]=0 \).

Page 161: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

5.3. REDUCTION OF THE HOPF-TYPE FUNCTIONAL DIFFERENTIAL EQUATIONS FOR SPECIAL CASES 149

Hence we have proved the assertion. ■

Thus we have proved the following.

Theorem 3.6 : Consider the Functional Differential Equation

\[ \frac{1}{i}\frac{d}{dt}\,\nabla_{u(t)}\mathcal{Y}_{\mathbf{xy}}([u],[v]) \;=\; \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{\,k+l}}\, \mathbf{F}_{kl}(t)\big[\nabla^{(k)}_{u(t)},\nabla^{(l)}_{v(t)}\big]\, \mathcal{Y}_{\mathbf{xy}}([u],[v]) \]

\[ \mathcal{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\cdot\delta_{t_{0}},\,0\big) \;=\; \mathcal{Y}_{0}(\boldsymbol{\upsilon}), \qquad \boldsymbol{\upsilon}\in\mathbb{R}^{N} \]

for every \( u\in C_{c}^{\infty}(I,\mathbb{R}^{N}) \), \( v\in C_{c}^{\infty}(I,\mathbb{R}^{M}) \) and \( t\in I \). We assume that \( \mathcal{Y}_{\mathbf{xy}} \) is of class \( \mathcal{V}^{\infty} \). Then the above Functional Differential Equation is equivalent to the infinite set of equations

\[ \frac{1}{i}\frac{d}{dt}\; \delta_{u}^{n}\delta_{v}^{m} \Big[\nabla_{u(t)}\mathcal{Y}_{\mathbf{xy}}\Big](0,0) \big[\delta_{\mathbf{t}_{1}},\dots,\delta_{\mathbf{t}_{n}}, \delta_{\mathbf{s}_{1}},\dots,\delta_{\mathbf{s}_{m}}\big] \;=\; \sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{\,k+l}}\; \delta_{u}^{n}\delta_{v}^{m} \Big[\mathbf{F}_{kl}(t)\big[\nabla^{(k)}_{u(t)},\nabla^{(l)}_{v(t)}\big]\, \mathcal{Y}_{\mathbf{xy}}\Big](0,0) \big[\delta_{\mathbf{t}_{1}},\dots,\delta_{\mathbf{t}_{n}}, \delta_{\mathbf{s}_{1}},\dots,\delta_{\mathbf{s}_{m}}\big] \qquad (18) \]

for all \( (n,m)\in\mathbb{N}^{2} \) and every \( t \), \( (\mathbf{t}_{1},\dots,\mathbf{t}_{n}) \) and \( (\mathbf{s}_{1},\dots,\mathbf{s}_{m}) \) with \( t\in I \), \( \mathbf{t}_{i}\in I^{N} \), \( \mathbf{s}_{i}\in I^{M} \).

The above set of equations will be called the corresponding moment equations of problem \( \Sigma_{1} \).

Remark 3.7 : When the first \( (k,l) \) moment equations are satisfied, we can obtain, using Theorem 3.3, useful bounds for the error of the estimation.

Remark 3.8 : The above theorem provides a valuable method for the numerical solution of functional differential equations of the form (7)-(8).
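To see the structure of such moment hierarchies concretely, consider the noise-free cubic system \( dx/dt=-a\,x^{3} \) with a random initial condition (an assumed example, not from the thesis). Its first moment equation, \( \frac{d}{dt}E[x(t)]=-a\,E[x(t)^{3}] \), couples to the third moment, and can be checked against the exact solution map by Gauss-Hermite quadrature.

```python
import numpy as np

# Assumed example: for dx/dt = -a*x^3 with random initial condition
# x0 ~ N(1, 1), the first moment equation of the hierarchy is
#     d/dt E[x(t)] = -a * E[x(t)^3] .
# The exact solution map is x(t) = x0 / sqrt(1 + 2*a*t*x0^2), so both
# sides can be evaluated by quadrature over the initial distribution.
a, t, dt = 0.8, 0.5, 1e-5

nodes, w = np.polynomial.hermite_e.hermegauss(80)
w = w / w.sum()            # probabilists' weights: expectation under N(0, 1)
x0 = nodes + 1.0           # shift the nodes: x0 ~ N(1, 1)

def x_t(tt):
    return x0 / np.sqrt(1.0 + 2.0 * a * tt * x0 ** 2)

m1_dot = (w @ x_t(t + dt) - w @ x_t(t - dt)) / (2.0 * dt)   # d/dt E[x(t)]
m3 = w @ x_t(t) ** 3                                        # E[x(t)^3]
print(m1_dot, -a * m3)      # the two sides agree
```

Closing the hierarchy requires the higher moment \( E[x^{3}] \) on the right-hand side; here it is available exactly, whereas in general one would truncate or use closure assumptions.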

Example 3.9 : Let us now apply the above theorem to the specific case of the Duffing oscillator. As we saw, the corresponding functional differential equation is

\[ \frac{m}{i}\frac{d^{2}}{dt^{2}} \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)} + \frac{c}{i}\frac{d}{dt} \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)} + \frac{k}{i}\cdot \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)} + \frac{a}{i^{3}}\cdot \frac{\delta^{3}\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)^{3}} \;=\; \frac{1}{i}\cdot \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta v(t)} \]

\[ \mathcal{Y}_{\mathbf{xy}}\big(\upsilon\,\delta_{t_{0}},0\big) = \mathcal{Y}_{0}(\upsilon) \qquad\text{and}\qquad \frac{d}{dt}\,\mathcal{Y}_{\mathbf{xy}}\big(\upsilon\,\delta_{t},0\big) \Big|_{t=t_{0}} = \mathcal{Y}'_{0}(\upsilon) \]

for every \( u\in C_{c}^{\infty}(I) \), \( v\in C_{c}^{\infty}(I) \) and \( t\in I \). Applying Theorem 3.6 we obtain the equivalent set of moment equations,

\[ m\,\frac{d^{2}}{dt^{2}}\, E\Big[x(t)\,x(t_{1})\cdots x(t_{n})\, y(t_{n+1})\cdots y(t_{n+m})\Big] + c\,\frac{d}{dt}\, E\Big[x(t)\,x(t_{1})\cdots x(t_{n})\, y(t_{n+1})\cdots y(t_{n+m})\Big] + k\, E\Big[x(t)\,x(t_{1})\cdots x(t_{n})\, y(t_{n+1})\cdots y(t_{n+m})\Big] + a\, E\Big[x^{3}(t)\,x(t_{1})\cdots x(t_{n})\, y(t_{n+1})\cdots y(t_{n+m})\Big] - E\Big[y(t)\,x(t_{1})\cdots x(t_{n})\, y(t_{n+1})\cdots y(t_{n+m})\Big] \;=\;0 \qquad (19) \]

for all \( (n,m)\in\mathbb{N}^{2} \) and every \( (t_{1},t_{2},\dots,t_{n+m}) \) with \( t_{i}\in I \), where the equations are obtained from (18) by applying the functional derivatives \( \boldsymbol{\delta}=\big[\delta_{t_{1}},\dots,\delta_{t_{n}}, \delta_{t_{n+1}},\dots,\delta_{t_{n+m}}\big] \) with respect to \( u \) at the points \( t_{1},\dots,t_{n} \) and with respect to \( v \) at the points \( t_{n+1},\dots,t_{n+m} \), and have been written here in terms of the joint moments of the response and the excitation.


5.3.3. Excitation with Independent Increments - Fokker-Planck Equation

In this section we derive, from the Functional Differential Equation, the Fokker-Planck-type equation for the characteristic function of the response. It is very important to note that the analysis below is based only on the concept of Functional Differential Equations. Moreover, the Fokker-Planck equation derived can describe systems under any stochastic excitation with independent increments. To begin the derivation of the Fokker-Planck equation, let us first consider the special case of a first-order equation with cubic nonlinearity

\[ dx(t;\beta) + a\,x^{3}(t;\beta)\,dt \;=\; dy(t;\beta) \quad \text{a.s.} \qquad (20) \]

\[ x(t_{0};\beta) = x_{0}(\beta) \quad \text{a.s.} \qquad (21) \]

where \( y(\cdot\,;\cdot): I\times\Omega\to\mathbb{R} \) is a stochastic process with independent increments, defined on the probability space \( \big(\Omega,\mathcal{U}(\Omega),\mathcal{P}\big) \), with the induced probability space \( \big(C_{c}^{\infty},\mathcal{U}(C_{c}^{\infty}),\mathcal{P}_{y}\big) \) and characteristic functional \( \mathcal{Y}_{y} \). The initial condition \( x_{0}:\Omega\to\mathbb{R} \) is a random variable with the induced probability space \( \big(\mathbb{R},\mathcal{U}(\mathbb{R}),\mathcal{P}_{x_{0}}\big) \) and characteristic function \( \mathcal{Y}_{0} \). By Theorem 2.4 we have the equivalent Functional Differential Equation

\[ \frac{1}{i}\,d_{t}\!\left[ \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)}\right] \;=\; -\,\frac{a}{i^{3}}\, \frac{\delta^{3}\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u(t)^{3}}\,dt \;+\; \frac{1}{i}\,d_{t}\!\left[ \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta v(t)}\right] \qquad (22) \]

\[ \mathcal{Y}_{\mathbf{xy}}\big(\upsilon\,\delta_{t_{0}},\,0\big) = \mathcal{Y}_{0}(\upsilon), \qquad \upsilon\in\mathbb{R} \qquad (23) \]

Setting \( u=\upsilon\cdot\delta(\cdot-t-s) \), \( v=0 \) in the above FDE at time \( t+s \), and taking the limit \( s\to0 \), we have for the l.h.s. term

\[ \lim_{s\to0}\frac{1}{i}\, d\!\left[\frac{\delta\,\mathcal{Y}_{\mathbf{xy}} \big(\upsilon\,\delta(\cdot-t-s),0\big)}{\delta u(t+s)}\right] = \lim_{s\to0}\, d\!\int_{C_{c}^{\infty}} x(t+s)\, \exp\big[i\upsilon\,x(t+s)\big]\, \mathcal{P}_{x}(dx) = \lim_{s\to0}\frac{1}{i\upsilon}\,d\,\phi_{x}(\upsilon,t+s) = \frac{1}{i\upsilon}\, \frac{\partial\phi_{x}(\upsilon,t)}{\partial t}\,ds . \]

For the first r.h.s. term we have

\[ \lim_{s\to0}\left(-\frac{a}{i^{3}}\right) \frac{\delta^{3}\,\mathcal{Y}_{\mathbf{xy}} \big(\upsilon\,\delta(\cdot-t-s),0\big)}{\delta u(t+s)^{3}}\,dt = -\,a\lim_{s\to0}\int_{C_{c}^{\infty}} x^{3}(t+s)\, \exp\big[i\upsilon\,x(t+s)\big]\,\mathcal{P}_{x}(dx)\,ds = -\,i\,a\, \frac{\partial^{3}\phi_{x}(\upsilon,t)}{\partial\upsilon^{3}}\,ds . \]

xc

Finally, for the second r.h.s. term we have

\[ \lim_{s\to0}\frac{1}{i}\, d\!\left[\frac{\delta\,\mathcal{Y}_{\mathbf{xy}} \big(\upsilon\,\delta(\cdot-t-s),0\big)}{\delta v(t+s)}\right] = \lim_{s\to0}\int_{C_{c}^{\infty}}\!\int_{C_{c}^{\infty}} d\,y(t+s)\, \exp\big[i\upsilon\,x(t+s)\big]\, \mathcal{P}_{xy}(dx,dy). \]

But, by (20), for small \( s \)

\[ \exp\big[i\upsilon\,x(t+s)\big] \;=\; \exp\big[i\upsilon\,x(t)\big]\cdot \exp\big[-\,i\upsilon\,a\,x^{3}(t)\,s\big]\cdot \exp\Big[i\upsilon\big(y(t+s)-y(t)\big)\Big] + o(s). \]

Hence,


\[ \lim_{s\to0}\int\!\!\int_{C_{c}^{\infty}\times C_{c}^{\infty}} d\,y(t+s)\,\exp\big[i\upsilon\,x(t+s)\big]\, \mathcal{P}_{xy}(dx,dy) = \lim_{s\to0}\int\!\!\int d\,y(t+s)\, e^{i\upsilon x(t)}\,e^{-i\upsilon a x^{3}(t)s}\, e^{i\upsilon(y(t+s)-y(t))}\, \mathcal{P}_{xy}(dx,dy). \]

Since the stochastic process \( y(\cdot\,;\cdot): I\times\Omega\to\mathbb{R} \) has independent increments, and the quantity \( \exp[i\upsilon x(t)]\exp[-i\upsilon a x^{3}(t)s] \) depends only on prior increments, we have

\[ \lim_{s\to0}\int\!\!\int d\,y(t+s)\, e^{i\upsilon x(t)}\,e^{-i\upsilon ax^{3}(t)s}\, e^{i\upsilon(y(t+s)-y(t))}\, \mathcal{P}_{xy}(dx,dy) = \lim_{s\to0} \Big[\int_{C_{c}^{\infty}} d\,y(t+s)\, e^{i\upsilon(y(t+s)-y(t))}\,\mathcal{P}_{y}(dy)\Big]\cdot \Big[\int_{C_{c}^{\infty}} e^{i\upsilon x(t)}\, \mathcal{P}_{x}(dx)\Big] \]

\[ \;=\; \frac{1}{i\upsilon}\, \Big[\lim_{s\to0} \frac{\partial\,\phi_{\Delta y}(\upsilon,t;s)}{\partial s}\Big]\, \phi_{x}(\upsilon,t)\;ds \]

where \( \phi_{\Delta y}(\upsilon,t;s) \) is the characteristic function of the increment \( \Delta y_{s}(t)=y(t+s)-y(t) \).

Thus we have proved the equality

\[ \frac{1}{i\upsilon}\, \frac{\partial\phi_{x}(\upsilon,t)}{\partial t}\,ds \;=\; -\,i\,a\, \frac{\partial^{3}\phi_{x}(\upsilon,t)}{\partial\upsilon^{3}}\,ds \;+\; \frac{1}{i\upsilon}\, \Big[\lim_{s\to0} \frac{\partial\,\phi_{\Delta y}(\upsilon,t;s)}{\partial s}\Big]\, \phi_{x}(\upsilon,t)\,ds \]

or equivalently,

\[ \frac{\partial\phi_{x}(\upsilon,t)}{\partial t} \;-\; a\,\upsilon\, \frac{\partial^{3}\phi_{x}(\upsilon,t)}{\partial\upsilon^{3}} \;=\; \Big[\lim_{s\to0} \frac{\partial\,\phi_{\Delta y}(\upsilon,t;s)}{\partial s}\Big]\, \phi_{x}(\upsilon,t). \]

Assume now that \( y(\cdot\,;\cdot): I\times\Omega\to\mathbb{R} \) is the standard Wiener process. Then

\[ \phi_{\Delta y}(\upsilon,t;s) \;=\; \exp\Big\{-\frac{1}{2}\,\upsilon^{2}\,s\Big\} . \]

Hence,

\[ \lim_{s\to0} \frac{\partial\,\phi_{\Delta y}(\upsilon,t;s)}{\partial s} \;=\; -\,\frac{\upsilon^{2}}{2} \]

and the Fokker-Planck equation takes the form

\[ \frac{\partial\phi_{x}(\upsilon,t)}{\partial t} \;-\; a\,\upsilon\, \frac{\partial^{3}\phi_{x}(\upsilon,t)}{\partial\upsilon^{3}} \;=\; -\,\frac{\upsilon^{2}}{2}\,\phi_{x}(\upsilon,t). \]
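A hedged cross-check of this derivation (an assumed example, with the cubic drift replaced by a linear one so that the exact solution is available in closed form): for \( dx=-kx\,dt+dW \) the same argument yields \( \partial_{t}\phi + k\upsilon\,\partial_{\upsilon}\phi = -(\upsilon^{2}/2)\phi \), and the known Gaussian characteristic function of the Ornstein-Uhlenbeck process satisfies it to discretization accuracy.

```python
import numpy as np

# Assumed cross-check: with the cubic drift replaced by a linear one,
# dx = -k*x dt + dW, the same derivation gives
#     dphi/dt + k*u*dphi/du = -(u^2/2)*phi ,
# and the exact characteristic function is
#     phi(u,t) = exp(i*u*mu(t) - 0.5*s2(t)*u^2),
#     mu(t) = mu0*exp(-k*t),  s2(t) = s20*exp(-2*k*t) + (1-exp(-2*k*t))/(2*k).
k, mu0, s20 = 1.5, 0.7, 0.2

def phi(u, t):
    mu = mu0 * np.exp(-k * t)
    s2 = s20 * np.exp(-2.0 * k * t) + (1.0 - np.exp(-2.0 * k * t)) / (2.0 * k)
    return np.exp(1j * u * mu - 0.5 * s2 * u ** 2)

u = np.linspace(-4.0, 4.0, 2001)
t, dt, du = 0.3, 1e-6, u[1] - u[0]

dphi_dt = (phi(u, t + dt) - phi(u, t - dt)) / (2.0 * dt)
dphi_du = np.gradient(phi(u, t), du)
pde = dphi_dt + k * u * dphi_du + 0.5 * u ** 2 * phi(u, t)
residual = np.max(np.abs(pde[2:-2]))   # drop the one-sided edge differences
print(residual)
```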

For the general case we have the following


Theorem 3.10 : Let the dynamical system Σ2 be described by (31)-(32) of Section 5.2.4. Then the FDE

\[ \frac{1}{i}\,d_{t}\Big[\nabla_{u(t)} \mathcal{Y}_{\mathbf{xy}}([u],[v])\Big] \;=\; \sum_{l=0}^{L}\frac{1}{i^{\,l}}\, \mathbf{F}_{l}(t)\big[\nabla^{(l)}_{u(t)}\big]\, \mathcal{Y}_{\mathbf{xy}}([u],[v])\;dt \;+\; \sum_{k=0}^{K}\frac{1}{i^{\,k+1}}\, \mathbf{G}_{k}(t)\big[\nabla^{(k)}_{u(t)}\big]\, d_{t}\Big[\nabla_{v(t)} \mathcal{Y}_{\mathbf{xy}}([u],[v])\Big] \qquad (24) \]

\[ \mathcal{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\cdot\delta_{t_{0}},\,0\big) = \mathcal{Y}_{0}(\boldsymbol{\upsilon}) \qquad (25) \]

for every \( u\in C_{c}^{\infty}(I,\mathbb{R}^{N}) \), \( v\in C_{c}^{\infty}(I,\mathbb{R}^{M}) \) and \( t\in I \), which describes the system, can be reduced to the partial differential equation

\[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{i\upsilon_{n}}{i^{\,l}}\, \mathbf{F}_{n,l}(t)\big[\nabla^{(l)}_{\boldsymbol{\upsilon}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t) \;+\; \lim_{s\to0}\int_{C_{c}^{\infty,N}} \frac{\partial\,\phi_{\Delta\mathbf{y}_{s}}\Big( \big(\sum_{\nu=1}^{N}\upsilon_{\nu}\, G_{\nu m}(\mathbf{x}(t),t)\big)_{m=1}^{M};\,t,s\Big)} {\partial s}\; \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\; \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \qquad (26) \]

where \( \phi_{\mathbf{x}}(\cdot,\cdot):\mathbb{R}^{N}\times I\to\mathbb{C} \) is the characteristic function of the system response \( \mathbf{x}(t;\beta) \) and \( \phi_{\Delta\mathbf{y}_{s}}(\cdot\,;t,s):\mathbb{R}^{M}\to\mathbb{C} \) is the characteristic function of the independent increments \( \Delta\mathbf{y}_{s}(t;\beta)=\mathbf{y}(t+s;\beta)-\mathbf{y}(t;\beta) \).

Proof : We consider the component-wise form of the FDE (24), as used in the derivation of Theorem 2.4 \( (n=1,2,\dots,N) \):

\[ \frac{1}{i}\,d\!\left[ \frac{\delta\,\mathcal{Y}_{\mathbf{xy}}([u],[v])}{\delta u_{n}(t)}\right] \;=\; \sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} \frac{F_{n,\mathbf{i}}(t)}{i^{\,l}}\, \frac{\delta^{\,l}\,\mathcal{Y}_{\mathbf{xy}}([u],[v])} {\delta u_{\mathbf{i}}(t)}\;dt \;+\; \sum_{m=1}^{M}\sum_{k=0}^{K}\sum_{|\mathbf{i}|=k} \frac{G^{\,nm}_{\mathbf{i}}(t)}{i^{\,k+1}}\, d\!\left[\frac{\delta^{\,k+1}\,\mathcal{Y}_{\mathbf{xy}}([u],[v])} {\delta u_{\mathbf{i}}(t)\,\delta v_{m}(t)}\right] \qquad (27) \]

Setting \( u=(\upsilon_{1},\upsilon_{2},\dots,\upsilon_{N})\cdot \delta(\cdot-t-s) \) in this system at time \( t+s \) and taking the limit \( s\to0 \), we have for the l.h.s. term

\[ \lim_{s\to0}\, d\!\int_{C_{c}^{\infty,N}} x_{n}(t+s)\, \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t+s)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) . \]

For the first r.h.s. term we have

\[ \lim_{s\to0}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} F_{n,\mathbf{i}}(t+s) \int_{C_{c}^{\infty,N}} \mathbf{x}(t+s)^{\mathbf{i}}\, \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t+s)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x})\;dt \]


\[ \;=\; \sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} F_{n,\mathbf{i}}(t)\,\frac{1}{i^{\,l}}\, \frac{\partial^{\,l}\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\mathbf{i}}}\;dt, \]

where we have used the notation

\[ \partial_{\mathbf{x}}^{\mathbf{i}} = \partial x_{1}^{i_{1}}\cdot\partial x_{2}^{i_{2}}\cdots\partial x_{N}^{i_{N}}, \qquad \mathbf{x}^{\mathbf{i}} = x_{1}^{i_{1}}\cdot x_{2}^{i_{2}}\cdots x_{N}^{i_{N}} . \]

For the second r.h.s. term we have

\[ \lim_{s\to0}\sum_{m=1}^{M}\sum_{k=0}^{K}\sum_{|\mathbf{i}|=k} G^{\,nm}_{\mathbf{i}}(t+s) \int\!\!\int_{C_{c}^{\infty,N}\times C_{c}^{\infty,M}} \mathbf{x}(t+s)^{\mathbf{i}}\;d\,y_{m}(t+s)\, \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t+s)\Big\}\, \mathcal{P}_{\mathbf{xy}}(d\mathbf{x},d\mathbf{y}). \]

Since \( \mathbf{x}(t) \) is m.s. continuous, \( \lim_{s\to0}\mathbf{x}(t+s)=\mathbf{x}(t) \). Moreover, using the equation of the system (or the equation that results from the FDE by setting \( u=0 \), \( v=0 \)), we have for small \( s \)

\[ x_{n}(t+s;\beta) \;=\; x_{n}(t;\beta) + F_{n}\big(\mathbf{x}(t;\beta),t\big)\,s + \sum_{m=1}^{M} G_{nm}\big(\mathbf{x}(t;\beta),t\big) \big[y_{m}(t+s;\beta)-y_{m}(t;\beta)\big] + o(s). \]

Hence the last term above will be equal to

\[ \lim_{s\to0}\sum_{m=1}^{M}\sum_{k=0}^{K}\sum_{|\mathbf{i}|=k} G^{\,nm}_{\mathbf{i}}(t) \int\!\!\int \mathbf{x}(t)^{\mathbf{i}}\;d\,y_{m}(t+s)\, \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\, \exp\Big\{i\sum_{\nu=1}^{N}\sum_{m'=1}^{M} \upsilon_{\nu}\,G_{\nu m'}\big(\mathbf{x}(t),t\big) \big[y_{m'}(t+s)-y_{m'}(t)\big]\Big\}\, \mathcal{P}_{\mathbf{xy}}(d\mathbf{x},d\mathbf{y}) \]

(the drift contribution \( \exp\{i\sum_{\nu}\upsilon_{\nu}F_{\nu}\,s\}\to 1 \) as \( s\to0 \)).


Thus, for every \( n=1,2,\dots,N \), equation (27) takes a limiting form. Multiplying by \( i\upsilon_{n} \) and summing up for all \( n=1,2,\dots,N \), we will have for the l.h.s. sum

\[ \lim_{s\to0}\sum_{n=1}^{N} i\upsilon_{n}\, d\!\int_{C_{c}^{\infty,N}} x_{n}(t+s)\, \exp\Big\{i\sum_{\nu}\upsilon_{\nu}\,x_{\nu}(t+s)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \lim_{s\to0}\, d\!\int_{C_{c}^{\infty,N}} \exp\Big\{i\sum_{\nu}\upsilon_{\nu}\,x_{\nu}(t+s)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \lim_{s\to0}\, d\,\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t+s) \;=\; \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t}\;ds . \]

For the first r.h.s. sum we get

\[ \sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{|\mathbf{i}|=l} i\upsilon_{n}\,F_{n,\mathbf{i}}(t)\,\frac{1}{i^{\,l}}\, \frac{\partial^{\,l}\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\mathbf{i}}}\;ds \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{i\upsilon_{n}}{i^{\,l}}\, \mathbf{F}_{n,l}(t)\big[\nabla^{(l)}_{\boldsymbol{\upsilon}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)\;ds . \]

For the second r.h.s. sum we have

\[ \lim_{s\to0}\int\!\!\int_{C_{c}^{\infty,N}\times C_{c}^{\infty,M}} d\Big[\sum_{m=1}^{M}\sum_{\nu=1}^{N} i\upsilon_{\nu}\,G_{\nu m}\big(\mathbf{x}(t),t\big)\, y_{m}(t+s)\Big]\, \exp\Big\{i\sum_{\nu}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\, \exp\Big\{i\sum_{\nu,m'}\upsilon_{\nu}\, G_{\nu m'}\big(\mathbf{x}(t),t\big) \big[y_{m'}(t+s)-y_{m'}(t)\big]\Big\}\, \mathcal{P}_{\mathbf{xy}}(d\mathbf{x},d\mathbf{y}) . \]


Since the stochastic process \( \mathbf{y}(\cdot\,;\cdot): I\times\Omega\to\mathbb{R}^{M} \) has independent increments, and the quantity \( \exp\{i\sum_{\nu}\upsilon_{\nu}x_{\nu}(t)\} \) (together with the coefficients \( G_{\nu m}(\mathbf{x}(t),t) \)) depends only on prior increments, the probabilistic structures of the response and of the increments decouple, and the last expression will be equal to

\[ \lim_{s\to0}\int_{C_{c}^{\infty,N}} d\Big[\phi_{\Delta\mathbf{y}_{s}}\Big( \big(\textstyle\sum_{\nu}\upsilon_{\nu}\, G_{\nu m}(\mathbf{x}(t),t)\big)_{m=1}^{M};\,t,s\Big)\Big]\, \exp\Big\{i\sum_{\nu}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \lim_{s\to0}\int_{C_{c}^{\infty,N}} \frac{\partial\,\phi_{\Delta\mathbf{y}_{s}}\Big( \big(\sum_{\nu}\upsilon_{\nu}\, G_{\nu m}(\mathbf{x}(t),t)\big)_{m=1}^{M};\,t,s\Big)} {\partial s}\, \exp\Big\{i\sum_{\nu}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x})\;ds . \]

As a result we have the desired equation for the characteristic function

\[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{i\upsilon_{n}}{i^{\,l}}\, \mathbf{F}_{n,l}(t)\big[\nabla^{(l)}_{\boldsymbol{\upsilon}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t) \;+\; \lim_{s\to0}\int_{C_{c}^{\infty,N}} \frac{\partial\,\phi_{\Delta\mathbf{y}_{s}}\Big( \big(\sum_{\nu=1}^{N}\upsilon_{\nu}\, G_{\nu m}(\mathbf{x}(t),t)\big)_{m=1}^{M};\,t,s\Big)} {\partial s}\, \exp\Big\{i\sum_{\nu=1}^{N}\upsilon_{\nu}\,x_{\nu}(t)\Big\}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) . \qquad\blacksquare \]

Remark 3.11 : We must note that the essential property for the proof of the Fokker-Planck equation is the independent-increment property of the excitation, since it allows us to decouple the probabilistic structures of the excitation and the response in the last term of the functional equation. Hence we can write equations for the response without taking into account the correlation structure of the two stochastic processes. This is not the case when the stochastic excitation does not possess the independent-increment property; then we must consider the correlation structure of the problem.

Remark 3.12 : As we mentioned at the beginning, the above proof of the Fokker-Planck equation is based only on the concept of Functional Differential Equations; beyond the independent-increment property of the excitation, no additional probabilistic assumptions are used.

Remark 3.13 : The limit procedure with respect to s is carried out to take into account the discontinuity of the excitation.

Corollary 3.14 : Let the dynamical system Σ2 be described by (31)-(32) of Section 5.2.4, with the excitation process being the standard Wiener process \( \mathbf{W}(t;\beta) \), \( t\in I \), with intensity vector \( \mathbf{d}_{W}(t) \), \( t\in I \), i.e., with characteristic function of increments

( )tWd , t I∈ , i.e., with characteristic function of increments


\[ \phi_{\Delta\mathbf{W}_{s}}\big(y_{1},\dots,y_{M};\,t\big) \;=\; \exp\Big\{-\frac{1}{2}\Big( \sum_{i=1}^{M} d_{W,i}(t)\,y_{i}^{2}\Big)\,s\Big\} \]

then eq. (26) takes the form

\[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \;=\; \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{i\upsilon_{n}}{i^{\,l}}\, \mathbf{F}_{n,l}(t)\big[\nabla^{(l)}_{\boldsymbol{\upsilon}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t) \;-\;\frac{1}{2} \sum_{\nu_{1},\nu_{2}=1}^{N}\upsilon_{\nu_{1}}\upsilon_{\nu_{2}} \sum_{m=1}^{M} d_{W,m}(t) \sum_{k_{1},k_{2}=0}^{K} \sum_{|\mathbf{i}|=k_{1}}\sum_{|\mathbf{j}|=k_{2}} \frac{G^{\,\nu_{1}m}_{\mathbf{i}}(t)\, G^{\,\nu_{2}m}_{\mathbf{j}}(t)}{i^{\,k_{1}+k_{2}}}\; \frac{\partial^{\,k_{1}+k_{2}} \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\,\mathbf{i}+\mathbf{j}}} \qquad (28) \]

where \( G_{\nu m}(\mathbf{x},t)= \sum_{k=0}^{K}\sum_{|\mathbf{i}|=k} G^{\,\nu m}_{\mathbf{i}}(t)\,\mathbf{x}^{\mathbf{i}} \) is the polynomial (Taylor) expansion of the diffusion coefficients,

and \( \phi_{\mathbf{x}}(\cdot,\cdot):\mathbb{R}^{N}\times I\to\mathbb{C} \) is the characteristic function of the system response \( \mathbf{x}(t;\beta) \).

Proof : By direct calculation we have

\[ \lim_{s\to0}\int_{C_{c}^{\infty,N}} \frac{\partial\,\phi_{\Delta\mathbf{W}_{s}}\Big( \big(\sum_{\nu}\upsilon_{\nu}\, G_{\nu m}(\mathbf{x}(t),t)\big)_{m=1}^{M};\,t,s\Big)} {\partial s}\; e^{\,i\sum_{\nu}\upsilon_{\nu}x_{\nu}(t)}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; -\frac{1}{2}\sum_{m=1}^{M} d_{W,m}(t) \int_{C_{c}^{\infty,N}} \Big(\sum_{\nu=1}^{N}\upsilon_{\nu}\, G_{\nu m}\big(\mathbf{x}(t),t\big)\Big)^{2}\, e^{\,i\sum_{\nu}\upsilon_{\nu}x_{\nu}(t)}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \]

\[ \;=\; -\frac{1}{2}\sum_{\nu_{1},\nu_{2}=1}^{N} \upsilon_{\nu_{1}}\upsilon_{\nu_{2}} \sum_{m=1}^{M} d_{W,m}(t) \int_{C_{c}^{\infty,N}} G_{\nu_{1}m}\big(\mathbf{x}(t),t\big)\, G_{\nu_{2}m}\big(\mathbf{x}(t),t\big)\, e^{\,i\sum_{\nu}\upsilon_{\nu}x_{\nu}(t)}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \qquad (29) \]

Using the component-wise Taylor expansion of \( G_{\nu m}(\mathbf{x},t)=\sum_{k}\sum_{|\mathbf{i}|=k} G^{\,\nu m}_{\mathbf{i}}(t)\,\mathbf{x}^{\mathbf{i}} \), the last integral becomes

\[ \int_{C_{c}^{\infty,N}} G_{\nu_{1}m}\big(\mathbf{x}(t),t\big)\, G_{\nu_{2}m}\big(\mathbf{x}(t),t\big)\, e^{\,i\sum_{\nu}\upsilon_{\nu}x_{\nu}(t)}\, \mathcal{P}_{\mathbf{x}}(d\mathbf{x}) \;=\; \sum_{k_{1},k_{2}=0}^{K} \sum_{|\mathbf{i}|=k_{1}}\sum_{|\mathbf{j}|=k_{2}} \frac{G^{\,\nu_{1}m}_{\mathbf{i}}(t)\, G^{\,\nu_{2}m}_{\mathbf{j}}(t)}{i^{\,k_{1}+k_{2}}}\; \frac{\partial^{\,k_{1}+k_{2}} \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)} {\partial\boldsymbol{\upsilon}^{\,\mathbf{i}+\mathbf{j}}} . \]

Hence we have proved the desired equation. ■


Remark 3.15 : Taking the (inverse) Fourier transform of equation (28), we have for the l.h.s. term

\[ \mathcal{F}^{-1}\!\left[ \frac{\partial\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)}{\partial t} \right] \;=\; \frac{\partial f_{\mathbf{x}}(\mathbf{x},t)}{\partial t} \]

and for the first r.h.s. term

\[ \mathcal{F}^{-1}\!\left[ \sum_{n=1}^{N}\sum_{l=0}^{L}\frac{i\upsilon_{n}}{i^{\,l}}\, \mathbf{F}_{n,l}(t)\big[\nabla^{(l)}_{\boldsymbol{\upsilon}}\big]\, \phi_{\mathbf{x}}(\boldsymbol{\upsilon},t)\right] \;=\; -\sum_{n=1}^{N}\frac{\partial}{\partial x_{n}} \Big[F_{n}(\mathbf{x},t)\,f_{\mathbf{x}}(\mathbf{x},t)\Big], \qquad t\in I,\ \mathbf{x}\in\mathbb{R}^{N}. \]

For the second r.h.s. term we have, from (29),

\[ \frac{1}{2}\sum_{\nu_{1},\nu_{2}=1}^{N} \frac{\partial^{2}}{\partial x_{\nu_{1}}\partial x_{\nu_{2}}} \Big[\big(\mathbf{G}(\mathbf{x},t)\,\mathbf{d}_{W}(t)\, \mathbf{G}^{T}(\mathbf{x},t)\big)_{\nu_{1}\nu_{2}}\, f_{\mathbf{x}}(\mathbf{x},t)\Big] \]

where

\[ \mathbf{d}_{W}(t) \;=\; \begin{pmatrix} d_{W,1}(t) & & 0\\ & \ddots & \\ 0 & & d_{W,M}(t) \end{pmatrix} \]

is the intensity matrix of the Wiener process \( \mathbf{W}(t;\beta) \), \( t\in I \). Hence we have proved the Fokker-Planck equation for the probability density function

\[ \frac{\partial f_{\mathbf{x}}(\mathbf{x},t)}{\partial t} + \sum_{n=1}^{N}\frac{\partial}{\partial x_{n}} \Big[F_{n}(\mathbf{x},t)\,f_{\mathbf{x}}(\mathbf{x},t)\Big] - \frac{1}{2}\sum_{\nu_{1},\nu_{2}=1}^{N} \frac{\partial^{2}}{\partial x_{\nu_{1}}\partial x_{\nu_{2}}} \Big[\big(\mathbf{G}(\mathbf{x},t)\,\mathbf{d}_{W}(t)\, \mathbf{G}^{T}(\mathbf{x},t)\big)_{\nu_{1}\nu_{2}}\, f_{\mathbf{x}}(\mathbf{x},t)\Big] \;=\;0 \]

for \( (\mathbf{x},t)\in\mathbb{R}^{N}\times I \).
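As a numerical sanity check of the Fokker-Planck equation just derived (an assumed example: the one-dimensional linear case \( F(x)=-kx \), \( G=1 \), whose Gaussian transition density is known in closed form), the following sketch verifies that the exact density satisfies the equation up to grid error.

```python
import numpy as np

# Assumed example: Ornstein-Uhlenbeck system F(x) = -k*x, G = 1, Wiener
# intensity d_w.  Its Gaussian transition density (deterministic initial
# state x0) should satisfy the Fokker-Planck equation just proved:
#     df/dt + d/dx[F(x) f] - (d_w/2) d^2 f/dx^2 = 0 .
k, d_w, x0 = 1.2, 0.5, 1.0

def f(x, t):
    mu = x0 * np.exp(-k * t)
    s2 = d_w * (1.0 - np.exp(-2.0 * k * t)) / (2.0 * k)
    return np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2)

x = np.linspace(-5.0, 5.0, 4001)
t, dt, dx = 0.6, 1e-6, x[1] - x[0]

df_dt = (f(x, t + dt) - f(x, t - dt)) / (2.0 * dt)
drift = np.gradient(-k * x * f(x, t), dx)
diff2 = np.gradient(np.gradient(f(x, t), dx), dx)
residual = np.max(np.abs((df_dt + drift - 0.5 * d_w * diff2)[5:-5]))
print(residual)
```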

Example 3.16 [One-dimensional case] : Let us now consider the specific case of a one-dimensional system excited by a Wiener stochastic process \( W(t;\beta) \), \( t\in I \), with constant intensity \( d_{W} \) and coefficients independent of time. The Fokker-Planck equation takes the form

\[ \frac{\partial f_{x}(t,x)}{\partial t} + \frac{\partial}{\partial x} \Big[F(x)\,f_{x}(t,x)\Big] - \frac{1}{2}\,d_{W}\, \frac{\partial^{2}}{\partial x^{2}} \Big[G^{2}(x)\,f_{x}(t,x)\Big] \;=\;0 \qquad (30) \]

Using the standard method of separation of variables, we assume that

\[ f_{x}(t,x) \;=\; \Lambda(x)\,T(t) \qquad (31) \]

where \( \Lambda \) and \( T \) are functions of \( x \) and \( t \), respectively. Substitution of (31) into (30) gives the two ordinary differential equations


\[ \frac{1}{T(t)}\frac{dT}{dt} \;=\; -\lambda^{2} \qquad (32) \]

\[ \frac{1}{2}\,d_{W}\, \frac{\partial^{2}}{\partial x^{2}} \Big[G^{2}(x)\,\Lambda(x)\Big] - \frac{\partial}{\partial x} \Big[F(x)\,\Lambda(x)\Big] + \lambda^{2}\,\Lambda(x) \;=\;0 \qquad (33) \]

Equation (32) is a simple first-order equation with solution

\[ T(t) \;=\; \exp\big(-\lambda^{2}t\big) \qquad (34) \]

while equation (33) is a second-order equation with variable coefficients. In general, the solution \( \Lambda(x,A,B,\lambda) \) depends on two arbitrary constants \( A \) and \( B \). If the eigenvalues of equation (33) are discrete and distinct, then, by virtue of the linearity of the problem, the solution can be represented symbolically in the form

\[ f_{x}(t,x) \;=\; \sum_{n=0}^{\infty} \Lambda\big(x,A_{n},B_{n},\lambda_{n}\big)\, \exp\big(-\lambda_{n}^{2}t\big) \qquad (35) \]

where \( A_{n},B_{n} \) and \( \lambda_{n} \) are determined from the initial and boundary conditions.

In the general case of variable coefficients \( G^{2}(x) \) and \( F(x) \), the eigenfunction-eigenvalue problem (33) is not easy to solve. Only in a restricted number of cases (special forms of \( G^{2}(x) \) and \( F(x) \)) can a solution be determined in closed form - in terms of special functions. The solution of the Fokker-Planck equation can then be expressed in terms of the eigenfunctions \( \Lambda_{n}(x) \).

For example, if equation (33) is supplemented by the absorbing boundary conditions \( f_{x}(t,\pm a)=0 \) and the initial condition is \( f_{x}(t_{0},x)=\delta(x-x_{0}) \), then the solution is represented as

\[ f_{x}(t,x) \;=\; \sum_{n=0}^{\infty} \frac{\Lambda_{n}(x)\,\Lambda_{n}(x_{0})}{f_{x,st}(x_{0})}\, \exp\big(-\lambda_{n}^{2}(t-t_{0})\big) \qquad (36) \]

where \( f_{x,st}(x) \) is the stationary solution of equation (30) (which will be determined in the sequel) and the functions \( \Lambda_{n}(x) \) are orthonormal with the weight function \( \big[f_{x,st}(x)\big]^{-1} \), i.e.,

\[ \int_{-a}^{a} \Lambda_{n}(x)\,\Lambda_{m}(x)\, \big[f_{x,st}(x)\big]^{-1}\,dx \;=\; \begin{cases} 1, & n=m\\[2pt] 0, & n\neq m \end{cases} \]

The above form of the solution is often used in applications.

Now let us determine the stationary probability density function for the one-dimensional case described above. Note that, for the existence of stationary solutions, certain conditions must be fulfilled by the coefficients \( G^{2}(x) \) and \( F(x) \). These conditions are described extensively, for the one-dimensional case as well as for the general case, in SOBCZYK, K., (Stochastic Differential Equations, p. 154). Now consider the one-dimensional case. Assuming that a stationary solution exists, then, since \( f_{x,st}(x)=\lim_{t\to\infty}f_{x}(t,x) \), the stationary solution does not depend on time or on the initial density \( f_{0}(x) \). The Fokker-Planck equation takes the form


\[ \frac{\partial}{\partial x} \Big[F(x)\,f_{x,st}(x)\Big] - \frac{1}{2}\,d_{W}\, \frac{\partial^{2}}{\partial x^{2}} \Big[G^{2}(x)\,f_{x,st}(x)\Big] \;=\;0 \qquad (37) \]

or, after integration with respect to \( x \),

\[ \frac{1}{2}\,d_{W}\,\frac{d}{dx} \Big[G^{2}(x)\,f_{x,st}(x)\Big] - f_{x,st}(x)\,F(x) \;=\; -\,\Phi_{0} \qquad (38) \]

where \( \Phi_{0} \) is a constant (the stationary probability flux). The last equation is a first-order inhomogeneous equation for the function \( h(x)=d_{W}\,G^{2}(x)\,f_{x,st}(x) \):

\[ \frac{dh(x)}{dx} - \frac{2F(x)}{d_{W}\,G^{2}(x)}\,h(x) \;=\; -\,2\,\Phi_{0} \qquad (39) \]

Hence, the general solution for $f_{x,st}(x)$ is

$$f_{x,st}(x)=\frac{C}{d_{W}^{2}\,G^{2}(x)}\exp\Bigg\{\int_{x'}^{x}\frac{2\,F(\xi)}{d_{W}^{2}\,G^{2}(\xi)}\,d\xi\Bigg\}-\frac{2\,\Phi_{0}}{d_{W}^{2}\,G^{2}(x)}\int_{x'}^{x}\exp\Bigg\{\int_{y}^{x}\frac{2\,F(\xi)}{d_{W}^{2}\,G^{2}(\xi)}\,d\xi\Bigg\}\,dy \qquad (40)$$

where the constant $C$ is determined by the normalization condition, whereas the constant $\Phi_0$ is found from the boundary conditions; as the lower integration limit $x'$ one can take an arbitrary point of the state space of the process.

If $\Phi_0=0$, equation (38) gives

$$\frac{d}{dx}\Big[\frac{d_{W}^{2}}{2}\,G^{2}(x)\,f_{x,st}(x)\Big]-F(x)\,f_{x,st}(x)=0 \qquad (41)$$

with the following general solution

$$f_{x,st}(x)=\frac{C}{d_{W}^{2}\,G^{2}(x)}\exp\Bigg\{\int_{x'}^{x}\frac{2\,F(\xi)}{d_{W}^{2}\,G^{2}(\xi)}\,d\xi\Bigg\} \qquad (42)$$

The above formula is often used in applications. More general formulas describing the probability density function of Hamiltonian dynamical systems with Wiener excitation and/or white-noise parametric uncertainty are of special interest. For results of this type, as well as the necessary conditions for their existence, we refer to SOIZE, C., (The Fokker-Planck Equation and its Explicit Steady State Solutions), to POLIDORI, D.C., BECK, J.L. & PAPADIMITRIOU, C., (A New Stationary PDF Approximation for Non-Linear Oscillators), and to MOSBAH, H. & FOGLI, M., (An Original Approximate Method for Estimating the Invariant Probability Distribution of a Large Class of Multi-Dimensional Nonlinear Stochastic Oscillators). For other interesting results concerning systems with random excitation of special type we refer to DI PAOLA, M. & SOFI, A., (Approximate Solution of the Fokker-Planck-Kolmogorov Equation) and GUO-KANG ER, (Exponential Closure Method for some Randomly Excited Non-Linear Systems).
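Formula (42) is straightforward to evaluate by quadrature. A minimal sketch, assuming the hypothetical drift $F(x)=-x^{3}$ (the cubic system of eq. (43) with $a=-1$) and $G\equiv 1$, for which (42) reduces to $f_{x,st}(x)\propto e^{-x^{4}/2}$:

```python
import numpy as np

# Quadrature sketch of formula (42) (zero-flux case, Phi_0 = 0):
#   f_st(x) = C/(d_W^2 G^2(x)) * exp{ int_{x'}^{x} 2F(xi)/(d_W^2 G^2(xi)) dxi }.
# F(x) = -x^3 and G = 1 are hypothetical choices used only for illustration.
def stationary_pdf(F, G2, dW2, x):
    integrand = 2.0 * F(x) / (dW2 * G2(x))
    # cumulative trapezoidal integral from the left grid end; the choice of the
    # lower limit x' only rescales the constant C, which is fixed by normalization
    P = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    f = np.exp(P - P.max()) / (dW2 * G2(x))
    return f / np.trapz(f, x)

x = np.linspace(-4.0, 4.0, 4001)
f = stationary_pdf(lambda s: -s**3, lambda s: np.ones_like(s), 1.0, x)
g = np.exp(-x**4 / 2); g /= np.trapz(g, x)    # closed form for this choice of F, G
```

Subtracting the maximum of the exponent before exponentiating avoids overflow for strongly confining drifts.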


160 CHAPTER 5 PROBABILISTIC ANALYSIS OF STOCHASTIC DYNAMICAL SYSTEMS

5.3.4. Partial Differential Equation for (N+M)-dimensional joint (excitation-response) characteristic functions in the case of general m.s.-Continuous Excitation
In this section we shall derive a partial differential equation for the joint characteristic function of the excitation and the response when the excitation is a m.s.-continuous stochastic process. In contrast with the previous case of an independent-increment process, where the randomness of the excitation 'regenerates' at every time instant, here the stochasticity evolves in a smoother way, as a result of the finite correlation time of the excitation. This means that,

although the response $x(t;\beta)$ depends only on the past history of the excitation $y(s;\beta)$, $t_0\le s\le t$ (Principle of Causality), the correlation between $x(t;\beta)$ and $y(t+\varepsilon;\beta)$, with $\varepsilon>0$, may not be zero, since $C_{yy}(t,t+\varepsilon)\neq 0$.

Because of this last property of the excitation process, i.e. its finite correlation time, we cannot use the same arguments as in the previous case of an independent-increment process. Our analysis will be a generalization of the Liouville equation. We shall use the same style of presentation as before, i.e., first a special case will be considered and then the general problem will be addressed. Let us consider a first-order equation with cubic nonlinearity

$$\dot{x}(t;\beta)=a\,x^{3}(t;\beta)+y(t;\beta)\qquad \text{a.s.},\; t\ge t_{0} \qquad (43)$$
$$x(t_{0};\beta)=x_{0}(\beta)\qquad \text{a.s.} \qquad (44)$$

where $y(\cdot\,;\cdot):I\times\Omega\to\mathbb{R}$ is a m.s.-continuous stochastic process defined on the probability space $(\Omega,\mathcal{U}(\Omega),\mathcal{P})$ with the induced probability space $(C_{c}^{\infty},\mathcal{U}(C_{c}^{\infty}),\mathcal{P}_{y})$ and characteristic functional $\mathbf{Y}_{y}(v)=\int_{C_{c}^{\infty}}\exp\{i\langle y,v\rangle\}\,\mathcal{P}_{y}(dy)$. The initial condition $x_{0}(\cdot):\Omega\to\mathbb{R}$ is described by a random variable with the induced probability space $(\mathbb{R},\mathcal{U}(\mathbb{R}),\mathcal{P}_{x_{0}})$ and characteristic function $\mathbf{Y}_{0}(a)$, $a\in\mathbb{R}$. By Theorem 2.1 we know that the joint characteristic functional satisfies

$$\frac{d}{dt}\Bigg[\frac{\delta\mathbf{Y}_{xy}(u,v)}{\delta u(t)}\Bigg]=-\,a\,\frac{\delta^{3}\mathbf{Y}_{xy}(u,v)}{\delta u(t)^{3}}+\frac{\delta\mathbf{Y}_{xy}(u,v)}{\delta v(t)},\qquad u,v\in C_{c}^{\infty},\; t\ge t_{0} \qquad (45)$$

$$\mathbf{Y}_{xy}\big(a\,\delta(\cdot-t_{0}),\,0\big)=\mathbf{Y}_{0}(a),\qquad a\in\mathbb{R} \qquad (46)$$

– Derivation of a P.D.E. for the joint characteristic function $\phi_{xy}(\upsilon,t;v,s)$
We shall now derive a partial differential equation for the characteristic function. For the l.h.s. term of (45) we clearly have

$$\frac{d}{dt}\frac{\delta\mathbf{Y}_{xy}(u,v)}{\delta u(t)}=\frac{d}{dt}\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,x(t)\,\exp\{i\langle x,u\rangle+i\langle y,v\rangle\}\;\mathcal{P}_{xy}(dx,dy)=$$
$$=\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,\frac{dx(t)}{dt}\,\exp\{i\langle x,u\rangle+i\langle y,v\rangle\}\;\mathcal{P}_{xy}(dx,dy)$$


Taking the r.h.s. of the last equality and substituting $u=\upsilon\,\delta(\cdot-t)$ and $v=v\,\delta(\cdot-s)$, $\upsilon,v\in\mathbb{R}$, we obtain

$$\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,\frac{dx(t)}{dt}\,\exp\{i\langle x,u\rangle+i\langle y,v\rangle\}\;\mathcal{P}_{xy}(dx,dy)=\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,\frac{dx(t)}{dt}\,\exp\{i\upsilon x(t)+ivy(s)\}\;\mathcal{P}_{xy}(dx,dy)=$$
$$=\frac{1}{\upsilon}\,\frac{\partial}{\partial t}\iint_{C_{c}^{\infty}\times C_{c}^{\infty}}\exp\{i\upsilon x(t)+ivy(s)\}\;\mathcal{P}_{xy}(dx,dy)=\frac{1}{\upsilon}\,\frac{\partial}{\partial t}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\{i\upsilon x+ivy\}\,f_{xy}(x,t;y,s)\,dx\,dy=\frac{1}{\upsilon}\,\frac{\partial\phi_{xy}(\upsilon,t;v,s)}{\partial t}$$

For the first r.h.s. term we have,

$$-\,a\,\frac{\delta^{3}\mathbf{Y}_{xy}\big(\upsilon\,\delta(\cdot-t),v\,\delta(\cdot-s)\big)}{\delta u(t)^{3}}=-\,a\iint_{C_{c}^{\infty}\times C_{c}^{\infty}}\big(i\,x(t)\big)^{3}\exp\{i\upsilon x(t)+ivy(s)\}\;\mathcal{P}_{xy}(dx,dy)=-\,a\,\frac{\partial^{3}\phi_{xy}(\upsilon,t;v,s)}{\partial\upsilon^{3}}$$

Finally for the second r.h.s. term we have

$$\frac{\delta\mathbf{Y}_{xy}\big(\upsilon\,\delta(\cdot-t),v\,\delta(\cdot-s)\big)}{\delta v(t)}=\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,y(t)\,\exp\{i\upsilon x(t)+ivy(s)\}\;\mathcal{P}_{xy}(dx,dy)$$

Taking the limit $s\to t$, and since the excitation has continuous paths, we will have

$$\iint_{C_{c}^{\infty}\times C_{c}^{\infty}} i\,y(t)\,\exp\{i\upsilon x(t)+ivy(t)\}\;\mathcal{P}_{xy}(dx,dy)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} i\,y\,\exp\{i\upsilon x+ivy\}\,f_{xy}(x,t;y,t)\,dx\,dy=\frac{\partial\phi_{xy}(\upsilon,t;v,t)}{\partial v}$$

Thus we have proved the following partial differential equation involving the joint characteristic function,

$$\left.\frac{\partial\phi_{xy}(\upsilon,t;v,s)}{\partial t}\right|_{s=t}+a\,\upsilon\,\frac{\partial^{3}\phi_{xy}(\upsilon,t;v,t)}{\partial\upsilon^{3}}=\upsilon\,\frac{\partial\phi_{xy}(\upsilon,t;v,t)}{\partial v},\qquad \upsilon,v\in\mathbb{R},\; t\in T \qquad (47)$$

with the additional condition

$$\phi_{xy}(0,t;v,s)=\phi_{y}(v,s)=\mathbf{Y}_{y}\big(v\,\delta(\cdot-s)\big),\qquad v\in\mathbb{R} \qquad (48)$$

and the initial condition

$$\phi_{xy}(\upsilon,t_{0};0,s)=\phi_{x}(\upsilon,t_{0})=\mathbf{Y}_{0}(\upsilon),\qquad \upsilon\in\mathbb{R}$$


We note that the partial differential equation for the joint characteristic function is of the Liouville-equation form, with an additional condition for the marginal $\phi_{y}(v,s)$. In other words, we treat the excitation term $y(t;\beta)$ as a state-space variable which does not participate in the dynamical part of the equation. Instead, the additional condition (48) is fulfilled.

– Derivation of moment equations for eq. (43) based on eq. (47)
It is worth noticing that equation (47) is equivalent to the infinite set of moment equations

$$\frac{1}{n+1}\,\frac{d}{dt}\,E\Big[x^{n+1}(t;\beta)\,y^{m}(s;\beta)\Big]\bigg|_{s=t}=a\,E\Big[x^{n+3}(t;\beta)\,y^{m}(t;\beta)\Big]+E\Big[x^{n}(t;\beta)\,y^{m+1}(t;\beta)\Big] \qquad (49)$$

for $n=0,1,\ldots$ and $m=0,1,\ldots$. The proof of the above set of moment equations for eq. (43) can be found in BERAN, M.J., (Statistical Continuum Mechanics). Let us now derive the above set directly from eq. (47). From Chapter I we have the equation connecting the characteristic function with the moments of a stochastic process. For the examined case, the suitable form of this equation is

$$i^{\,n+m}\,E\Big[x^{n}(t;\beta)\,y^{m}(s;\beta)\Big]=\left.\frac{\partial^{\,n+m}\phi_{xy}(\upsilon,t;v,s)}{\partial\upsilon^{n}\,\partial v^{m}}\right|_{\upsilon=0,\,v=0},\qquad n,m=0,1,\ldots \qquad (50)$$

Moreover we have,

$$\left.\frac{\partial^{\,n+m+1}\phi_{xy}(\upsilon,t;v,s)}{\partial t\,\partial\upsilon^{n}\,\partial v^{m}}\right|_{\upsilon=0,\,v=0}=i^{\,n+m}\,\frac{d}{dt}\,E\Big[x^{n}(t;\beta)\,y^{m}(s;\beta)\Big]=i^{\,n+m}\,n\,E\bigg[x^{n-1}(t;\beta)\,\frac{dx(t;\beta)}{dt}\,y^{m}(s;\beta)\bigg],\qquad n,m=0,1,\ldots \qquad (51)$$

Now, taking $n$ partial derivatives with respect to $\upsilon$ and $m$ partial derivatives with respect to $v$ in eq. (47), and using the last two equations (50) and (51), we obtain the set of moment equations (49).
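The hierarchy (49) can be checked by direct Monte Carlo simulation of eq. (43). A sketch, with an illustrative smooth excitation $y(t;\beta)=A(\beta)\cos(2t+\varphi(\beta))$; the value $a=-1$ and all distributions are hypothetical choices:

```python
import numpy as np

# Monte Carlo sanity check of the moment hierarchy (49) for the cubic system (43):
#   (1/(n+1)) dE[x^{n+1} y^m]/dt |_{s=t} = a E[x^{n+3} y^m] + E[x^n y^{m+1}].
rng = np.random.default_rng(0)
M, a, dt, T = 100_000, -1.0, 1e-3, 1.0
A = 0.5 * rng.standard_normal(M)              # random amplitude of the excitation
ph = 2.0 * np.pi * rng.random(M)              # random phase
x = 0.3 * rng.standard_normal(M)              # random initial condition x0(beta)
K = int(T / dt)
snap = {}
for k in range(K + 2):
    t = k * dt
    y = A * np.cos(2.0 * t + ph)
    if k in (K // 2 - 1, K // 2, K // 2 + 1):
        snap[k] = (x.copy(), y.copy())        # states around t = T/2
    x = x + dt * (a * x**3 + y)               # explicit Euler step of (43)
(xq, _), (xm, ym), (xp, _) = snap[K // 2 - 1], snap[K // 2], snap[K // 2 + 1]
n, m = 1, 0                                   # (1/2) dE[x^2]/dt = a E[x^4] + E[x y]
lhs = (np.mean(xp**(n + 1)) - np.mean(xq**(n + 1))) / (2 * dt * (n + 1))
rhs = a * np.mean(xm**(n + 3) * ym**m) + np.mean(xm**n * ym**(m + 1))
```

The time derivative on the l.h.s. is estimated by a central difference over the same sample paths, so the Monte Carlo noise largely cancels between the two sides.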

– Derivation of a P.D.E. for the joint density function $f_{xy}(x,t;y,t)$
An alternative form of equation (47) results by applying the Fourier transform to the above equations. Before that, we recall some useful relations connecting the probability density function and the characteristic function of stochastic processes, written in a form suitable for our study. Based on simple properties of the Fourier transform, we have


$$\mathcal{F}\bigg[\frac{\partial f_{xy}(x,t;y,s)}{\partial x}\bigg]=-\,i\,u\;\phi_{xy}(u,t;v,s),\qquad \mathcal{F}\big[i\,x\,f_{xy}(x,t;y,s)\big]=\frac{\partial\phi_{xy}(u,t;v,s)}{\partial u}$$

Based on the above equations we can easily prove the more general relation

$$\mathcal{F}\bigg[\frac{\partial^{m}}{\partial x^{m}}\Big(x^{n}\,f_{xy}(x,t;y,s)\Big)\bigg]=\frac{(-\,i\,u)^{m}}{i^{\,n}}\,\frac{\partial^{n}\phi_{xy}(u,t;v,s)}{\partial u^{n}}$$

By direct application of the Fourier transform, and using the last equation, we have

$$\left.\frac{\partial f_{xy}(x,t;y,s)}{\partial t}\right|_{s=t}+\frac{\partial}{\partial x}\Big[\big(a\,x^{3}+y\big)\,f_{xy}(x,t;y,t)\Big]=0,\qquad x,y\in\mathbb{R},\; t\in I \qquad (52)$$

$$\int_{-\infty}^{\infty}f_{xy}(x,t;y,s)\,dx=f_{y}(y,s)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbf{Y}_{y}\big(\nu\,\delta(\cdot-s)\big)\,e^{-i\nu y}\,d\nu,\qquad y\in\mathbb{R} \qquad (53)$$

$$\int_{-\infty}^{\infty}f_{xy}(x,t_{0};y,s)\,dy=f_{x_{0}}(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbf{Y}_{0}(\upsilon)\,e^{-i\upsilon x}\,d\upsilon,\qquad x\in\mathbb{R} \qquad (54)$$

Thus, equation (52), which is equivalent to the Liouville equation, expresses the conservation of 'probability' for the state-space variable $x(t;\beta)$. The excitation term expresses the transfer-interaction of 'probability' from the stochastic process $y(t;\beta)$ to the state-space variable $x(t;\beta)$. Of course, the stochastic characteristics of the excitation $y(t;\beta)$ are introduced through condition (53).

– Numerical scheme for the solution of eq. (52)
For the case of a first-order equation with cubic nonlinearity we proved at the beginning of this section the following partial differential equation involving the joint characteristic function,

$$\left.\frac{\partial\phi_{xy}(\upsilon,t;v,s)}{\partial t}\right|_{s=t}+a\,\upsilon\,\frac{\partial^{3}\phi_{xy}(\upsilon,t;v,t)}{\partial\upsilon^{3}}=\upsilon\,\frac{\partial\phi_{xy}(\upsilon,t;v,t)}{\partial v},\qquad \upsilon,v\in\mathbb{R},\; t\in T$$

with the additional condition

$$\phi_{xy}(0,t;v,s)=\phi_{y}(v,s)=\mathbf{Y}_{y}\big(v\,\delta(\cdot-s)\big),\qquad v\in\mathbb{R}$$

and the initial condition

$$\phi_{xy}(\upsilon,t_{0};0,s)=\phi_{x}(\upsilon,t_{0})=\mathbf{Y}_{0}(\upsilon),\qquad \upsilon\in\mathbb{R}$$

Taking the Fourier transform of the above equations we have the equivalent set of equations for the joint probability density function,


$$\left.\frac{\partial f_{xy}(x,t;y,s)}{\partial t}\right|_{s=t}+\frac{\partial}{\partial x}\Big[\big(a\,x^{3}+y\big)\,f_{xy}(x,t;y,t)\Big]=0,\qquad x,y\in\mathbb{R},\; t\in T \qquad (55)$$

$$\int_{-\infty}^{\infty}f_{xy}(x,t;y,s)\,dx=f_{y}(y,s)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbf{Y}_{y}\big(\nu\,\delta(\cdot-s)\big)\,e^{-i\nu y}\,d\nu,\qquad y\in\mathbb{R} \qquad (56)$$

$$\int_{-\infty}^{\infty}f_{xy}(x,t_{0};y,s)\,dy=f_{x_{0}}(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbf{Y}_{0}(\upsilon)\,e^{-i\upsilon x}\,d\upsilon,\qquad x\in\mathbb{R} \qquad (57)$$

For the numerical solution of the above problem we will use a combination of a bivariate copula function, representing the correlation structure of the excitation and the response at time t, and a kernel-density representation for the marginal $f_{x}(x,t)$. We briefly recall that a bivariate copula function is a distribution function with given marginals. For more information we refer to JOE, H., (Multivariate Models and Dependence Concepts). For this example we will use the Plackett copula, which gives the joint density representation at time t,

$$f_{xy}(x,t;y,t)=C\big(F_{x}(x,t),F_{y}(y,t);\psi(t)\big)\cdot f_{x}(x,t)\cdot f_{y}(y,t) \qquad (58)$$

with,

$$C\big(x_{1},x_{2};\psi(t)\big)=\frac{\psi(t)\,\Big[1+\big(\psi(t)-1\big)\big(x_{1}+x_{2}-2\,x_{1}x_{2}\big)\Big]}{\Big\{\Big[1+\big(\psi(t)-1\big)\big(x_{1}+x_{2}\big)\Big]^{2}-4\,\psi(t)\big(\psi(t)-1\big)\,x_{1}x_{2}\Big\}^{3/2}},\qquad \psi(t)>0 \qquad (59)$$

Additionally, we assume that the correlation structure between the response and the excitation, which is described by the function $\psi(t)$, does not change considerably for the joint density function $f_{xy}(x,t;y,s)$ when the time difference $t-s$ is sufficiently small. In this way we can assume that, for small $t-s$, the following representation for the joint density function holds

$$f_{xy}(x,t;y,s)=C\big(F_{x}(x,t),F_{y}(y,s);\psi(t)\big)\cdot f_{x}(x,t)\cdot f_{y}(y,s) \qquad (60)$$

Hence, substituting representation (60) into eq. (55), we will have

$$\bigg\{\frac{\partial C}{\partial F_{x}}\,\frac{\partial F_{x}(x,t)}{\partial t}+\frac{\partial C}{\partial\psi}\,\psi'(t)\bigg\}\,f_{x}(x,t)+C\,\frac{\partial f_{x}(x,t)}{\partial t}+\frac{\partial}{\partial x}\Big[\big(a\,x^{3}+y\big)\,C\,f_{x}(x,t)\Big]=0,\qquad x,y\in\mathbb{R},\; t\in I \qquad (61)$$

For the last equation the unknowns are the functions $f_{x}(x,t)$ and $\psi(t)$. Now, using a kernel-density representation for the marginal density $f_{x}(x,t)$, multiplying by each kernel and integrating over x, we obtain a set of integro-differential equations for the parameters of the kernel density functions and for the function $\psi(t)$. This approach will be made clearer in Section 5.4, where a kernel representation will be considered at the level of the characteristic functional. An alternative approach can be based on a kernel representation of the correlation structure, i.e., two-dimensional kernels that include the correlation structure. For more details we refer to Section 5.4.


– Derivation of a P.D.E. for the joint characteristic function $\phi_{x\dot{x}y\dot{y}}(\upsilon_{1},t;\upsilon_{2},t;v_{1},t;v_{2},t)$
For many practical applications it is often needed to know the joint probability of the response and its derivative. We shall show that for the joint probability of the response and its first derivative we can easily obtain the desired density by a simple transformation of the random variables $x(t;\beta),\,y(t;\beta)$, using their joint density derived before. In contrast, for higher-order derivatives another approach has to be considered. More precisely, the joint characteristic function of the response, its derivative and the excitation can be easily derived using the equation governing the system,

$$\dot{x}(t;\beta)=a\,x^{3}(t;\beta)+y(t;\beta)\qquad \text{a.s.},\; t\ge t_{0}$$

For the joint probability of the second derivative we consider the following set of equations

$$\dot{x}(t;\beta)=a\,x^{3}(t;\beta)+y(t;\beta)$$
$$\ddot{x}(t;\beta)=3\,a\,x^{2}(t;\beta)\,\dot{x}(t;\beta)+\dot{y}(t;\beta)\qquad \text{a.s.},\; t\ge t_{0} \qquad (62)$$

where the second equation has been derived by direct differentiation of the first one. Now, using the same arguments as for the derivation of the joint characteristic function $\phi_{xy}(\upsilon,t;v,t)$, we have the equation

$$\left.\frac{\partial\phi_{x\dot{x}y\dot{y}}(t,s)}{\partial t}\right|_{s=t}+a\,\upsilon_{1}\,\frac{\partial^{3}\phi_{x\dot{x}y\dot{y}}(t,t)}{\partial\upsilon_{1}^{3}}+3\,a\,\upsilon_{2}\,\frac{\partial^{3}\phi_{x\dot{x}y\dot{y}}(t,t)}{\partial\upsilon_{1}^{2}\,\partial\upsilon_{2}}=\upsilon_{1}\,\frac{\partial\phi_{x\dot{x}y\dot{y}}(t,t)}{\partial v_{1}}+\upsilon_{2}\,\frac{\partial\phi_{x\dot{x}y\dot{y}}(t,t)}{\partial v_{2}},\qquad \upsilon_{1},\upsilon_{2},v_{1},v_{2}\in\mathbb{R},\; t\in T$$

where, for simplicity, we write the joint characteristic function $\phi_{x\dot{x}y\dot{y}}(\upsilon_{1},t;\upsilon_{2},t;v_{1},s;v_{2},s)$ as $\phi_{x\dot{x}y\dot{y}}(t,s)$.

Moreover the additional condition holds

$$\phi_{x\dot{x}y\dot{y}}(0,t;0,t;v_{1},s;v_{2},s)=\phi_{y\dot{y}}(v_{1},s;v_{2},s)=\mathbf{Y}_{y}\big(v_{1}\,\delta(\cdot-s)+v_{2}\,\dot{\delta}(\cdot-s)\big),\qquad v_{1},v_{2}\in\mathbb{R}$$

and the initial condition

$$\phi_{x\dot{x}y\dot{y}}(\upsilon_{1},t_{0};\upsilon_{2},t_{0};0,s;0,s)=\phi_{x\dot{x}}(\upsilon_{1},t_{0};\upsilon_{2},t_{0})=\mathbf{Y}_{0}(\upsilon_{1},\upsilon_{2}),\qquad \upsilon_{1},\upsilon_{2}\in\mathbb{R}$$

Similarly we can derive equations for joint characteristic functions of higher derivatives.

– Derivation of a P.D.E. for the joint ch. function $\phi_{xxyy}(\upsilon_{1},t;\upsilon_{2},t+\tau;v_{1},t;v_{2},t+\tau)$
For many applications it is often needed to know the joint probability of the response at two distinct time instants. More precisely, the joint characteristic function of the response at two distinct time instants $t$ and $t+\tau$ can be derived by considering the following set of equations


$$\dot{x}(t;\beta)=a\,x^{3}(t;\beta)+y(t;\beta)$$
$$\dot{x}(t+\tau;\beta)=a\,x^{3}(t+\tau;\beta)+y(t+\tau;\beta)\qquad \text{a.s.},\; t\ge t_{0} \qquad (63)$$

Now, using the same arguments as for the derivation of the joint characteristic function $\phi_{xy}(\upsilon,t;v,t)$, we have the equation

$$\left.\frac{\partial\phi_{xxyy}(t,s)}{\partial t}\right|_{s=t}+a\,\upsilon_{1}\,\frac{\partial^{3}\phi_{xxyy}(t,t)}{\partial\upsilon_{1}^{3}}+a\,\upsilon_{2}\,\frac{\partial^{3}\phi_{xxyy}(t,t)}{\partial\upsilon_{2}^{3}}=\upsilon_{1}\,\frac{\partial\phi_{xxyy}(t,t)}{\partial v_{1}}+\upsilon_{2}\,\frac{\partial\phi_{xxyy}(t,t)}{\partial v_{2}},\qquad \upsilon_{1},\upsilon_{2},v_{1},v_{2}\in\mathbb{R},\; t\in I$$

where, for simplicity, we write the joint characteristic function $\phi_{xxyy}(\upsilon_{1},t;\upsilon_{2},t+\tau;v_{1},s;v_{2},s+\tau)$ as $\phi_{xxyy}(t,s)$.

Moreover the additional condition holds

$$\phi_{xxyy}(0,t;0,t+\tau;v_{1},s;v_{2},s+\tau)=\phi_{yy}(v_{1},s;v_{2},s+\tau)=\mathbf{Y}_{y}\big(v_{1}\,\delta(\cdot-s)+v_{2}\,\delta(\cdot-s-\tau)\big),\qquad v_{1},v_{2}\in\mathbb{R}$$

and the initial condition

$$\phi_{xxyy}(\upsilon_{1},t_{0};\upsilon_{2},t_{0}+\tau;0,s;0,s+\tau)=\phi_{xx}(\upsilon_{1},t_{0};\upsilon_{2},t_{0}+\tau),\qquad \upsilon_{1},\upsilon_{2}\in\mathbb{R}$$

Similarly, we can derive equations for joint characteristic functions of the response at more than two time instants.

– Derivation of a P.D.E. for the joint characteristic function $\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)$ for a general system
For the general case we have the following

Theorem 3.17 : Let the dynamical system $\Sigma_1$ be described by (18)-(19) of Section 5.2.3. Then the FDE

$$\frac{1}{i}\,\frac{d}{dt}\Big[\nabla_{\mathbf{u}(t)}\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})\Big]=\sum_{k=0}^{K}\sum_{l=0}^{L}\frac{1}{i^{\,k+l}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}\mathbf{F}(\mathbf{0},\mathbf{0},t)\,\nabla_{\mathbf{v}(t)}^{k}\nabla_{\mathbf{u}(t)}^{l}\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})$$

$$\mathbf{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}_{0}\,\delta(\cdot-t_{0}),\,\mathbf{0}\big)=\mathbf{Y}_{0}(\boldsymbol{\upsilon}_{0}),\qquad \boldsymbol{\upsilon}_{0}\in\mathbb{R}^{N}$$

for every $\mathbf{u}\in C_{c}^{\infty}(I,\mathbb{R}^{N})$, $\mathbf{v}\in C_{c}^{\infty}(I,\mathbb{R}^{M})$ and $t\in I$, which describes the system, can be reduced to the partial integro-differential equation

$$\left.\frac{\partial\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},s)}{\partial t}\right|_{s=t}+\sum_{n=1}^{N}\sum_{l=0}^{L}\frac{\upsilon_{n}}{i^{\,l+1}}\,\nabla_{\mathbf{x}}^{l}F_{n}(\mathbf{0},\mathbf{0},t)\,\nabla_{\boldsymbol{\upsilon}}^{l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)=$$
$$=\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{l=0}^{L}\frac{\upsilon_{n}}{i^{\,k+l-1}}\,\nabla_{\mathbf{y}}^{k}\nabla_{\mathbf{x}}^{l}F_{n}(\mathbf{0},\mathbf{0},t)\,\nabla_{\boldsymbol{\nu}}^{k}\nabla_{\boldsymbol{\upsilon}}^{l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t) \qquad (64)$$

with the additional condition,


$$\phi_{\mathbf{xy}}(\mathbf{0},t;\boldsymbol{\nu},s)=\phi_{\mathbf{y}}(\boldsymbol{\nu},s)=\mathbf{Y}_{\mathbf{y}}\big(\boldsymbol{\nu}\,\delta(\cdot-s)\big),\qquad \boldsymbol{\nu}\in\mathbb{R}^{M} \qquad (65)$$

and the initial condition

$$\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t_{0};\mathbf{0},s)=\phi_{\mathbf{x}}(\boldsymbol{\upsilon},t_{0})=\mathbf{Y}_{0}(\boldsymbol{\upsilon}),\qquad \boldsymbol{\upsilon}\in\mathbb{R}^{N} \qquad (66)$$

where $\phi_{\mathbf{xy}}(\cdot\,,\cdot\,;\cdot\,,\cdot):\mathbb{R}^{N}\times I\times\mathbb{R}^{M}\times I\to\mathbb{C}$ is the joint characteristic function of the system response $\mathbf{x}(t;\beta)$ and the excitation $\mathbf{y}(t;\beta)$.

Proof : We consider the componentwise form of the FDE, used for the proof of Theorem 2.1 ($n=1,2,\ldots,N$):

$$\frac{1}{i}\,\frac{d}{dt}\,\frac{\delta\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})}{\delta u_{n}(t)}=\sum_{k=0}^{K}\sum_{l=0}^{L}\;\sum_{\substack{(\mathbf{i},\mathbf{j})\in I^{\,l}\times I^{\,k}\\ |\mathbf{i}|=l\;\wedge\;|\mathbf{j}|=k}}\frac{\partial^{\,k+l}F_{n}(\mathbf{x},\mathbf{y},t)}{\partial\mathbf{x_{i}}\,\partial\mathbf{y_{j}}}\bigg|_{(\mathbf{0},\mathbf{0})}\cdot\frac{1}{i^{\,k+l}}\,\frac{\delta^{\,k+l}\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u_{i}}(t)\,\delta\mathbf{v_{j}}(t)} \qquad (67)$$

We separate the r.h.s. term as follows,

$$\sum_{l=0}^{L}\sum_{\substack{\mathbf{i}\in I^{\,l}\\ |\mathbf{i}|=l}}\frac{\partial^{\,l}F_{n}(\mathbf{x},\mathbf{y},t)}{\partial\mathbf{x_{i}}}\bigg|_{(\mathbf{0},\mathbf{0})}\cdot\frac{1}{i^{\,l}}\,\frac{\delta^{\,l}\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u_{i}}(t)}+\sum_{k=1}^{K}\sum_{l=0}^{L}\;\sum_{\substack{(\mathbf{i},\mathbf{j})\in I^{\,l}\times I^{\,k}\\ |\mathbf{i}|=l\;\wedge\;|\mathbf{j}|=k}}\frac{\partial^{\,k+l}F_{n}(\mathbf{x},\mathbf{y},t)}{\partial\mathbf{x_{i}}\,\partial\mathbf{y_{j}}}\bigg|_{(\mathbf{0},\mathbf{0})}\cdot\frac{1}{i^{\,k+l}}\,\frac{\delta^{\,k+l}\mathbf{Y}_{\mathbf{xy}}(\mathbf{u},\mathbf{v})}{\delta\mathbf{u_{i}}(t)\,\delta\mathbf{v_{j}}(t)}$$

Setting in the system of functional differential equations $\mathbf{u}=(\upsilon_{1},\upsilon_{2},\ldots,\upsilon_{N})\,\delta(\cdot-t)$ and $\mathbf{v}=(\nu_{1},\nu_{2},\ldots,\nu_{M})\,\delta(\cdot-s)$, and using the same arguments as for the proof of the Liouville equation, we have

$$\frac{1}{i}\,\lim_{s\to t}\frac{d}{dt}\,\frac{\delta\mathbf{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\,\delta(\cdot-t),\boldsymbol{\nu}\,\delta(\cdot-s)\big)}{\delta u_{n}(t)}=$$
$$=\frac{1}{i}\,\lim_{s\to t}\frac{d}{dt}\iint i\,x_{n}(t)\,\exp\Big\{i\sum_{\kappa=1}^{N}\upsilon_{\kappa}x_{\kappa}(t)+i\sum_{\kappa=1}^{M}\nu_{\kappa}y_{\kappa}(s)\Big\}\;\mathcal{P}_{\mathbf{xy}}(d\mathbf{x},d\mathbf{y})$$

For the first r.h.s term we have,

$$\lim_{s\to t}\sum_{l=0}^{L}\sum_{\substack{\mathbf{i}\in I^{\,l}\\ |\mathbf{i}|=l}}\frac{\partial^{\,l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}}\cdot\frac{1}{i^{\,l}}\,\frac{\delta^{\,l}\mathbf{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\,\delta(\cdot-t),\boldsymbol{\nu}\,\delta(\cdot-s)\big)}{\delta\mathbf{u_{i}}(t)}=\sum_{l=0}^{L}\sum_{\substack{\mathbf{i}\in I^{\,l}\\ |\mathbf{i}|=l}}\frac{\partial^{\,l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}}\cdot\frac{1}{i^{\,l}}\,\frac{\partial^{\,l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)}{\partial\boldsymbol{\upsilon}_{\mathbf{i}}}$$

Similarly for the second r.h.s. term we have,

$$\lim_{s\to t}\sum_{k=1}^{K}\sum_{l=0}^{L}\;\sum_{\substack{(\mathbf{i},\mathbf{j})\in I^{\,l}\times I^{\,k}\\ |\mathbf{i}|=l\;\wedge\;|\mathbf{j}|=k}}\frac{\partial^{\,k+l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}\,\partial\mathbf{y_{j}}}\cdot\frac{1}{i^{\,k+l}}\,\frac{\delta^{\,k+l}\mathbf{Y}_{\mathbf{xy}}\big(\boldsymbol{\upsilon}\,\delta(\cdot-t),\boldsymbol{\nu}\,\delta(\cdot-s)\big)}{\delta\mathbf{u_{i}}(t)\,\delta\mathbf{v_{j}}(t)}=$$
$$=\sum_{k=1}^{K}\sum_{l=0}^{L}\;\sum_{\substack{(\mathbf{i},\mathbf{j})\in I^{\,l}\times I^{\,k}\\ |\mathbf{i}|=l\;\wedge\;|\mathbf{j}|=k}}\frac{\partial^{\,k+l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}\,\partial\mathbf{y_{j}}}\cdot\frac{1}{i^{\,k+l}}\,\frac{\partial^{\,k+l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)}{\partial\boldsymbol{\upsilon}_{\mathbf{i}}\,\partial\boldsymbol{\nu}_{\mathbf{j}}}$$


where we have used the notation

$$\partial\mathbf{x_{i}}=\partial x_{1}^{i_{1}}\cdot\partial x_{2}^{i_{2}}\cdots\partial x_{N}^{i_{N}},\qquad \mathbf{x_{i}}=x_{1}^{i_{1}}\cdot x_{2}^{i_{2}}\cdots x_{N}^{i_{N}}$$

Multiplying all terms by $i\,\upsilon_{n}$ and summing over $n=1,2,\ldots,N$, we will have for the l.h.s. term,

$$\lim_{s\to t}\iint i\sum_{n=1}^{N}\upsilon_{n}\,\dot{x}_{n}(t)\,\exp\Big\{i\sum_{\kappa=1}^{N}\upsilon_{\kappa}x_{\kappa}(t)+i\sum_{\kappa=1}^{M}\nu_{\kappa}y_{\kappa}(s)\Big\}\;\mathcal{P}_{\mathbf{xy}}(d\mathbf{x},d\mathbf{y})=\lim_{s\to t}\frac{\partial\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},s)}{\partial t}=\left.\frac{\partial\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},s)}{\partial t}\right|_{s=t}$$

Making the summation for the r.h.s. terms we have the partial differential equation,

$$\left.\frac{\partial\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},s)}{\partial t}\right|_{s=t}=\sum_{n=1}^{N}\sum_{l=0}^{L}\sum_{\substack{\mathbf{i}\in I^{\,l}\\ |\mathbf{i}|=l}}\frac{\upsilon_{n}}{i^{\,l-1}}\,\frac{\partial^{\,l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}}\,\frac{\partial^{\,l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)}{\partial\boldsymbol{\upsilon}_{\mathbf{i}}}+$$
$$+\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{l=0}^{L}\;\sum_{\substack{(\mathbf{i},\mathbf{j})\in I^{\,l}\times I^{\,k}\\ |\mathbf{i}|=l\;\wedge\;|\mathbf{j}|=k}}\frac{\upsilon_{n}}{i^{\,k+l-1}}\,\frac{\partial^{\,k+l}F_{n}(\mathbf{0},\mathbf{0},t)}{\partial\mathbf{x_{i}}\,\partial\mathbf{y_{j}}}\,\frac{\partial^{\,k+l}\phi_{\mathbf{xy}}(\boldsymbol{\upsilon},t;\boldsymbol{\nu},t)}{\partial\boldsymbol{\upsilon}_{\mathbf{i}}\,\partial\boldsymbol{\nu}_{\mathbf{j}}}$$

It is easy to verify conditions (65) and (66).

Remark 3.18 : The separation of the r.h.s. that we have made allows us to give a physical meaning to the above equation. Thus, the l.h.s. of equation (64), which is equivalent to the l.h.s. of the Liouville equation, expresses the conservation of 'probability' for the state-space variables $\mathbf{x}(t;\beta)$. The r.h.s. term, which is due to the excitation, expresses the transfer-interaction of 'probability' from the stochastic process $\mathbf{y}(t;\beta)$ to the state-space variables $\mathbf{x}(t;\beta)$. Of course, the stochastic characteristics of the excitation $\mathbf{y}(t;\beta)$ are introduced through condition (65).


5.4. KERNEL REPRESENTATIONS FOR THE CHARACTERISTIC FUNCTIONAL 169

5.4. Kernel representations for the Characteristic Functional
We will now study methods for the efficient numerical solution of the Functional Differential Equations described in Section 5.2.3 of the present chapter. These methods, in contrast to the previous section (based on the reduction of the FDEs to PDEs), will be based on analytical calculations of functional integrals (EGOROV, A.D. & SOBOLEVSKY, P.I. & YANOVICH, L.A., Functional Integrals: Approximate Evaluation and Applications) presented below. More specifically, using the linearity of the Functional Differential Equations, we will apply a convex superposition of characteristic functionals (kernel characteristic functionals). Two different approaches will then be formulated. The first superposes kernel characteristic functionals with given, efficiently chosen parameters, such that the corresponding measures are local in space and time, i.e. in $\mathbb{R}^{N}$. We will explain this property extensively in what follows (Section 5.4.3). Thus we will obtain a Galerkin-type approximation at the functional level. The second approach works in two stages. In the first stage we approximate the initial-condition distribution by a superposition of kernel characteristic functionals. In the second stage we let the characteristic functionals evolve in time with constant amplitudes (those that we computed). After an efficiently chosen time interval we recompute the amplitudes, treating the computed probabilistic state as a new initial condition, using suitable criteria.

5.4.1. Exact Formulas for Integrals of Special Functionals
a) Integration with respect to Independent Increment Processes
We will present the following theorems for the more general case of X being a linear topological space (separable, locally convex). X′ will denote the dual space of linear continuous functionals on X. In what follows, we shall consider, without any loss of generality, the mean value of a Gaussian measure c to be equal to zero. In many cases we may also, without loss of generality, consider the correlation operator of c, $C(x,x)$, to be nondegenerate, i.e., $C(x,x)=0$ if and only if $x=0$ (we have substituted the notation $C_{x,x}$ of Chapter II, for the current section, with $C(x,x)$ to avoid confusion with the dual product). Otherwise, we could find the subspace $X_{0}\subset X$ where the measure is concentrated, and the correlation operator will no longer be degenerate there. The following construction is described and justified in more detail in EGOROV, A.D. & SOBOLEVSKY, P.I. & YANOVICH, L.A., (Functional Integrals: Approximate Evaluation and Applications, p. 16). The Hilbert space which is the closure of the set of functionals of the form $\langle\xi,\cdot\rangle$, $\xi\in X'$, in the space $L_{2}(X,c)$ will be denoted by H, and $\mathfrak{H}\subset X$ denotes the Hilbert subspace dual to H whose closure in X is the support of the measure c. For almost all $x\in X$, a functional $(a,x)$, $a\in\mathfrak{H}$, is defined. It is specified by the series

$$(a,x)=\sum_{k=1}^{\infty}\langle\phi_{k},x\rangle\,(a,e_{k})_{\mathfrak{H}} \qquad (1)$$


where $\phi_{k}$, $e_{k}$, $k=1,2,\ldots$, are orthonormal bases in H and $\mathfrak{H}$, respectively, with $\phi_{k}\in X'$ for all k and $\langle\phi_{k},e_{j}\rangle=\delta_{kj}$. We assume that the space $\mathfrak{H}$ is separable for all spaces X and measures c considered. In particular, this is true for separable Fréchet spaces and for the other spaces considered here. Functionals of the form (1) are called measurable linear functionals. Note that

$$(e_{k},x)=\langle\phi_{k},x\rangle,\qquad (a,h)=\sum_{k=1}^{\infty}\langle\phi_{k},h\rangle\,(a,e_{k})_{\mathfrak{H}}=(a,h)_{\mathfrak{H}}$$

for $a,h\in\mathfrak{H}$. It is an important property of the considered spaces with Gaussian measure that they admit the following expansion

$$x=\sum_{k=1}^{\infty}\langle\phi_{k},x\rangle\,e_{k}=\sum_{k=1}^{\infty}(e_{k},x)\,e_{k} \qquad (2)$$

which converges in the topology of the space X for almost all $x\in X$. Proofs of the convergence of expansion (2) may be found in the cited papers. The definition of H implies that

$$\int_{X}\langle\xi,x\rangle\,\langle\eta,x\rangle\;\mathfrak{c}(dx)=(\xi,\eta)_{H}=C(\xi,\eta)$$

Let T define an isomorphism of the space $\mathfrak{H}$ onto H which assigns the basis $\phi_{k}$ to the basis $e_{k}$. Then

$$\langle\xi,x\rangle=\sum_{k=1}^{\infty}(\xi,\phi_{k})_{H}\,\langle\phi_{k},x\rangle=\sum_{k=1}^{\infty}(T^{-1}\xi,e_{k})_{\mathfrak{H}}\,\langle\phi_{k},x\rangle=(T^{-1}\xi,x) \qquad (3)$$

$$C(\xi,\eta)=(\xi,\eta)_{H}=\sum_{k=1}^{\infty}(T^{-1}\xi,e_{k})_{\mathfrak{H}}\,(T^{-1}\eta,e_{k})_{\mathfrak{H}}=(T^{-1}\xi,T^{-1}\eta)_{\mathfrak{H}}$$

For the proofs of the theorems presented below we will need transformation formulae for integrals with respect to a Gaussian measure under translation and under general linear transformations. Thus, suppose $F(x)$ is a functional to be integrated and $a\in\mathfrak{H}$. Then the transformation formula for an integral with respect to a Gaussian measure under translation is as follows

$$\int_{X}F(x)\;\mathfrak{c}(dx-a)=\int_{X}F(x)\,\exp\Big\{-\frac{1}{2}\,\|a\|_{\mathfrak{H}}^{2}+(a,x)\Big\}\;\mathfrak{c}(dx) \qquad (4)$$

In order to state the next result, we need some definitions. A compact operator A in a Hilbert space is called a Hilbert-Schmidt operator if $\sum_{k=1}^{\infty}\lambda_{k}^{2}<\infty$, where $\lambda_{k}$ ($k=1,2,\ldots$) is the sequence of eigenvalues of A. If the series $\sum_{k=1}^{\infty}\lambda_{k}$ converges, then the operator is called an operator of trace class. Clearly, any trace-class operator is a Hilbert-Schmidt one. The product

$$D_{A}(\lambda)=\prod_{k=1}^{\infty}\big(1-\lambda\,\lambda_{k}\big)$$

where $\lambda_{k}$ ($k=1,2,\ldots$) is the sequence of eigenvalues of A, is called the Fredholm determinant at the point λ for the operator A. The Fredholm determinant of a trace-class operator is finite for any λ. It is an entire function of the complex variable λ with zeroes at the points $\lambda=\lambda_{k}^{-1}$ ($k=1,2,\ldots$). If A is a Hilbert-Schmidt


operator, then the product $\prod_{k=1}^{\infty}(1-\lambda\,\lambda_{k})$ may diverge. For such operators, the determinant of the form

$$\delta_{A}(\lambda)=\prod_{k=1}^{\infty}\big(1-\lambda\,\lambda_{k}\big)\,e^{\lambda\lambda_{k}}$$

is introduced and is called the Carleman determinant.

Let us proceed now to the evaluation of integrals with respect to Gaussian measures for particular functionals. First we shall evaluate integrals of functionals which are functions of measurable linear functionals.

Theorem 4.1 : Let $a_{1},\ldots,a_{n}$ be linearly independent elements of $\mathfrak{H}$. Then the following equality holds

$$\int_{X}F\big((a_{1},x),\ldots,(a_{n},x)\big)\;\mathfrak{c}(dx)=(2\pi)^{-n/2}\,\big[\det A\big]^{-1/2}\int_{\mathbb{R}^{n}}F(\mathbf{u})\,\exp\Big\{-\frac{1}{2}\,\big(A^{-1}\mathbf{u},\mathbf{u}\big)\Big\}\,d\mathbf{u} \qquad (5)$$

if any of these integrals exists, where A is the matrix with elements $[A]_{ij}=(a_{i},a_{j})_{\mathfrak{H}}$ ($i,j=1,2,\ldots,n$).

Proof : We shall make use of formula (4) for the transformation of integrals under translation, wherein we set $a=-\lambda\sum_{k=1}^{n}\upsilon_{k}a_{k}$, with λ and $\upsilon_{k}$ real numbers, and $F(x)=1$. Then we have

$$\int_{X}\exp\Big\{\lambda\sum_{k=1}^{n}\upsilon_{k}\,(a_{k},x)\Big\}\;\mathfrak{c}(dx)=\exp\Big\{\frac{\lambda^{2}}{2}\,\Big\|\sum_{k=1}^{n}\upsilon_{k}a_{k}\Big\|_{\mathfrak{H}}^{2}\Big\}=\exp\Big\{\frac{\lambda^{2}}{2}\,\big(A\boldsymbol{\upsilon},\boldsymbol{\upsilon}\big)\Big\}$$

This equality is also valid for $\lambda=-i$. This, together with the following formula

$$\int_{\mathbb{R}^{n}}\exp\Big\{-\frac{1}{2}\,(A\boldsymbol{\upsilon},\boldsymbol{\upsilon})-i\,(\mathbf{u},\boldsymbol{\upsilon})\Big\}\,d\boldsymbol{\upsilon}=(2\pi)^{n/2}\,\big[\det A\big]^{-1/2}\exp\Big\{-\frac{1}{2}\,\big(A^{-1}\mathbf{u},\mathbf{u}\big)\Big\}$$

implies that

$$\int_{X}\exp\Big\{-i\sum_{k=1}^{n}\upsilon_{k}\,(a_{k},x)\Big\}\;\mathfrak{c}(dx)=(2\pi)^{-n/2}\,\big[\det A\big]^{-1/2}\int_{\mathbb{R}^{n}}e^{-i(\mathbf{u},\boldsymbol{\upsilon})}\,\exp\Big\{-\frac{1}{2}\,\big(A^{-1}\mathbf{u},\mathbf{u}\big)\Big\}\,d\mathbf{u}$$

i.e. formula (5) is valid for $F(\mathbf{u})=e^{-i(\mathbf{u},\boldsymbol{\upsilon})}$. Let now the function be specified by its Fourier transform, i.e. $F(\mathbf{u})=\int_{\mathbb{R}^{n}}f(\boldsymbol{\upsilon})\,e^{-i(\mathbf{u},\boldsymbol{\upsilon})}\,d\boldsymbol{\upsilon}$. Let us multiply the previous relation by $f(\boldsymbol{\upsilon})$ and integrate it over the space $\mathbb{R}^{n}$ with respect to the variables $\upsilon_{1},\upsilon_{2},\ldots,\upsilon_{n}$. After changing the order of integration we obtain (5). The validity of the formula for arbitrary functions $F(\mathbf{u})$ is verified by passage to the limit.

Remark 4.2 : For the case of integration with respect to a Gaussian measure with non-zero mean value m we have

$$\int_{X}F\big((a_{1},x),\ldots,(a_{n},x)\big)\;\mathfrak{c}(dx)=\int_{X}F\big((a_{1},x)+(a_{1},m),\ldots,(a_{n},x)+(a_{n},m)\big)\;\bar{\mathfrak{c}}(dx)=$$
$$=(2\pi)^{-n/2}\,\big[\det A\big]^{-1/2}\int_{\mathbb{R}^{n}}F\big(u_{1}+(a_{1},m),\ldots,u_{n}+(a_{n},m)\big)\,\exp\Big\{-\frac{1}{2}\,\big(A^{-1}\mathbf{u},\mathbf{u}\big)\Big\}\,d\mathbf{u}$$


where $\bar{\mathfrak{c}}$ is the Gaussian measure c with zero mean value. Making the suitable translation transformation in the last integral we have

$$\int_{X}F\big((a_{1},x),\ldots,(a_{n},x)\big)\;\mathfrak{c}(dx)=(2\pi)^{-n/2}\,\big[\det A\big]^{-1/2}\int_{\mathbb{R}^{n}}F(\mathbf{u})\,\exp\Big\{-\frac{1}{2}\,\big(A^{-1}\big[\mathbf{u}-\mathbf{a}(m)\big],\big[\mathbf{u}-\mathbf{a}(m)\big]\big)\Big\}\,d\mathbf{u} \qquad (6)$$

where $\mathbf{a}(m)=\big((a_{1},m),\ldots,(a_{n},m)\big)$.

Remark 4.3 : For the case of X a separable real Hilbert space, the matrix A with $[A]_{ij}=(a_{i},a_{j})_{\mathfrak{H}}$ takes the simple form $[A]_{ij}=(C\,a_{i},a_{j})$.

Let us mention two special cases of the obtained formula. For n = 1 we have

∫_X F((a, x)) γ(dx) = (2π)^{−1/2} |a|^{−1} ∫_R exp{ −u²/(2|a|²) } F(u) du,  |a|² = (a, a)

Let a_1, a_2, …, a_n be orthonormal elements of the space H. Then

∫_X F((a_1, x), …, (a_n, x)) γ(dx) = (2π)^{−n/2} ∫_{R^n} exp{ −½(u, u) } F(u) du
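The last special case is easy to test numerically. The following sketch is a hypothetical finite-dimensional stand-in for X (R^d with the standard Gaussian measure, orthonormal directions a_k, and an illustrative test functional F); it checks by Monte Carlo that the vector ((a_1, x), …, (a_n, x)) is standard Gaussian on R^n, which is the content of the formula above:

```python
import numpy as np

# Monte Carlo check of formula (5) in a finite-dimensional stand-in for X:
# for the standard Gaussian measure on R^d and orthonormal a_1,...,a_n,
# ((a_1,x),...,(a_n,x)) is standard Gaussian on R^n, so
#   E[F((a_1,x),...,(a_n,x))] = (2*pi)^(-n/2) * Integral of F(u)*exp(-|u|^2/2).
rng = np.random.default_rng(0)
d, n, N = 5, 2, 200_000

a = np.linalg.qr(rng.standard_normal((d, n)))[0]   # orthonormal columns a_1, a_2
x = rng.standard_normal((N, d))                    # samples from the Gaussian measure
u = x @ a                                          # u_k = (a_k, x)

F = lambda u: u[:, 0] ** 2 + u[:, 1] ** 2          # illustrative test functional
mc = F(u).mean()                                   # left-hand side of (5)
exact = 2.0                                        # E[u1^2 + u2^2] for standard Gaussian u
print(mc, exact)
```

With F(u_1, u_2) = u_1² + u_2² the right-hand side is simply the trace of the identity covariance, i.e. 2.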

Consider now the evaluation of integrals of functionals which are functions of quadratic functionals on X (for example the Tatarskii representation)

(Ax, x) = Σ_{k,j=1}^∞ a_{kj} (e_j, x)(e_k, x)   (7)

where a_{kj} = (Ae_k, e_j); A is a self-adjoint trace-class operator on H; e_k, k = 1, 2, …, is an orthonormal basis in H. The double series in (7) converges for almost all x ∈ X. The functional (Ax, x) does not depend on the choice of the basis e_k and may be written as follows

(Ax, x) = Σ_{k=1}^∞ λ_k (e_k, x)²

where λ_k and e_k are the eigenvalues and the orthonormal eigenvectors of the operator A, respectively. The following integrals are easy to evaluate with the help of (5):

∫_X (Ax, x) γ(dx) = trace A

∫_X (Ax, x)² γ(dx) = [trace A]² + 2 trace A²

Let us prove the equality

∫_X exp{ (λ/2)(Ax, x) } γ(dx) = [D_A(λ)]^{−1/2}


5.4. KERNEL REPRESENTATIONS FOR THE CHARACTERISTIC FUNCTIONAL 173

where D_A(λ) is the Fredholm determinant of the operator A at the point λ; Re λ < λ_1^{−1} (λ_1 is the largest eigenvalue) and

[D_A(λ)]^{−1/2} = |D_A(λ)|^{−1/2} exp{ −(i/2) arg D_A(λ) }

In fact,

∫_X exp{ (λ/2)(Ax, x) } γ(dx) = ∫_X exp{ (λ/2) Σ_{k=1}^∞ λ_k (e_k, x)² } γ(dx) =

= ∏_{k=1}^∞ (2π)^{−1/2} ∫_R exp{ (λλ_k/2)u² − u²/2 } du = ∏_{k=1}^∞ [1 − λλ_k]^{−1/2} = [D_A(λ)]^{−1/2}
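The eigenvalue product just derived can be checked by Monte Carlo in a finite-dimensional stand-in (a hypothetical example: a symmetric matrix A on R^d playing the role of the trace-class operator, with λ chosen below 1/λ_1):

```python
import numpy as np

# Sketch: in R^d with a symmetric stand-in A (eigenvalues lam_k), the identity
#   E[exp(lam/2 * (A x, x))] = prod_k (1 - lam*lam_k)^(-1/2) = [D_A(lam)]^(-1/2)
# is checked against the eigenvalue product (here Re lam < 1/lam_1).
rng = np.random.default_rng(1)
d, N, lam = 4, 400_000, 0.3

Q = np.linalg.qr(rng.standard_normal((d, d)))[0]   # random orthogonal frame
eig = np.array([1.0, 0.5, 0.25, 0.1])              # eigenvalues lam_k of A
A = Q @ np.diag(eig) @ Q.T

x = rng.standard_normal((N, d))                    # Gaussian samples
quad = np.einsum("ij,jk,ik->i", x, A, x)           # (A x, x) for each sample
mc = np.exp(0.5 * lam * quad).mean()
exact = np.prod(1.0 - lam * eig) ** -0.5           # [D_A(lam)]^(-1/2)
print(mc, exact)
```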

We may similarly evaluate an integral of the more general form

∫_X exp{ λ(Ax, x) + b(g, x) } γ(dx) = [D_A(2λ)]^{−1/2} exp{ (b²/2)(B_A(λ)g, g) }   (8)

where g ∈ H, B_A(λ) = (I − 2λA)^{−1}, Re λ < (2λ_1)^{−1} (λ_1 is the largest eigenvalue); b is a numeric parameter; I is the identity operator. Now formula (8) will be exploited in the proof of a more general result.

Theorem 4.4 : Let (A_1x, x), …, (A_nx, x) be quadratic functionals of the form (7) and let g_1, g_2, …, g_m be linearly independent elements of the space H. Then the following equality holds

∫_X F( (A_1x, x), …, (A_nx, x); (g_1, x), …, (g_m, x) ) γ(dx) = ∫_{R^{n+m}} p(u; υ) F(u; υ) du dυ   (9)

(if any of these integrals exists), where

p(u; υ) = (2π)^{−n−m/2} ∫_{R^n} exp{ i(u, ξ) } [D_{A(ξ)}(−2i)]^{−1/2} [det S(ξ)]^{−1/2} exp{ −½(S(ξ)^{−1}υ, υ) } dξ

S(ξ) is a matrix with elements [S(ξ)]_{kj} = ( (I + 2iA(ξ))^{−1} g_k, g_j )_H; [D_{A(ξ)}(λ)]^{−1/2} is the Fredholm determinant of the operator A(ξ) = Σ_{k=1}^n ξ_k A_k at the point λ; u = (u_1, …, u_n) and υ = (υ_1, …, υ_m).

Proof : The proof of this theorem is similar to that of Theorem 4.1: first we prove the validity of formula (9), with the help of formula (8), for functions F(u; υ) which are specified by their Fourier transforms; then we prove the formula for arbitrary functions by passing to the limit.

Remark 4.5 : As we noticed in Remark 4.2, for the case of integration with respect to a Gaussian measure with non-zero mean value m we have

∫_X F( (A_1x, x), …, (A_nx, x); (g_1, x), …, (g_m, x) ) γ(dx) =

= ∫_X F( (A_1(x+m), x+m), …, (A_n(x+m), x+m); (g_1, x+m), …, (g_m, x+m) ) γ_0(dx) =


= ∫_{R^{n+m}} p(u; υ) F̃(u; υ) du dυ

where γ_0 is the Gaussian measure γ with zero mean value, and we have applied Theorem 4.4 to the function

F̃ = F( (A_1x, x) + 2(A_1m, x) + (A_1m, m), …, (A_nx, x) + 2(A_nm, x) + (A_nm, m); (g_1, x) + (g_1, m), …, (g_m, x) + (g_m, m) )

Remark 4.6 : For the case where X is a separable real Hilbert space, the matrix S(ξ) will take the simple form [S(ξ)]_{kj} = ( C(I + 2iA(ξ))^{−1} g_k, g_j ), where C is the covariance operator of the Gaussian measure.

Consider special cases of formula (9). If F(u; υ) does not depend on u, i.e., F(u; υ) = F(υ), then we obtain formula (5). Suppose that F(u; υ) = F(u), i.e., it does not depend on υ; then

∫_X F( (A_1x, x), …, (A_nx, x) ) γ(dx) = ∫_{R^n} ρ(u) F(u) du

where

ρ(u) = (2π)^{−n} ∫_{R^n} exp{ i(u, ξ) } [D_{A(ξ)}(−2i)]^{−1/2} dξ

and [D_{A(ξ)}(−2i)]^{−1/2} is the Fredholm determinant of the operator A(ξ) = Σ_{k=1}^n ξ_k A_k at the point λ = −2i.

Consider still another example:

∫_X exp{ (λ/2)(Ax, x) } F((g, x)) γ(dx) = [D_A(λ)]^{−1/2} (2π b(λ))^{−1/2} ∫_R exp{ −υ²/(2b(λ)) } F(υ) dυ

where b(λ) = ( (I − λA)^{−1} g, g )_H.

b) Integration with respect to Independent Increment Processes

For the completion of the section we also report some useful formulas for the calculation of integrals with respect to measures which correspond to independent-increment processes. Measures of this kind are featured by integrals of functionals of the form

F(x(·)) = f( Δx(t_1), …, Δx(t_n) )

where Δx(t_k) = x(t_k) − x(t_{k−1}). Thus, for the measure which corresponds to the Poisson process with the characteristic functional

Y(x) = exp{ λ ∫_T [ e^{i(x, ρ_t)} − 1 ] dt }

where

ρ_t(u) = 1, if u ∈ [t, T], and 0 otherwise,


the following formula holds

∫_X f( Δx(t_1), …, Δx(t_n) ) γ(dx) = Σ_{k_1,…,k_n=0}^∞ f(k_1, …, k_n) ∏_{i=1}^n [ (λΔt_i)^{k_i} / k_i! ] e^{−λΔt_i}   (10)
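A minimal numerical sketch of (10), with hypothetical values for the intensity λ, the increment lengths Δt_i, and an illustrative test function f: the left-hand side is estimated by sampling Poisson increments, the right-hand side by the truncated double sum:

```python
import numpy as np
from math import exp, factorial

# Sketch of formula (10): for the Poisson measure, the integral of
# f(dx(t_1), dx(t_2)) reduces to independent Poisson weights
# (lam*dt_i)^k / k! * exp(-lam*dt_i) on each increment.
lam, dts = 2.0, [0.5, 1.0]                 # hypothetical intensity and dt_1, dt_2
f = lambda k1, k2: k1 + k2 ** 2            # illustrative test function

def pois(k, mu):                           # Poisson weight mu^k/k! * e^(-mu)
    return mu ** k / factorial(k) * exp(-mu)

# right-hand side of (10): truncated double sum over k_1, k_2
rhs = sum(f(k1, k2) * pois(k1, lam * dts[0]) * pois(k2, lam * dts[1])
          for k1 in range(40) for k2 in range(40))

# left-hand side: Monte Carlo over sampled Poisson increments
rng = np.random.default_rng(2)
inc = rng.poisson(lam * np.array(dts), size=(400_000, 2))
lhs = f(inc[:, 0], inc[:, 1]).mean()
print(lhs, rhs)
```

For this f the exact value is λΔt_1 + λΔt_2 + (λΔt_2)² = 7.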

and for the measures generated by the infinite-dimensional Gamma distribution with the characteristic functional

Y(x) = exp{ −σ ∫_T ln[ 1 − i(x, ρ_t) ] dt }

where

ρ_t(u) = 1, if u ∈ [t, T], and 0 otherwise,

the following formula is valid

∫_X f( Δx(t_1), …, Δx(t_n) ) γ(dx) = ∫_{R_+^n} f(u_1, …, u_n) ∏_{k=1}^n [ u_k^{σΔt_k − 1} e^{−u_k} / Γ(σΔt_k) ] du   (11)

where Γ(y) = ∫_0^∞ u^{y−1} e^{−u} du is Euler's Gamma function and R_+^n is the product of n copies of the positive half-axis. We shall also give an example for the case of the characteristic functional

Y(x) = exp{ −∫_0^1 [ ∫_u^1 x(t) dt ]^{2p} du }   (12)

which is the special case of the characteristic functional of form (22) of Chapter 3/Section 3.9 for T = 1, σ^{2p} = (2p)!, ν(du) = du and ρ_t(u) as we defined it before. Note that (12) may also be written in the form

Y(x) = exp{ −∫_0^1 ⋯ ∫_0^1 min(t_1, …, t_{2p}) x(t_1) ⋯ x(t_{2p}) dt_1 ⋯ dt_{2p} }   (13)

The following formula holds

∫_X f( Δx(t_1), …, Δx(t_n) ) γ(dx) = ∫_{R^n} f(u_1 − u_0, …, u_n − u_{n−1}) ∏_{k=1}^n S(Δt_k, u_k − u_{k−1}) du,  u_0 = 0   (14)

where

S(τ, u) = (1/2π) ∫_R exp{ −τυ^{2p} + iuυ } dυ

is the fundamental solution of the parabolic equation

∂S/∂τ = (−1)^{p+1} ∂^{2p}S/∂u^{2p}

Using (12) we obtain the following formula


∫_X f( ∫_0^1 a_1(τ) x(τ) dτ, …, ∫_0^1 a_n(τ) x(τ) dτ ) γ(dx) =

= (2π)^{−n} ∫_{R^n} ∫_{R^n} f(u) exp{ −i(u, υ) − ∫_0^1 [ Σ_{k=1}^n υ_k ∫_τ^1 a_k(s) ds ]^{2p} dτ } dυ du

where (u, υ) = Σ_{k=1}^n u_k υ_k. The formula is valid under the condition of measurability of f(u_1, …, u_n) and the condition

| f(u_1, …, u_n) | exp{ −ε (u, u)^{p/(2p−1)} } ≤ H(u_1, …, u_n)

for any ε > 0, where

∫_{R^n} H(u_1, …, u_n) du_1 ⋯ du_n < ∞.

For more explicit formulas for the calculation of infinite-dimensional integrals we refer to EGOROV, A.D., SOBOLEVSKY, P.I. & YANOVICH, L.A., Functional Integrals: Approximate Evaluation and Applications.


5.4.2. Integrals of Variations and of Derivatives of Functionals. Let the functional F(x) be differentiable along the direction a ∈ H at every point x ∈ X and let

sup_{|λ|<ε} | (d/dλ) F(x + λa) |

be summable for some ε > 0. Then the following relation holds

∫_X δF(x)[a] γ(dx) = ∫_X F(x) (a, x) γ(dx)

The latter relation follows from the identity

∫_X F(x + λa) γ(dx) = exp{ −(λ²/2)(a, a) } ∫_X F(x) exp{ λ(a, x) } γ(dx)

(which is a modification of formula (4)), if we differentiate it with respect to λ and set λ = 0. Moreover, the above condition enables differentiation under the integral sign.
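In a finite-dimensional stand-in (standard Gaussian measure on R^d, an illustrative functional F(x) = sin((c, x)) with hypothetical directions a and c), the relation ∫ δF(x)[a] γ(dx) = ∫ F(x)(a, x) γ(dx) can be checked directly by Monte Carlo:

```python
import numpy as np

# Finite-dimensional sketch of  E[dF(x)[a]] = E[F(x)*(a,x)]  for the standard
# Gaussian measure: with F(x) = sin((c,x)), the directional derivative along a
# is cos((c,x))*(c,a).
rng = np.random.default_rng(3)
d, N = 3, 400_000
a = np.array([0.5, -1.0, 2.0])             # hypothetical direction a
c = np.array([1.0, 0.3, -0.7])             # hypothetical direction c

x = rng.standard_normal((N, d))
s = x @ c
lhs = (np.cos(s) * (c @ a)).mean()         # E[ dF(x)[a] ]
rhs = (np.sin(s) * (x @ a)).mean()         # E[ F(x)*(a,x) ]
print(lhs, rhs)
```

Both sides equal (c, a)·e^{−(c,c)/2}, which is the Gaussian integration-by-parts (Stein) identity in R^d.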

The functional Hermite polynomials may be defined by the equality

H_n(x)[a_1, a_2, …, a_n] = ∂^n/(∂λ_1 ∂λ_2 ⋯ ∂λ_n) G_λ[x; a_1, a_2, …, a_n] |_{λ_1 = λ_2 = ⋯ = λ_n = 0}

where

G_λ[x; a_1, a_2, …, a_n] = exp{ −½ Σ_{i,j=1}^n λ_i λ_j (a_i, a_j) + Σ_{i=1}^n λ_i (a_i, x) }

a_i ∈ H; the functional G_λ[x; a_1, a_2, …, a_n] is defined for almost all x ∈ X. The following recurrence relation holds for the functional Hermite polynomials:

H_n(x)[a_1, …, a_n] = (a_n, x) H_{n−1}(x)[a_1, …, a_{n−1}] − Σ_{i=1}^{n−1} (a_i, a_n) H_{n−2}(x)[a_1, …, â_i, …, a_{n−1}]

(here the "hat" over a_i means that a_i must be omitted); it is verified by an immediate calculation with the help of the above-mentioned definition of the functional Hermite polynomials. Their explicit form is as follows:

H_n(x)[a_1, …, a_n] = Σ_{s=0}^{⌊n/2⌋} (−1)^s Σ′ (a_{j_1}, a_{j_2}) ⋯ (a_{j_{2s−1}}, a_{j_{2s}}) ∏_{i ∉ {j_1,…,j_{2s}}} (a_i, x)

where the inner sum Σ′ extends over all ways of choosing s disjoint pairs {j_1, j_2}, …, {j_{2s−1}, j_{2s}} from the index set {1, …, n},

and (2s)!! = 2·4⋯(2s), ⌊·⌋ denoting the floor function. The first three Hermite polynomials are

H_1(x)[a] = (a, x),

H_2(x)[a_1, a_2] = (a_1, x)(a_2, x) − (a_1, a_2),


H_3(x)[a_1, a_2, a_3] = (a_1, x)(a_2, x)(a_3, x) − (a_1, a_2)(a_3, x) − (a_1, a_3)(a_2, x) − (a_2, a_3)(a_1, x).
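The recurrence relation can be implemented directly. The following sketch (a hypothetical finite-dimensional setting: R^d with the Euclidean inner product and illustrative directions a_i) builds H_n from the recurrence and compares H_3 against the explicit third-order expression above:

```python
import numpy as np

# Functional Hermite polynomials in R^d via the recurrence:
#   H_n = (a_n, x) H_{n-1}[a_1..a_{n-1}] - sum_i (a_i, a_n) H_{n-2}[.., a_i omitted]
def hermite(x, dirs):
    n = len(dirs)
    if n == 0:
        return 1.0
    if n == 1:
        return dirs[0] @ x
    head, last = dirs[:-1], dirs[-1]
    val = (last @ x) * hermite(x, head)
    for i in range(n - 1):
        omitted = head[:i] + head[i + 1:]          # direction a_i omitted ("hat")
        val -= (head[i] @ last) * hermite(x, omitted)
    return val

x = np.array([0.3, -1.2, 0.7])
a1, a2, a3 = (np.array(v) for v in ([1.0, 0, 0], [0.5, 1.0, 0], [0, 0.2, 1.0]))

h3 = hermite(x, [a1, a2, a3])
# explicit third-order formula from the text
explicit = ((a1 @ x) * (a2 @ x) * (a3 @ x)
            - (a1 @ a2) * (a3 @ x) - (a1 @ a3) * (a2 @ x) - (a2 @ a3) * (a1 @ x))
print(h3, explicit)
```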

Note also the special case a_1 = a_2 = ⋯ = a_n = a:

H_n(x; a) = H_n(x)[a, a, …, a] = Σ_{j=0}^{⌊n/2⌋} [ (−1)^j n! / ( (2j)!! (n − 2j)! ) ] (a, a)^j (a, x)^{n−2j}

The following formula holds

∫_X δ^n F(x)[a_1, a_2, …, a_n] γ(dx) = ∫_X F(x) H_n(x)[a_1, a_2, …, a_n] γ(dx)

provided that these integrals exist and

sup_{|λ|<ε} | ∂^n/(∂λ_1 ⋯ ∂λ_n) F( x + Σ_{i=1}^n λ_i a_i ) |

is summable for some ε > 0. The formula follows immediately from the definition of the n-th order variation, the definition of the Hermite polynomials given at the beginning of the section, and relation (4) or its modification also given at the beginning of the section.


5.4.3. Kernel Characteristic Functionals. We shall now use the theorems for the analytical calculation of infinite-dimensional integrals presented above. As we mentioned in the introduction of the section, we will use a convex superposition of kernel characteristic functionals, i.e. functionals that correspond to probability measures which are local in the function space (kernel probability measures). These analytical calculations will allow us to obtain a system of differential equations for the parameter functions of the kernel characteristic functionals. First we will give a more precise definition of kernel characteristic functionals and some simple examples. Then we will illustrate the proposed method by applying it to a first-order equation with cubic nonlinearity, which has been studied in Section 5.3.4 of the present chapter. In the following section we will also present the theorems concerning the approximation of an induced probability measure of finite dimension (density) by a kernel-type representation. Thus, let us introduce a precise definition of kernel characteristic functionals.

Definition 4.7 [(ε, h)-Kernel Characteristic Functionals] : Let H be a linear topological space(1), which will be mainly thought of as a space of functions defined on the time interval I. Let also U(H) denote the σ-field generated by the open subsets of H (the Borel σ-field of H), m ∈ H, h ∈ R_+, and let K_{m,h} be a neighbourhood of m with concentration parameter (e.g. radius, in the case of metric spaces) h. [For example, K_{m,h} can have the form K_{m,h} = { u ∈ H : ‖u − m‖ ≤ h } if an appropriate norm is available.]

Given ε > 0, we shall say that a characteristic functional Y(u; m, h) is an (ε, h)-kernel characteristic functional(2) iff the associated probability measure γ on U(H) is concentrated around the element m ∈ H in the sense that

γ(K_{m,h}) ≥ 1 − ε.

The existence of (ε, h)-kernel ch. functionals, for any given ε > 0 and h > 0 (relatively small), will be established in Section 5.4.3a by means of the construction of a specific example.

Remark 4.8 : To understand the above definition we study the space of real-valued continuous functions defined on the time interval I = [0, T], with the sup norm. Then an arbitrary set of the form K_{m,h} = { u ∈ H : sup_{t∈I} |u(t) − m(t)| ≤ h } will be a set of functions that take values within the shaded region.

(1) Reflexive, sometimes Banach or Hilbert. (2) The terms ε-kernel ch. functional or –simpler– kernel ch. functional will also be used if no confusion is likely to occur.


As h → 0 the above region reduces to the element m. The form of the characteristic functional/probability measure evolves with the parameter h (bandwidth), which adjusts the variance of the probability mass about the element m. Usually the parameter h is taken as the maximum value of the standard deviation, [C(t, t)]^{1/2}, and m as the mean value element of the probability measure.

The notion of (ε, h)-kernel ch. functionals will now be exploited to establish a fundamental result concerning the approximation of any probability measure (at the level of projections onto finite-dimensional subspaces). More precisely, we shall prove that any finite-dimensional projection of a given probability measure γ on U(H) can be approximated by the corresponding projection of an appropriate convex superposition of kernel ch. functionals.

Let γ_H be a given probability measure defined on the measurable space (H, U(H)) and let Π_L be a projection of H onto the finite-dimensional subspace L. We then have the projected measure (see Definition 1.6/Chapter 3)

Π_L γ_H (B) = γ_H( Π_L^{−1}[B] ) = γ_H( { x ∈ H : Π_L x ∈ B } ), B ∈ U(L)

which is a probability measure defined on the finite-dimensional subspace L. Let also the set of all finite convex linear combinations of the form

γ_n(·) = Σ_{k=1}^n p_k γ(· ; m_k, h_k), with p_k ≥ 0, k = 1, …, n, and Σ_{k=1}^n p_k = 1   (15)

where every γ(· ; m_k, h_k), k = 1, …, n, is a kernel probability measure. Similarly we have the projection of the kernel measure

Π_L γ_n (B) = γ_n( Π_L^{−1}[B] ) = γ_n( { x ∈ H : Π_L x ∈ B } ), B ∈ U(L).

We then have the following

Theorem 4.9 : For every projection Π_L from H onto the finite-dimensional subspace L, the set of measures Π_L γ_n (B) is dense in the set of all continuous Π_L γ with the same support.

Proof : See ATHANASSOULIS, G.A. & GAVRILIADIS, P.N., 2002, The truncated Hausdorff moment problem solved by using kernel density functions. Probabilistic Engineering Mechanics, 17, 273–291.

[Figure: the band K_{m,h} = { u ∈ H : sup_{t∈I} |u(t) − m(t)| ≤ h } of half-width h around the mean element m(t), for t ∈ [0, T].]


Remark 4.10 : In other words, for every measure γ_H(·) which has a continuous projection Π_L γ onto the finite-dimensional subspace L and for every ε > 0, given a kernel measure γ_k there exists a finite set of mean value elements m_k ∈ supp[γ_H] and real numbers h_k and p_k (k = 1, …, n) satisfying conditions (15) such that

| Π_L γ (B) − Π_L γ_n (B) | < ε, B ∈ U(L)   (16)
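A one-dimensional sketch of this approximation property (a hypothetical setup: a bimodal target measure, Gaussian kernels with centres m_k sampled from its support, and a fixed bandwidth h playing the role of the kernel parameters):

```python
import numpy as np
from math import erf, sqrt

# Sketch of Remark 4.10 in one dimension: a bimodal target measure is
# approximated by a convex superposition of n Gaussian kernels with equal
# weights 1/n, centres m_k drawn from the target, and bandwidth h.
rng = np.random.default_rng(4)
n, h = 400, 0.25                                   # number of kernels, bandwidth

m = np.where(rng.random(n) < 0.5,                  # kernel centres m_k ~ target
             rng.normal(-2.0, 0.5, n), rng.normal(2.0, 0.5, n))

def mixture_cdf(b):                                # (1/n) sum_k Phi((b - m_k)/h)
    z = (b - m) / (h * sqrt(2.0))
    return np.mean([(1.0 + erf(t)) / 2.0 for t in z])

def target_cdf(b):                                 # exact CDF of the bimodal target
    return 0.5 * ((1 + erf((b + 2.0) / (0.5 * sqrt(2)))) / 2
                  + (1 + erf((b - 2.0) / (0.5 * sqrt(2)))) / 2)

err = max(abs(mixture_cdf(b) - target_cdf(b)) for b in np.linspace(-4, 4, 33))
print(err)
```

The sup-distance between the two distribution functions shrinks as n grows and h is tuned down, which is the content of (16) for intervals B.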

Remark 4.11 : By assuming the necessary continuity conditions we can obtain the corresponding results for the characteristic functional.

Remark 4.12 : We must note that the kernel property of a probability measure passes along to all of its projections onto finite-dimensional subspaces, in the sense that for every projection Π_L the probability measure Π_L γ has the kernel property, i.e.,

Π_L γ ( Π_L K_{m,h} ) = γ( { x ∈ H : Π_L x ∈ Π_L K_{m,h} } ) ≥ γ( K_{m,h} ) ≥ 1 − ε.


5.4.3a Gaussian Kernel Characteristic Functionals. The most immediate application of the above definition is the case of Gaussian characteristic functionals. Functionals of this type have the advantage of an explicit form for both the characteristic functional and the probability density functionals. Suppose we are working in a suitable space of functions defined on the time interval I = [0, T]. Then the following theorem, due to D.G. Konstant and V.I. Piterbarg (Extreme values of the cyclostationary Gaussian random process), gives a set of sufficient conditions for a Gaussian probability measure to be of kernel type with respect to the sup norm.

Theorem 4.13 : Let X(t; β), t ∈ I ⊆ R, be a Gaussian random process which is differentiable in mean square, with (differentiable) mean-value function m(·) : I → R, and set d_t² = E[ (X′(t; β) − m′(t))² ] > 0. The covariance function C(·,·) : I×I → R satisfies the strict inequality C(t, s) < σ(t)σ(s), t ≠ s, and the variance σ²(t) = C(t, t) reaches its maximum σ₀² = σ²(t₀) at a unique point t₀ ∈ I. Moreover, we assume that there exist two constants A > 0, b > 0, defining the local structure of σ(t) in the vicinity of t₀, such that

σ(t) = σ₀ − A |t − t₀|^b + o(|t − t₀|^b), t → t₀

Then b ≥ 2 (because of the differentiability), and

1. If b > 2, then

γ_H( { u : max_{t∈I} |u(t) − m(t)| > h } ) = (d_{t₀}/(π σ₀)) Γ(1 + 1/b) (σ₀/A)^{1/b} (h/σ₀)^{−2/b} exp{ −h²/(2σ₀²) } [1 + o(1)], h → ∞

2. If b = 2, then

γ_H( { u : max_{t∈I} |u(t) − m(t)| > h } ) = (2/π)^{1/2} (σ₀/h) ( 1 + d_{t₀}²/(2σ₀²A) )^{1/2} exp{ −h²/(2σ₀²) } [1 + o(1)], h → ∞

where d_{t₀}² = E[ (X′(t₀; β) − m′(t₀))² ] > 0.

Proof : For the proof we refer to the paper of KONSTANT, D.G. & PITERBARG, V.I., 1993, Extreme values of the cyclostationary Gaussian random process. Journal of Applied Probability.

Remark 4.14 : Note that the condition of differentiability is crucial. For example, if we consider any stochastic process with no memory at all (see Chapter 2, Section 2.2.1.a), then the probability of the maximum will be 0 or 1 (Kolmogorov's 0–1 law). To illustrate the above, consider a white-noise process with constant intensity D. Then for the maximum over n time instants we have


γ_H( { u : max_{t_1,…,t_n} u(t_k) ≤ h } ) = ∫_{−∞}^h ⋯ ∫_{−∞}^h (2πD)^{−n/2} exp{ −Σ_{k=1}^n u_k²/(2D) } du_1 ⋯ du_n =

= [ (2πD)^{−1/2} ∫_{−∞}^h exp{ −u²/(2D) } du ]^n → 0, as n → ∞.
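The collapse of this product can be illustrated numerically (with hypothetical values for the intensity D, the level h, and the number of instants n):

```python
from math import erf, sqrt

# Numeric illustration of the Remark: for discrete-time white noise with
# intensity D, P(max over n instants <= h) = Phi(h/sqrt(D))^n -> 0 as n grows.
D, h = 1.0, 3.0
phi = (1 + erf(h / sqrt(2 * D))) / 2      # one-instant probability, about 0.99865
probs = [phi ** n for n in (10, 1_000, 100_000)]
print(probs)
```

Even for a high level h the product decays geometrically in n, so in the memoryless limit the maximum exceeds any fixed level with probability one.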

Hence, for the case of a Gaussian measure as described in Theorem 4.13, we will have the description by the family of one-dimensional distributions

γ(x; a) = γ( { y ∈ H : (y, x) ≤ a } ) = (2π (Cx, x))^{−1/2} ∫_{−∞}^a exp{ −(t − (m, x))²/(2(Cx, x)) } dt, x ∈ H, a ∈ R

Setting in the above x(·) = δ(· − t) we have the probability of the values at a specific time t,

γ( { y ∈ H : y(t) ≤ a } ) = (2π C(t, t))^{−1/2} ∫_{−∞}^a exp{ −(τ − m(t))²/(2C(t, t)) } dτ, a ∈ R, t ∈ I

Thus we see that the probability mass is concentrated on functions that have a small distance from the mean value element m : I → R. By 'small distance' we mean that sup_{t∈I} (x(t) − m(t))² ≤ h². Moreover, we see that for h → 0 the probability is concentrated on the mean value element.

For a convex superposition the probability mass is shared between the supports of the Gaussian measures. More precisely, consider the convex superposition

γ_G = ½ γ_{G1} + ½ γ_{G2}

with the Gaussian measures γ_{G1}, γ_{G2} described by the parameters m_1, m_2 and C_1, C_2 respectively. This kind of description is very essential for the representation of a nonlinear dynamical system's response.

[Figure: two mean elements m_1(t), m_2(t) on [0, T], each surrounded by a band of half-width max_{t∈I} [C_1(t, t)]^{1/2} and max_{t∈I} [C_2(t, t)]^{1/2}, respectively.]


We must note that for the case described above we have more precise information for the description of the probability mass. This is because we have the exact variation (through the standard deviation [C(t, t)]^{1/2}) from the mean element m for each time instant t. In other words we know that, given a covariance function with diagonal C(t, t) and the mean value element m(t), the probability mass will be concentrated in the functional set

K = { u ∈ H : |u(t) − m(t)| ≤ [C(t, t)]^{1/2} }

The figure below illustrates graphically the above assertion for a given C(t, t) and m(t).

Another interesting subject connected with the above discussion is the characterization of the derivative of the stochastic process. In Chapter 2 we saw that having the characteristic functional of a stochastic process gives us the opportunity to obtain statistical information for the derivatives of the stochastic process. Now we are going to discuss how we can get qualitative results for the probability mass of the derivative of the stochastic process from the behaviour of the correlation operator. We must say that the same arguments hold for a general stochastic process, so we can make the desired generalizations. Now, the correlation function of a mean-square continuous stochastic process can be expanded into a biorthogonal series of the form (Karhunen–Loève expansion)

C(t, s) = Σ_{i=1}^∞ λ_i f_i(t) f_i(s)   (17)

where λ_i, f_i(t), i = 1, 2, …, are the eigenvalues and eigenfunctions of the correlation operator

(Cx, x) = ∫∫_{I×I} C(t, s) x(t) x(s) dt ds
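A numerical sketch of the expansion (17) (a hypothetical example: the Brownian-motion kernel C(t, s) = min(t, s) on [0, 1], whose eigenvalues λ_i = 1/((i − ½)²π²) are known in closed form), obtained by discretising the correlation operator on a grid:

```python
import numpy as np

# Karhunen-Loeve sketch for C(t,s) = min(t,s) on [0,1]: discretise the
# correlation operator with midpoint quadrature and compare its leading
# eigenvalues with the analytic ones  lam_i = 1 / ((i - 1/2)^2 * pi^2).
n = 500
t = (np.arange(n) + 0.5) / n                       # midpoint grid on [0, 1]
C = np.minimum.outer(t, t)                         # kernel matrix C(t_i, t_j)
lam = np.sort(np.linalg.eigvalsh(C / n))[::-1]     # eigenvalues of the operator

analytic = 1.0 / (((np.arange(1, 4) - 0.5) * np.pi) ** 2)
print(lam[:3], analytic)
```

The factor 1/n is the quadrature weight that turns the kernel matrix into a discretisation of the integral operator.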

[Figure: a given diagonal C(t, t) and the corresponding band around m(t) containing the probability mass, for t ∈ [0, T].]

Using the results of Chapter 3 we can obtain the distribution of the values of the derivative of the stochastic process


P_{X′(t)}(a) = γ( { y ∈ H : y′(t) ≤ a } ) = γ( −δ′(· − t); a )

Hence

γ( { y ∈ H : y′(t) ≤ a } ) = ( 2π ∂²C(t,s)/∂t∂s |_{s=t} )^{−1/2} ∫_{−∞}^a exp{ −(τ − m′(t))² / ( 2 ∂²C(t,s)/∂t∂s |_{s=t} ) } dτ, a ∈ R, t ∈ I

Thus we must investigate the behaviour of the function ∂²C(t,s)/∂t∂s |_{s=t} or, using (17), the behaviour of the function

∂²C(t,s)/∂t∂s |_{s=t} = Σ_{i=1}^∞ λ_i [ f_i′(t) ]²

Hence we can conclude that when the diagonal of the correlation function C(t, t) changes slowly with time, the probability mass for the derivative of the stochastic process will concentrate very close to the derivative of the mean value element. Conversely, if the diagonal of the correlation function C(t, t) changes fast with time, even if it decreases, the probability mass will spread widely around the derivative of the mean value element.

The above results, although easy to derive, give us very important physical intuition for the distribution of the probability mass in the infinite-dimensional space of functions. Moreover, based on the analysis presented for the simple case of Gaussian measures, one can produce valuable results for general functionals (for example the Gamma characteristic functional) or for the responses of dynamical systems. Hence, for responses having the form of a convex superposition of Gaussian functionals, which will be used in the sequel, we can immediately have a picture of the probability measure.

[Figure: a slowly varying diagonal C(t, t) with x′(t) concentrated near m′(t), versus a rapidly varying C(t, t) with x′(t) spread widely around m′(t), for t ∈ [0, T].]


5.4.4. Superposition of Kernel Characteristic Functionals for the Numerical Solution of FDEs. In the present section we will apply the representation described above to the numerical solution of functional differential equations. Theorem 4.9, in combination with the linearity of the functional differential equations, is the basic reason for the use of such representations in the numerical solution of stochastic differential equations. In what follows we will see how we can get equations describing efficiently the evolution of the unknown parameters. Now let us describe in more detail the representation of the joint characteristic functional that we will use in the sequel. We assume that

a1. The random excitation is described by a stochastic process y(·;·) : I×Ω → R^M defined on the probability space (Ω, U(Ω), γ), with the associated probability space (C_c^{∞,M}(I), U(C_c^{∞,M}), γ_y), and with characteristic functional Y_y.

a2. The response is described by the stochastic process x(·; y(·;·)) : I×Ω → R^N, defined on the probability space (Ω, U(Ω), γ), with the associated probability space (C_c^{∞,N} × C_c^{∞,M}, U(C_c^{∞,N} × C_c^{∞,M}), γ_xy), and with the joint characteristic functional Y_xy.

a3. The initial state of the system is described by a random variable x(0;·) : Ω → R^N with the associated probability space (R^N, U(R^N), γ_{x0}) and characteristic function Y_0. (We have assumed that, for the initial state, the random variables x(0;·) : Ω → R^N and y(0;·) : Ω → R^M are independent.)

Consider the representation (15) of the joint probability measure using kernel probability measures with fixed real numbers p_k = 1/n, k = 1, …, n:

γ_n^{xy}(x, y) = (1/n) Σ_{k=1}^n γ_xy(x, y; m_x^k, m_y^k, h_x^k, h_y^k), x ∈ C_c^{∞,N}, y ∈ C_c^{∞,M}   (18)

with γ_xy(x, y; m_x^k, m_y^k, h_x^k, h_y^k) being a kernel probability measure (Definition 4.7) and m_x^k ∈ C_c^{∞,N}, m_y^k ∈ C_c^{∞,M}, h_x^k ∈ R^N, h_y^k ∈ R^M for all k = 1, …, n. The characteristic functional corresponding to the above probability measure is

Y_n^{xy}(u, v) = (1/n) Σ_{k=1}^n Y_xy(u, v; m_x^k, m_y^k, h_x^k, h_y^k), u ∈ C_c^{∞,N}, v ∈ C_c^{∞,M}   (19)

with Y_xy being the characteristic functional of the kernel probability measure γ_xy.
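A finite-dimensional sketch of the superposition (19) (a hypothetical setup: Gaussian kernel characteristic functions on R^d with means m_k and scalar bandwidths h_k, equal weights 1/n), checked against a Monte Carlo estimate of E[exp{i(u, X)}] for the mixture:

```python
import numpy as np

# Finite-dimensional sketch of (19): a convex superposition of Gaussian kernel
# characteristic functions  Y_k(u) = exp(i(m_k, u) - h_k^2 |u|^2 / 2), with
# equal weights 1/n, evaluated against samples drawn from the mixture.
rng = np.random.default_rng(5)
n, d = 3, 2
m = rng.normal(0.0, 1.0, (n, d))                   # hypothetical kernel means m_k
h = np.array([0.3, 0.5, 0.4])                      # hypothetical bandwidths h_k

def Y_mix(u):                                      # superposition (19)
    return np.mean([np.exp(1j * (m[k] @ u) - 0.5 * h[k] ** 2 * (u @ u))
                    for k in range(n)])

# samples from the mixture: pick a kernel, then draw from N(m_k, h_k^2 I)
k = rng.integers(0, n, 200_000)
X = m[k] + h[k, None] * rng.standard_normal((200_000, d))

u = np.array([0.7, -0.4])
mc = np.exp(1j * X @ u).mean()                     # Monte Carlo E[exp(i(u,X))]
print(Y_mix(u), mc)
```

The convexity of the weights is what makes the superposition itself a valid characteristic function.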

Remark 4.15 : Note that the parameters m_x^k, m_y^k, h_x^k, h_y^k do not necessarily define the kernel probability measures uniquely. In other words, those parameters express the kernel property of the probability measure γ_xy and they result from the specific parameters of the kernel representation. For example, in the case where γ_xy is a Gaussian kernel measure, m_x^k, m_y^k will be the mean value elements and h_x^k, h_y^k will be related to the correlation operator (more specifically, to the variance vector). Hence, in the sequel we emphasize that, when we refer to the determination of the unknown parameters m_x^k, m_y^k, h_x^k, h_y^k, we mean the determination of the unknown parameters that define the kernel probability measure uniquely (for the Gaussian measure, those will be the mean value element and the correlation operator), from which the above quantities result.

The characteristic functional (19) must satisfy the functional differential equation governing the dynamical system, as well as conditions a1 and a3 described above. Hence the characteristic functional must verify the following conditions.

b1. Related to condition a1 for the characteristic functional of the excitation, we must have

Y_n^{xy}(0, v) = Y_n^{y}(v) = (1/n) Σ_{k=1}^n Y_y(v; m_y^k, h_y^k) = Y_y(v), v ∈ C_c^{∞,M}   (20)

To verify the last condition we can solve the approximation problem defined by equation (20) and determine the unknown parameters m_y^k ∈ C_c^{∞,M}, h_y^k ∈ R^M for all k = 1, …, n. Alternatively we can set

Y_y(v; m_y^k, h_y^k) = Y_y(v), v ∈ C_c^{∞,M}, k = 1, …, n   (21)

In the latter case we must have a suitable kernel characteristic functional such that (21) holds. This can generally be achieved when the excitation process is described by a characteristic functional with an explicit form (for example a Gaussian or Gamma characteristic functional). If this is not the case, we must determine the suitable parameters for the kernel characteristic functionals; Theorem 4.9 then guarantees that equation (20) can be approximately satisfied.

b2. Related to condition a3 for the initial state of the system, we must have

Y_n^{xy}( υ δ(· − t_0), 0 ) = Y_n^{x}( υ δ(· − t_0) ) = (1/n) Σ_{k=1}^n Y_x( υ δ(· − t_0); m_x^k, h_x^k ) = Y_0(υ), υ ∈ R^N   (22)

As with condition b1, this condition will be satisfied by solving equation (22). The resulting values of the parameters will constitute the initial conditions for the set of differential equations that will be formulated using the functional differential equation. Note that, for the special case where the initial states of the excitation and the response depend on each other, the condition described above takes the form

Y_n^{xy}( υ δ(· − t_0), ν δ(· − t_0) ) =

= (1/n) Σ_{k=1}^n Y_xy( υ δ(· − t_0), ν δ(· − t_0); m_x^k, m_y^k, h_x^k, h_y^k ) = Y_0(υ, ν), υ ∈ R^N, ν ∈ R^M   (23)

where $\mathcal{Y}_{0}(\upsilon,\nu)$ will be the joint characteristic function of the random variable $\bigl(x(0;\cdot),\,y(0;\cdot)\bigr):\Omega\to\mathbb{R}^{N}\times\mathbb{R}^{M}$ describing the initial state of the system.


CHAPTER 5: PROBABILISTIC ANALYSIS OF STOCHASTIC DYNAMICAL SYSTEMS

Equations governing the parameters of the kernel characteristic functionals

We have determined the essential conditions on the parameters of the representation (19) so that conditions a1 and a3 are verified. We shall now derive, from the functional differential equation, an equivalent set of differential equations for the undetermined parameters of the kernel characteristic functionals. This set of equations will be formulated using a Galerkin approach for the probability measures, by direct application of kernel probability measures. In other words, if we could formally write the linear operator equation for the probability measure of the solution as

$$\mathcal{L}\!\left[\mathfrak{c}_{xy}(x,y)\right]=0,\qquad\text{for every }x\in C_{c}^{\infty,N},\ y\in C_{c}^{\infty,M}\tag{24}$$

then, using the kernel representation (18) and integrating equation (24) with respect to every kernel probability measure, we would obtain the equivalent set of equations

$$\sum_{k=1}^{n}\ \iint\limits_{C_{c}^{\infty,N}\times C_{c}^{\infty,M}}\mathcal{L}\!\left[\mathfrak{c}^{\,k}_{xy}(x,y)\right]\,\mathfrak{c}^{\,j}_{xy}(dx,dy)=0,\qquad j=1,\dots,n\tag{25}$$

Alternatively, we could multiply by every kernel probability measure instead of integrating. As a result we would obtain the set of equations

$$\sum_{k=1}^{n}\mathcal{L}\!\left[\mathfrak{c}^{\,k}_{xy}(x,y)\right]\cdot\mathfrak{c}^{\,j}_{xy}(x,y)=0,\qquad\text{for every }x\in C_{c}^{\infty,N},\ y\in C_{c}^{\infty,M},\quad j=1,\dots,n\tag{26}$$

Then we can integrate with respect to every measure and obtain a set of equations for the parameters of the kernel probability measures. The above method seems very promising, but the main problem is that, in general (an important exception being the Fokker-Planck equation), we do not have the operator $\mathcal{L}$ in explicit form. Additionally, it is very difficult to operate with probability measures, since they are set functions. Instead, for a very wide class of dynamical systems we have an explicit form of the operator governing the characteristic functional. Such operators were studied extensively in Section 5.2 of the present chapter. Thus, we can express the dynamics of the system with an operator equation (related to (24), if the latter exists) of the form

$$\mathcal{L}\!\left[\mathcal{Y}_{xy}(u,v)\right]=0,\qquad\text{for every }u\in C_{c}^{\infty,N},\ v\in C_{c}^{\infty,M}\tag{27}$$

Moreover, we have a wide class of explicit formulas for characteristic functionals, as well as theorems for the analytical calculation of infinite-dimensional integrals of such functionals. Hence, it would be very interesting if we could derive, using the characteristic functional approach, a set of equations equivalent to (25) or (26). For simplicity we will first present the subject for the case of a probability measure defined on a one-dimensional space. Such a case arises for a one-dimensional dynamical system under independent-increment excitation.

Hence the operator equation (24) will take the form of a partial differential equation for the probability density function

$$\mathcal{L}\!\left[f\right]\!(x)=0,\qquad x\in\mathbb{R}\tag{28}$$

For the same case we would have a partial differential equation for the characteristic function $\varphi$,

Page 201: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

5.4. KERNEL REPRESENTATIONS FOR THE CHARACTERISTIC FUNCTIONAL 189

$$\mathcal{L}\!\left[\varphi\right]\!(u)=0,\qquad u\in\mathbb{R}\tag{29}$$

Of course the following relation diagram holds,

$$\begin{array}{ccc}
\mathcal{L}\!\left[f\right]\!(x)=0,\ x\in\mathbb{R} & \overset{\mathcal{F}}{\underset{\mathcal{F}^{-1}}{\rightleftharpoons}} & \mathcal{L}\!\left[\varphi\right]\!(u)=0,\ u\in\mathbb{R}\\[8pt]
f(x),\ x\in\mathbb{R} & \overset{\mathcal{F}}{\underset{\mathcal{F}^{-1}}{\rightleftharpoons}} & \varphi(u),\ u\in\mathbb{R}
\end{array}$$

where F denotes the Fourier transform operator. Now, consider the operator

$$\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathcal{L}\!\left[\varphi\right]\!(u-v)\cdot\psi(v)\cdot e^{-\frac{v^{2}}{2\sigma^{2}}}\,dv,\qquad u\in\mathbb{R}\tag{30}$$

where $\varphi,\psi$ are characteristic functions. Moreover, the operator equation

$$\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=0,\qquad u\in\mathbb{R},\ \sigma>0\tag{31}$$

can easily be verified to be equivalent to (29). We will now study the behaviour of the above operator for extreme values of the parameter $\sigma$. For $\sigma\to 0$ we will have

$$\lim_{\sigma\to 0}\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=\lim_{\sigma\to 0}\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathcal{L}\!\left[\varphi\right]\!(u-v)\cdot\psi(v)\cdot e^{-\frac{v^{2}}{2\sigma^{2}}}\,dv=\mathcal{L}\!\left[\varphi\right]\!(u),\qquad u\in\mathbb{R}$$

since the Gaussian weight tends to $\delta(v)$ and $\psi(0)=1$.

Hence,

$$\lim_{\sigma\to 0}\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=0,\ u\in\mathbb{R}\quad\Longleftrightarrow\quad\mathcal{L}\!\left[\varphi\right]\!(u)=0,\ u\in\mathbb{R}$$

Thus, for $\sigma\to 0$ the operator equation (31) interprets equation (24) in terms of the characteristic function. In contrast, for $\sigma\to\infty$ we will have

$$\lim_{\sigma\to\infty}\sigma\cdot\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=\lim_{\sigma\to\infty}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathcal{L}\!\left[\varphi\right]\!(u-v)\cdot\psi(v)\cdot e^{-\frac{v^{2}}{2\sigma^{2}}}\,dv=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathcal{L}\!\left[\varphi\right]\!(u-v)\cdot\psi(v)\,dv,\qquad u\in\mathbb{R}$$

Hence,

$$\lim_{\sigma\to\infty}\sigma\cdot\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=0,\ u\in\mathbb{R}\quad\Longleftrightarrow\quad\left(\mathcal{L}\!\left[\varphi\right]\ast\psi\right)\!(u)=0,\ u\in\mathbb{R}\tag{32}$$

Thus, for $\sigma\to\infty$ we have (when the operator $\mathcal{L}$ on densities exists)

$$\lim_{\sigma\to\infty}\sigma\cdot\mathcal{T}_{\mathcal{L}}\!\left[\varphi,\psi;\sigma\right]\!(u)=0,\ u\in\mathbb{R}\quad\Longleftrightarrow\quad\mathcal{L}\!\left[f\right]\!(x)\cdot g(x)=0,\ x\in\mathbb{R}\tag{33}$$

where $g=\mathcal{F}^{-1}\!\left[\psi\right]$.

Now let us represent the solution of (28) using the kernel representation with kernels $f_{k}$,

$$f(x)=\frac{1}{n}\sum_{k=1}^{n}f_{k}(x)\tag{34}$$

Page 202: NATIONAL TECHNICAL UNIVERSITY OF ATHENSsandlab.mit.edu/Papers/Diploma_thesis_sapsis.pdf · 5.1.1. General Problems of the Theory of Stochastic Differential Equations 119 5.1.2. The

190 CHAPTER 5 PROBABILISTIC ANALYSIS OF STOCHASTIC DYNAMICAL SYSTEMS

Hence, the characteristic function will have the form

$$\varphi(u)=\frac{1}{n}\sum_{k=1}^{n}\varphi_{k}(u)\tag{35}$$
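The one-dimensional correspondence between (34) and (35) can be checked numerically. The sketch below is an illustration only; the kernel means and variances are arbitrary choices, not values taken from the text. It builds a Gaussian-kernel mixture density and verifies that its characteristic function equals the average of the kernel characteristic functions.

```python
import numpy as np

# Arbitrary Gaussian kernels f_k = N(m_k, h_k); illustrative parameters only.
m = np.array([-1.0, 0.0, 1.5])          # kernel means
h = np.array([0.3, 0.2, 0.4])           # kernel variances
u = np.linspace(-5.0, 5.0, 201)

# Kernel characteristic functions phi_k(u) = exp(i m_k u - h_k u^2 / 2),
# averaged as in representation (35).
phi_mix = np.mean(np.exp(1j * m[:, None] * u - 0.5 * h[:, None] * u**2), axis=0)

# Direct Fourier transform of the mixture density (34) on a fine grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.mean([np.exp(-(x - mk) ** 2 / (2.0 * hk)) / np.sqrt(2.0 * np.pi * hk)
             for mk, hk in zip(m, h)], axis=0)
phi_direct = np.sum(f * np.exp(1j * u[:, None] * x), axis=1) * dx

err = np.max(np.abs(phi_mix - phi_direct))
```

The two computations agree to numerical precision, and $\varphi(0)=1$ as required of a characteristic function.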

Substituting the above representation into (33), we obtain the set of equations

$$\lim_{\sigma\to\infty}\sigma\cdot\mathcal{T}_{\mathcal{L}}\!\left[\,\sum_{k=1}^{n}\varphi_{k},\ \varphi_{l};\sigma\right]\!(u)=0,\qquad u\in\mathbb{R},\quad l=1,\dots,n\tag{36}$$

Because of (33), the above set will be equivalent with the set of equations

$$\mathcal{L}\!\left[\,\sum_{k=1}^{n}f_{k}\right]\!(x)\cdot f_{l}(x)=0,\qquad x\in\mathbb{R},\quad l=1,\dots,n\tag{37}$$

Thus we have derived equations (36), which interpret equations (26) or (37) in terms of the characteristic function. We must note that for many kernel probability measures it is not necessary to let $\sigma\to\infty$ to achieve equivalence of (31) with (37). More specifically, for kernel characteristic functions that are effectively concentrated on a bounded set (like those of Gaussian measures), it is sufficient to take a value of $\sigma$ such that $\exp\!\left(-\frac{v_{0}^{2}}{2\sigma^{2}}\right)\approx 1$, where $v_{0}$ is the radius of the (effective) support of the kernel characteristic function, if it exists. Summarizing, we have derived an equivalent reformulation of the characteristic-function differential equation (29), namely equation (31), which has the property of interpreting (for a special choice of the parameter $\sigma$) equation (26) or (37).
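The behaviour of the smoothed operator (30) can be checked numerically with assumed dynamics (an illustration, not part of the thesis): for the linear system $\dot{x}=-kx$ driven by white noise of intensity $D$, the stationary Fokker-Planck equation in characteristic-function form reads $\mathcal{L}[\varphi](u)=ku\varphi'(u)+\tfrac{D}{2}u^{2}\varphi(u)$, with exact solution $\varphi^{*}(u)=\exp(-Du^{2}/(4k))$. The exact $\varphi^{*}$ annihilates the integrand of (30) for every $\sigma$ and every $\psi$, while for small $\sigma$ the smoothed operator recovers $\mathcal{L}[\varphi]$ pointwise, as argued above.

```python
import numpy as np

k_lin, D = 1.0, 1.0              # assumed OU parameters (illustrative)

u = np.linspace(-8.0, 8.0, 1601)
du = u[1] - u[0]

def L_op(phi):
    # Characteristic-function form of the stationary FPK operator:
    # L[phi](u) = k*u*phi'(u) + (D/2)*u^2*phi(u).
    return k_lin * u * np.gradient(phi, du) + 0.5 * D * u**2 * phi

def T_op(phi, psi, sigma):
    # Smoothed operator (30): Gaussian-weighted correlation of L[phi] with psi.
    Lphi = L_op(phi)
    w = np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return np.array([np.sum(np.interp(ui - u, u, Lphi, left=0.0, right=0.0)
                            * psi * w) * du for ui in u])

psi = np.exp(-u**2 / 2.0)                        # an arbitrary kernel char. function
phi_exact = np.exp(-D * u**2 / (4.0 * k_lin))    # exact stationary char. function
residual = np.max(np.abs(T_op(phi_exact, psi, 0.5)))   # ~0 for every sigma

phi_bad = np.exp(-u**2)                          # deliberately wrong candidate
mid = slice(400, 1201)                           # stay away from the grid edges
recovery = np.max(np.abs(T_op(phi_bad, psi, 0.05)[mid] - L_op(phi_bad)[mid]))
```

The residual for the exact characteristic function is at the level of the finite-difference error, while for small $\sigma$ the operator value approaches $\mathcal{L}[\varphi]$ up to an $O(\sigma^{2})$ correction.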

Motivated by the above discussion, we now generalize to the case where no explicit formula for the operator $\mathcal{L}$ exists. Consider the general case of a system as described at the beginning of the present section (conditions a1, a2, a3) and let the associated functional differential equation be described by the operator

$$\mathcal{L}\!\left[\mathcal{Y}_{xy}(u,v)\right]=0,\qquad u\in C_{c}^{\infty,N},\ v\in C_{c}^{\infty,M}\tag{38}$$

Then, based on the analysis for the finite dimensional case described above we consider the operator

$$\mathcal{T}_{\mathcal{L}}\!\left[\mathcal{Y},\mathcal{Z};C\right]\!(u,v)=\iint\limits_{C_{c}^{\infty,N}\times C_{c}^{\infty,M}}\mathcal{L}\!\left[\mathcal{Y}\right]\!(u-w,\,v-z)\cdot\mathcal{Z}(w,z)\;\mathfrak{c}_{G}(dw,dz)\tag{39}$$

for every $u\in C_{c}^{\infty,N}$, $v\in C_{c}^{\infty,M}$, where $\mathfrak{c}_{G}$ denotes a Gaussian measure on the measurable space $\left(C_{c}^{\infty,N}\times C_{c}^{\infty,M},\,\mathcal{U}\!\left(C_{c}^{\infty,N}\times C_{c}^{\infty,M}\right)\right)$ with correlation operator $C$, centred at zero. When the norm of the correlation operator $C$ tends to zero, equation (39) will formally interpret equation (24); when the norm of $C$ is sufficiently large, equation (39) will formally tend to interpret the equation

$$\mathcal{L}\!\left[\mathfrak{c}_{xy}(x,y)\right]\cdot\mathfrak{d}(x,y)=0,\qquad\text{for every }x\in C_{c}^{\infty,N},\ y\in C_{c}^{\infty,M}\tag{40}$$

where $\mathfrak{d}$ is the measure associated with the characteristic functional $\mathcal{Z}$. Now consider the representation (19) satisfying conditions b1 and b2. Applying it to the operator equation


$$\mathcal{T}_{\mathcal{L}}\!\left[\mathcal{Y},\mathcal{Z};C\right]\!(u,v)=0,\qquad\text{for every }u\in C_{c}^{\infty,N},\ v\in C_{c}^{\infty,M}\tag{41}$$

and setting $\mathcal{Z}=\mathcal{Y}^{k}_{xy}$ (3) for $k=1,\dots,n$, we will have the set of equations

$$\sum_{j=1}^{n}\mathcal{T}_{\mathcal{L}}\!\left[\mathcal{Y}^{j}_{xy},\mathcal{Y}^{k}_{xy};C\right]\!(u,v)=0,\qquad u\in C_{c}^{\infty,N},\ v\in C_{c}^{\infty,M},\quad k=1,\dots,n\tag{42}$$

We emphasize that all the infinite-dimensional integrations described above can be carried out analytically using the results of Section 5.4.1 of the present chapter. Hence we have achieved an interpretation of equation (26) in terms of the characteristic functional. We must emphasize that in the above discussion we do not prove anything about the equivalence between equations (40) and (41), or between (42) and (26); we only argue, using finite-dimensional analogues, that an efficient way to produce equations from the characteristic functional equation is by using the operator (39). In this way we achieve localization in the probability-measure domain. Comments analogous to those for the finite-dimensional case can be made concerning the correlation operator $C$. Thus, if we choose the correlation operator such that the rate of decay of probability for the elements $w\in C_{c}^{\infty,N}$, $z\in C_{c}^{\infty,M}$ is smaller than the rate of decay of the values of the characteristic functionals $\mathcal{L}\!\left[\mathcal{Y}\right]\!(u-w,v-z)\cdot\mathcal{Z}(w,z)$ for every $u\in C_{c}^{\infty,N}$, $v\in C_{c}^{\infty,M}$, then we achieve the desired result. Finally, since equations (42) express the desired properties for the probability measures, we can either integrate them analytically with respect to a Gaussian measure over $u\in C_{c}^{\infty,N}$, $v\in C_{c}^{\infty,M}$ (again choosing a zero-centred Gaussian measure with a suitable correlation operator, as before), or we can take moment equations for every equation (42) using the results of Section 5.3.2 of the present chapter. We must also note that another approach for deriving a simpler set of equations from the functional differential equation is based on the direct integration of equation (38) with respect to various probability measures. It should be emphasized that, since we relate characteristic functionals with probability measures, the choice of the probability measures must be made very carefully if we want to maintain all the necessary information included in equation (38). This approach will not be studied in the present work. Let us now illustrate the above discussion by studying a specific system.

Example 4.16: To describe the proposed method we will study the specific case of a first-order stochastic differential equation with cubic nonlinearity, i.e. the system

$$\dot{x}(t;\beta)+a\,x^{3}(t;\beta)=y(t;\beta)\quad\text{a.s.}\tag{43}$$

$$x(t_{0};\beta)=x_{0}(\beta)\quad\text{a.s.}\tag{44}$$

All the involved functions are assumed to be real valued. We have seen that the associated functional differential equation has the form

(3) The notation $\mathcal{Y}^{k}_{xy}$ for $\mathcal{Y}_{xy}\!\left(u,v;m_{x}^{k},m_{y}^{k},h_{x}^{k},h_{y}^{k}\right)$ will also be used if no confusion is likely to occur.


$$\frac{d}{dt}\!\left[\frac{\delta\,\mathcal{Y}_{xy}(u,v)}{\delta u(t)}\right]-a\,\frac{\delta^{3}\,\mathcal{Y}_{xy}(u,v)}{\delta u(t)^{3}}=\frac{\delta\,\mathcal{Y}_{xy}(u,v)}{\delta v(t)}\tag{45}$$

$$\mathcal{Y}_{xy}\!\left(\upsilon\,\delta_{t_{0}},0\right)=\mathcal{Y}_{0}(\upsilon),\qquad\upsilon\in\mathbb{R}\tag{46}$$

where we have assumed that the same conditions a1, a2, a3 mentioned before are fulfilled. Consider the representation of the joint characteristic functional of the response and the excitation

$$\mathcal{Y}^{n}_{xy}(u,v)=\frac{1}{n}\sum_{k=1}^{n}\mathcal{Y}_{xy}\!\left(u,v;m_{x}^{k},m_{y}^{k},h_{x}^{k},h_{y}^{k}\right)\tag{47}$$

where

a) $\mathcal{Y}_{xy}\!\left(u,v;m_{x}^{k},m_{y}^{k},h_{x}^{k},h_{y}^{k}\right)$ $(k=1,\dots,n)$ are the characteristic functionals corresponding to the kernel probability measures $\mathfrak{c}_{xy}\!\left(x,y;m_{x}^{k},m_{y}^{k},h_{x}^{k},h_{y}^{k}\right)$, as described in Definition 4.7.

b) The characteristic functional is compatible with the characteristic functional of the excitation process, i.e. $\mathcal{Y}^{n}_{xy}(0,v)=\mathcal{Y}^{n}_{y}(v)=\frac{1}{n}\sum_{k=1}^{n}\mathcal{Y}_{y}\!\left(v;m_{y}^{k},h_{y}^{k}\right)$. Hence the parameters $m_{y}^{k},h_{y}^{k}$ $(k=1,\dots,n)$ are given.

c) The characteristic functional satisfies the initial conditions (44) or (46), i.e.

$$\mathcal{Y}^{n}_{xy}\!\left(\upsilon\,\delta_{t_{0}}(\cdot),0\right)=\mathcal{Y}^{n}_{x}\!\left(\upsilon\,\delta_{t_{0}}(\cdot)\right)=\frac{1}{n}\sum_{k=1}^{n}\mathcal{Y}_{x}\!\left(\upsilon\,\delta_{t_{0}}(\cdot);m_{x}^{k},h_{x}^{k}\right)\tag{48}$$

Substituting representation (47) into the functional differential equation (45) we have

$$\frac{d}{dt}\sum_{k=1}^{n}\frac{\delta\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta u(t)}-a\sum_{k=1}^{n}\frac{\delta^{3}\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta u(t)^{3}}=\sum_{k=1}^{n}\frac{\delta\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta v(t)}\tag{49}$$

Multiplying equation (49), written in the arguments $(w,z)$, by $\mathcal{Y}^{l}_{xy}(u-w,\,v-z)$ and integrating with respect to $(w,z)$ over a Gaussian measure with zero mean value and covariance operator $C$, we obtain the following set of equations $(l=1,\dots,n)$, for every $u\in C_{c}^{\infty}$, $v\in C_{c}^{\infty}$:

$$\frac{d}{dt}\sum_{k=1}^{n}\ \iint\limits_{C_{c}^{\infty}\times C_{c}^{\infty}}\frac{\delta\,\mathcal{Y}^{k}_{xy}(w,z)}{\delta w(t)}\,\mathcal{Y}^{l}_{xy}(u-w,v-z)\;\mathfrak{c}_{G}(dw,dz)\;-\;a\sum_{k=1}^{n}\ \iint\limits_{C_{c}^{\infty}\times C_{c}^{\infty}}\frac{\delta^{3}\,\mathcal{Y}^{k}_{xy}(w,z)}{\delta w(t)^{3}}\,\mathcal{Y}^{l}_{xy}(u-w,v-z)\;\mathfrak{c}_{G}(dw,dz)\;=\;\sum_{k=1}^{n}\ \iint\limits_{C_{c}^{\infty}\times C_{c}^{\infty}}\frac{\delta\,\mathcal{Y}^{k}_{xy}(w,z)}{\delta z(t)}\,\mathcal{Y}^{l}_{xy}(u-w,v-z)\;\mathfrak{c}_{G}(dw,dz)\tag{50}$$

Using the notation

$$\iint\limits_{C_{c}^{\infty}\times C_{c}^{\infty}}\mathcal{Y}_{1}(u-w)\,\mathcal{Y}_{2}(w)\;\mathfrak{c}_{G}(dw)=\mathcal{T}_{C}\!\left[\mathcal{Y}_{1},\mathcal{Y}_{2}\right]\!(u)$$

we have the following set of equations $(l=1,\dots,n)$, for every $u\in C_{c}^{\infty}$, $v\in C_{c}^{\infty}$:


$$\frac{d}{dt}\sum_{k=1}^{n}\mathcal{T}_{C}\!\left[\frac{\delta\,\mathcal{Y}^{k}_{xy}}{\delta u(t)},\,\mathcal{Y}^{l}_{xy}\right]\!(u,v)\;-\;a\sum_{k=1}^{n}\mathcal{T}_{C}\!\left[\frac{\delta^{3}\,\mathcal{Y}^{k}_{xy}}{\delta u(t)^{3}},\,\mathcal{Y}^{l}_{xy}\right]\!(u,v)\;=\;\sum_{k=1}^{n}\mathcal{T}_{C}\!\left[\frac{\delta\,\mathcal{Y}^{k}_{xy}}{\delta v(t)},\,\mathcal{Y}^{l}_{xy}\right]\!(u,v)\tag{51}$$

The above set formulates a system of functional differential equations for the parameters of the kernel characteristic functionals, with initial conditions defined by equation (48). The difference from the original functional differential equation (49) is that every equation of this set controls a specific part of the probability space and is coupled with the other parts through the coupling of the set (51). Since the integrals are over Gaussian measures and the integrated functionals involve linear and quadratic forms, Theorems 4.1 and 4.4 can be applied for their analytical calculation; we will not go into the details of these calculations here. The resulting set of functional equations can be solved numerically either by integrating analytically with respect to a suitable Gaussian measure or by taking moment equations using the results of Section 5.3.2 of the present chapter. In the next section a special simplification will be applied to the set of functional equations (42) (equivalently, to equations (50)) in order to obtain numerical results without carrying out the analytical calculation of the functional integrals.


5.4.5. Simplification of the set of FDEs governing the Kernel Characteristic Functionals

In this section we shall simplify the set of functional differential equations (42) to obtain a set of ordinary differential equations suitable for immediate numerical solution. Based on the following simplification, we will not need to evaluate the functional integrals analytically. We must emphasize, however, that in this way we merely illustrate the validity of the proposed method; we do not expect the best possible numerical results. For this reason the following analysis will mainly concern the first-order system with cubic nonlinearity described in the previous sections,

$$\dot{x}(t)+k\,x(t)+a\,x^{3}(t)=y(t)\tag{52}$$

Analogous arguments can be used for the analysis of higher dimensional systems with more complex polynomial nonlinearities. For the numerical implementation we will use Gaussian kernel characteristic functionals.

The simplification will be based on restricting the concentration parameter (for Gaussian kernel characteristic functionals this parameter is the variance) to sufficiently small values, such that the interaction between every pair of Gaussian kernels can be considered negligible. However, this assumption cannot hold for long, since the equations governing each kernel lead to larger values of the concentration parameter. Thus, to avoid growth of the concentration parameter, whenever a variance exceeds a certain value we approximate the current probability measure by another set of kernel measures with different amplitudes and smaller variances. This set of kernel measures then evolves according to the dynamical equations until some concentration parameter again exceeds the threshold. The accuracy of the proposed method depends mainly on the threshold value of the concentration parameter.

Hence we use a kernel characteristic functional representation

$$\mathcal{Y}^{n}(\cdot)=\sum_{k=1}^{n}p_{k}\,\mathcal{Y}\!\left(\cdot\,;m_{x,k},m_{y,k},h_{k}\right),\qquad p_{k}\geq 0,\quad\sum_{k=1}^{n}p_{k}=1\tag{53}$$

where every $\mathcal{Y}\!\left(\cdot\,;m_{x,k},m_{y,k},h_{k}\right)$ $(k=1,\dots,n)$ is a Gaussian kernel characteristic functional.

Based on the above assumptions we can conclude that the set of equations (42) will take the form

$$\mathcal{T}_{\mathcal{L}}\!\left[\mathcal{Y}^{k}_{xy},\mathcal{Y}^{k}_{xy};C\right]\!(u,v)=0,\qquad k=1,\dots,n\tag{54}$$

or,

$$\mathcal{L}\!\left[\mathcal{Y}^{k}_{xy}(u,v)\right]=0,\qquad u\in C_{c}^{\infty,N},\ v\in C_{c}^{\infty,M},\quad k=1,\dots,n\tag{55}$$

Note that for the case described we need general amplitudes $0\leq p_{k}\leq 1$ with $\sum_{k=1}^{n}p_{k}=1$ (instead of amplitudes equal to each other, as assumed in the previous sections) for every kernel characteristic functional. These amplitudes will be assigned by the approximation algorithm every time a concentration parameter exceeds the threshold value. Now let us derive the ordinary


differential equations associated with every Gaussian kernel for the specific case of the one-dimensional system with linear and cubic terms. Equations (55) will take the form

$$\frac{d}{dt}\!\left[\frac{\delta\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta u(t)}\right]+k\,\frac{\delta\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta u(t)}-a\,\frac{\delta^{3}\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta u(t)^{3}}=\frac{\delta\,\mathcal{Y}^{k}_{xy}(u,v)}{\delta v(t)}\tag{56}$$

for every $u\in C_{c}^{\infty}$, $v\in C_{c}^{\infty}$ $(k=1,\dots,n)$. We observe that every characteristic functional controls

a specific region of the probability space and evolves independently of the others. Note also that the interchange of probability between the kernel characteristic functionals is handled entirely by the approximation algorithm, and only when some concentration parameter exceeds the threshold value. Although this assumption is very loose, since probability must be interchanged continuously (with respect to time) between the kernels, it gives us a first picture of the method. We emphasize that with the analytical calculation of the functional integrals the interchange of probability occurs continuously, since the system of ordinary differential equations governing the kernel parameters is coupled; in that case the interchange of probability is introduced by the nondiagonal terms of equation (42).

An important feature of the discussed method is its suitability for parallelized computation. Parallelization techniques can be naturally applied both to the dynamical evolution of the kernels and to the optimization algorithm. For the former, the algorithm can take advantage of the independent evolution of every kernel. For the optimization algorithm, we can approximate the given density (since it is already split into Gaussian kernels) by independent groups of kernels with small variance. In this way we can achieve very fast computations for systems of higher dimension, with various excitations.

To derive a set of ordinary differential equations for every FDE (56), we take moment equations using the results of Section 5.3.2 of the present chapter. Since every characteristic functional $\mathcal{Y}^{k}_{xy}$ is Gaussian, the functional differential equation (56) will be fulfilled if the first three moment equations are satisfied. Direct application of Section 5.3.2 of the present chapter and Appendix I leads to the following subset of ordinary differential equations for every kernel $(k=1,\dots,n)$:

$$\dot{m}^{k}_{X}(t)+k\,m^{k}_{X}(t)+a\left[m^{k}_{X}(t)\right]^{3}+3a\,B^{k}_{XX}(t,t)\,m^{k}_{X}(t)=m^{k}_{Y}(t)$$

$$B^{k}_{XX,t}(t,s)+k\,B^{k}_{XX}(t,s)+3a\left[m^{k}_{X}(t)\right]^{2}B^{k}_{XX}(t,s)+3a\,B^{k}_{XX}(t,t)\,B^{k}_{XX}(t,s)=B^{k}_{XY}(s,t)$$

$$B^{k}_{XY,t}(t,s)+k\,B^{k}_{XY}(t,s)+3a\,B^{k}_{XX}(t,t)\,B^{k}_{XY}(t,s)+3a\left[m^{k}_{X}(t)\right]^{2}B^{k}_{XY}(t,s)=B^{k}_{YY}(t,s)$$

for all $(t,s)\in I\times I$. We will not go into details concerning the numerical solution of the above equations. We only emphasize that special treatment was needed since, as noted, the variable $s$ does not participate in the dynamics of the equations; hence the above system must be solved for every value of $s$. Let us now give a precise description of the optimization algorithm used for rebuilding a measure (more precisely, a density) with kernels of smaller variance. As mentioned, we have to approximate a given superposition of Gaussian densities by another superposition subject to a maximum-variance restriction. We thus have the following optimization problem:


$$\min_{\{A^{0}_{i},\,m^{0}_{i}\}_{i=1}^{M},\ \sigma_{0}}\ \int_{-\infty}^{+\infty}\left[\sum_{i=1}^{N}\frac{A_{i}}{\sigma_{i}\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{x-m_{i}}{\sigma_{i}}\right)^{2}\right\}-\sum_{j=1}^{M}\frac{A^{0}_{j}}{\sigma_{0}\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{x-m^{0}_{j}}{\sigma_{0}}\right)^{2}\right\}\right]^{2}dx$$

where $\{A_{i},m_{i},\sigma_{i}\}_{i=1}^{N}$ are the amplitudes, mean values and variances of the given superposed kernels, and $\{A^{0}_{i},m^{0}_{i}\}_{i=1}^{M}$, $\sigma_{0}$ are the parameters of the kernels that approximate the given density. For simplicity we also restrict ourselves to a common variance $\sigma_{0}$ for all the Gaussian kernels. More details on the optimization algorithm can be found in Appendix II. In what follows we examine the above system for two different sets of parameters. A white-noise excitation case will be studied, with the system parameters described by the table

Non-dimensional parameters    Case I    Case II
k                                  1          1
a                                  1         -1
White noise intensity              1          1
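The kernel-refit optimization stated above can be sketched in a few lines. The following is an illustration only: a fixed grid of candidate means and an unconstrained discrete least-squares solve stand in for the actual algorithm of Appendix II (which is not reproduced here and additionally constrains the amplitudes to be non-negative). A wide Gaussian whose spread has exceeded the threshold is re-expressed by narrow kernels of common variance $\sigma_{0}^{2}$.

```python
import numpy as np

def gauss(x, m, s):
    return np.exp(-(x - m) ** 2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]
target = gauss(x, 0.0, 0.5)            # kernel whose spread exceeded the threshold

sigma0 = 0.2                           # common (threshold) standard deviation
centers = np.linspace(-1.5, 1.5, 13)   # assumed grid of candidate means
G = np.stack([gauss(x, c, sigma0) for c in centers], axis=1)

# Discrete L2 fit of the amplitudes A_j; the thesis minimizes the continuous
# L2 distance, whose Gaussian inner products have closed forms.
A, *_ = np.linalg.lstsq(G, target, rcond=None)

refit = G @ A
l2_err = np.sqrt(np.sum((refit - target) ** 2) * dx)
mass = np.sum(refit) * dx              # total probability carried by the refit
```

With kernel spacing comparable to $\sigma_{0}$, the refit reproduces the wide kernel closely while (approximately) conserving probability mass.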

We note that for the first case the nonlinear system has one stable fixed point located at zero. As the parameter $a$ decreases, a pitchfork bifurcation occurs at zero: two symmetric stable points appear at $\pm\sqrt{-k/a}$ and the fixed point at zero becomes unstable. Hence we have the bifurcation diagram shown below.

[Figure: bifurcation diagram of the fixed points x versus the bifurcation parameter.]

The above analysis will also be confirmed by the resulting densities. Thus we expect that for Case II the probability will concentrate around the two stable fixed points, giving a bimodal distribution; for Case I we expect a unimodal distribution with the probability concentrated at zero. Since the above fixed points are global attractors, we expect to obtain these


results after some time, independently of the initial density. For the numerical results presented below we use two cases of initial distributions. The first case consists of a Gaussian distribution centred at zero with $\sigma=0.1$. The second case is a convex superposition of two Gaussian distributions with characteristics $m_{1}=0$, $m_{2}=0.6$, $\sigma_{1}=0.1$, $\sigma_{2}=0.6$ and amplitudes $a_{1}=0.4$ and $a_{2}=0.6$, respectively.
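The expected unimodal/bimodal steady states can be cross-checked against the exact stationary Fokker-Planck solution of the scalar system (52) under white noise, a standard closed form used here only as a sketch. Here $D$ denotes the noise intensity, and the double-well case is taken with negative linear stiffness so that the stable points sit at $\pm 1$ (the table's sign conventions are assumed to map onto this potential).

```python
import numpy as np

D = 1.0                                     # white-noise intensity
x = np.linspace(-3.0, 3.0, 2001)
dx = x[1] - x[0]

def stationary_density(k, a):
    # Exact stationary FPK density for x' = -k x - a x^3 + sqrt(D)*noise:
    # p(x) ~ exp(-2 V(x)/D), with potential V(x) = k x^2/2 + a x^4/4.
    V = 0.5 * k * x**2 + 0.25 * a * x**4
    p = np.exp(-2.0 * V / D)
    return p / (p.sum() * dx)

p_mono = stationary_density(1.0, 1.0)       # Case I: single stable point at 0
p_bi = stationary_density(-1.0, 1.0)        # double well: stable points at +/-1

mode_mono = x[np.argmax(p_mono)]
mode_bi = x[np.argmax(np.where(x > 0.0, p_bi, 0.0))]
```

The single-well density is unimodal with its mode at zero, while the double-well density is bimodal with modes near $\pm 1$ and a local minimum at the unstable point $x=0$.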

$f_{x}(t,x)$ for the case of Gaussian white noise excitation with one stable fixed point at 0 and initial conditions of the first type.

The first plot shows with solid lines the mean values of the kernel probability measures. Their shape confirms the stability of the fixed point at 0. We can see that the initial distribution spreads out and closely approaches the final steady-state distribution. A very interesting point is that the concentration of probability does not stop, as can be seen from the outer kernels. This behaviour reveals the need to study the tail behaviour, since a representation that predicts the tail evolution would overcome the mentioned problem.
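The qualitative behaviour described above can be cross-checked with a simplified single-time Gaussian moment closure for system (52), a crude sketch of a single kernel's dynamics rather than the kernel method itself. Closing the third and fourth moments Gaussianly (using $E[x^{3}]=m^{3}+3mP$ and $\mathrm{Cov}(x,x^{3})=3P(m^{2}+P)$) gives $\dot{m}=-km-a(m^{3}+3mP)$ and $\dot{P}=-2kP-6aP(m^{2}+P)+D$, with $D$ the white-noise intensity. Starting from the first-type initial condition, the mean stays at zero and the variance relaxes to its steady value.

```python
import numpy as np

k, a, D = 1.0, 1.0, 1.0          # Case I parameters; D = white-noise intensity

def rhs(m, P):
    # Gaussian-closure moment equations for x' = -k x - a x^3 + noise,
    # using E[x^3] = m^3 + 3 m P and Cov(x, x^3) = 3 P (m^2 + P).
    dm = -k * m - a * (m**3 + 3.0 * m * P)
    dP = -2.0 * k * P - 6.0 * a * P * (m**2 + P) + D
    return dm, dP

m, P = 0.0, 0.1**2               # first-type initial condition N(0, 0.1^2)
dt = 1.0e-3
for _ in range(20000):           # explicit Euler up to t = 20
    dm, dP = rhs(m, P)
    m, P = m + dt * dm, P + dt * dP

# The steady-state variance solves 6 a P^2 + 2 k P - D = 0.
P_star = (np.sqrt(4.0 * k**2 + 24.0 * a * D) - 2.0 * k) / (12.0 * a)
```

The closure reproduces the spreading of the initial distribution toward a steady state; being a single Gaussian, it of course cannot capture the tail behaviour discussed above.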


$f_{x}(t,x)$ for the case of Gaussian white noise excitation with one stable fixed point at 0 and initial conditions of the second type.

Again we can see from the first plot that the kernel mean values are attracted by the stable fixed point (global attractor). In this case, since the initial distribution is more complex, more kernels are used for an efficient representation of the solution. Notice that the interchange of probability between kernels takes place every 0.2 seconds, i.e. every time the mean-value curves exhibit a discontinuity.

Notice that, since the steady-state probability distribution has asymptotic tail behaviour of order $e^{-x^{4}}$ while we are using Gaussian representations, we will repeatedly observe the previously described behaviour of the outer kernels. Hence, it is very important to use kernel representations with matching tail behaviour. Nevertheless, the above representation produces satisfactory results for the main probability mass.
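The tail mismatch noted here is easy to quantify (a sketch with Case I parameters; the stationary density decays like $\exp(-x^{4}/2)$, faster than any Gaussian): comparing the exact stationary density with a variance-matched Gaussian surrogate shows the Gaussian grossly overestimating the far tails even though it matches the bulk reasonably well.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

V = 0.5 * x**2 + 0.25 * x**4               # Case I potential (k = a = D = 1)
p = np.exp(-2.0 * V)                       # stationary density, exp(-x^4/2) tails
p /= p.sum() * dx

var = np.sum(x**2 * p) * dx                # variance-matched Gaussian surrogate
g = np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

i = int(np.searchsorted(x, 2.5))           # a point well into the tail
tail_ratio = g[i] / p[i]                   # Gaussian vastly overestimates the tail
```

This is precisely why the outer Gaussian kernels keep concentrating: they are forced to mimic a density whose tails decay much faster than their own.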


$f_{x}(t,x)$ for the case of Gaussian white noise excitation with two stable fixed points at $\pm 1$ and initial conditions of the first type.

In this case we examine the behaviour of the system for different system parameters. Hence, the underlying deterministic system has two stable fixed points at $\pm 1$, while the fixed point at 0 has become unstable. For the first case of initial conditions we have full symmetry with respect to $x=0$; the symmetry is maintained for the kernel mean values as well. Note also that, despite the discontinuities of the kernel mean values, the probability distribution is continuous (second plot).

After a short time we can see that the probability is concentrated near the stable fixed points and a steady state is approached. Again we can see the higher concentration of the outer kernels, which can be explained by the tail-evolution problems. It must also be noted that the rapid spread of probability is explained by the chosen white noise intensity.


$f_{x}(t,x)$ for the case of Gaussian white noise excitation with two stable fixed points at $\pm 1$ and initial conditions of the second type.

This is the most interesting case. We start with a bimodal initial condition and approach a bimodal steady-state distribution. From the first plot we note that until about 0.4 seconds the starting bimodal distribution is transformed into a unimodal one. As the system evolves, probability mass is transferred from the stable fixed point located at $-1$ to the stable fixed point at $+1$. This is a very interesting phenomenon, since between the two stable fixed points lies an unstable one, which divides the phase space of the deterministic problem and does not allow orbits to cross from one side to the other. This is not the case for the probability mass, where we know that after a sufficient time period the probabilities of the two stable-fixed-point regions will be equal.


5.5. References

ATHANASSOULIS, G.A. & GAVRILIADIS, P.N., 2002, The Truncated Hausdorff Moment Problem solved by using Kernel Density Functions. Probabilistic Engineering Mechanics, 17, 273-291.
BATCHELOR, G.K., 1953, Theory of Homogeneous Turbulence. Cambridge University Press.
BERAN, M.J., 1968, Statistical Continuum Mechanics. Interscience Publishers.
CARTAN, H., 1971, Differential Calculus. Kershaw Publishing.
DI PAOLA, M. & SOFI, A., 2002, Approximate Solution of the Fokker-Planck-Kolmogorov Equation. Probabilistic Engineering Mechanics, 17, 369-384.
EGOROV, A.D., SOBOLEVSKY, P.I. & YANOVICH, L.A., 1993, Functional Integrals: Approximate Evaluation and Applications. Kluwer Academic Publishers.
FALKOVICH, G. & LEBEDEV, V., 1997, Single Point Velocity Distribution in Turbulence. Physical Review Letters, 79, 4159-4161.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1996, Theory of Random Processes. Dover Publications.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1974, The Theory of Stochastic Processes I. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes II. Springer.
GIKHMAN, I.I. & SKOROKHOD, A.V., 1975, The Theory of Stochastic Processes III. Springer.
GUO-KANG ER, 2000, Exponential Closure Method for some Randomly Excited Non-Linear Systems. International Journal of Non-Linear Mechanics, 35, 69-78.
HOPF, E., 1952, Statistical Hydromechanics and Functional Calculus. Journal of Rational Mechanics and Analysis, 1, 87-123.
JOE, H., 1997, Multivariate Models and Dependence Concepts. Chapman & Hall, London.
KONSTANT, D.G. & PITERBARG, V.I., 1993, Extreme Values of the Cyclostationary Gaussian Random Process. Journal of Applied Probability.
KOTULSKI, Z. & SOBCZYK, K., 1988, On the Moment Stability of Vibratory Systems with Random Parametric Impulsive Excitation. Arch. Mech., 40, 4, 465-475.
LANGOUCHE, F., ROEKAERTS, D. & TIRAPEGUI, E., 1979, Functional Integral Methods for Stochastic Fields. Physica, 95A, 252-274.
MONIN, A.S. & YAGLOM, A.M., 1965, Statistical Fluid Mechanics: Mechanics of Turbulence. MIT Press.
MOSBAH, H. & FOGLI, M., 2003, An Original Approximate Method for Estimating the Invariant Probability Distribution of a Large Class of Multi-Dimensional Nonlinear Stochastic Oscillators. Probabilistic Engineering Mechanics, 18, 165-170.
POLIDORI, D.C., BECK, J.L. & PAPADIMITRIOU, C., 2000, A New Stationary PDF Approximation for Non-Linear Oscillators. International Journal of Non-Linear Mechanics, 35, 657-673.
PRADLWARTER, H.J., 2001, Non-Linear Stochastic Response Distributions by Local Statistical Linearization. International Journal of Non-Linear Mechanics, 36, 1135-1151.
PROHOROV, YU.V., 1956, Convergence of Random Processes and Limit Theorems in Probability Theory. Theory of Probability and Its Applications, I, 157-214.
PROHOROV, YU.V. & ROZANOV, YU.A., 1969, Probability Theory. Springer.
PROHOROV, YU.V. & SAZONOV, V.V., 1961, Some Results Associated with Bochner's Theorem. Theory of Probability and Its Applications, VI, 82-87.
PUGACHEV, V.S. & SINITSYN, 1987, Stochastic Differential Systems. John Wiley & Sons.
ROSEN, G., 1960, Turbulence Theory and Functional Integration I. The Physics of Fluids, 3, 519-524.
ROSEN, G., 1960, Turbulence Theory and Functional Integration II. The Physics of Fluids, 3, 525-528.
ROSEN, G., 1967, Functional Integration Theory for Incompressible Fluid Turbulence. The Physics of Fluids, 10, 2614-2619.
ROSEN, G., OKOLOWSKI, J.A. & ECKSTUT, G., 1969, Functional Integration Theory for Incompressible Fluid Turbulence. II. Journal of Mathematical Physics, 10, 415-421.
SKOROKHOD, A.V., 1974, Integration in Hilbert Spaces. Springer.
SKOROKHOD, A.V., 1984, Random Linear Operators. D. Reidel Publishing Company.
SOBCZYK, K., 1991, Stochastic Differential Equations. Kluwer Academic Publishers.
SOONG, T.T., 1973, Random Differential Equations. Academic Press.
TATARSKII, V.I., 1995, Characteristic Functionals for one Class of Non-Gaussian Random Functions. Waves in Random Media, 5, 243-252.
VISHIK, M.J. & FURSIKOV, A.V., 1980, Mathematical Problems of Statistical Hydromechanics. Kluwer Academic Publishers.


Subjects for Future Research

As has been seen in the text, there are numerous threads for future investigation. One very interesting direction would be the application of the presented results to the numerical solution of stochastic differential equations describing complex dynamical systems. Some topics in this direction of future research are:

• Implementation of a numerical method for the solution of the partial differential equations derived in Section 5.3.4 through the reduction of the functional differential equation. In this way we would obtain results for the joint characteristic function of the response, its derivatives and the excitation at various time instants. Probably the most convenient approach would be the use of kernel characteristic function representations with given marginals.

• Direct application of the proposed method of Section 5.4.4 without any simplifications. In this way we could solve the full statistical problem and obtain the complete probability structure through the characteristic functional. This direction would require the analytical computation of infinite-dimensional integrals by applying the theorems presented in Section 5.4.1.

• Calculation of the steady-state probability measures associated with a dynamical system by means of a suitable kernel characteristic functional representation. This representation should describe a stationary measure, and the numerical computation of its parameters might use a variation of the proposed method of kernel characteristic functionals.

• Improvement of the simplified method presented in Section 5.4.5, and especially of the optimization algorithm described in Appendix II, for faster computation of the probability densities as well as for the numerical solution of higher-dimensional problems. In this direction, the method's embedded capacity for parallel computation, discussed in Section 5.4.5, should be taken into account.

Another research thread of great importance is the analytical investigation of dynamical systems by means of the kernel probability measure representations. More specifically, some topics in this direction could be:

• Study of the evolution of the asymptotic behaviour (tail) of the response, directly through the analytical study of the characteristic functional, and more specifically through the study of the asymptotic behaviour of the characteristic functional in the neighbourhood of zero. In this direction a useful tool might be the Tatarskii characteristic functional. The above topic is of great importance, since knowledge of the tail behaviour will allow solving the problem with few kernels.

• Analytical study of the behaviour of the probability structure of the response by studying analytically the parameters of the kernel characteristic functionals in relation to critical system parameters. An interesting point here is whether the bifurcation analysis of the underlying deterministic system could predict the probability structure of the response through the kernels. In this way we could study more clearly the effects of nonlinearity and stochasticity simultaneously.


Finally, an important topic concerning the study of kernel characteristic functionals is the proof of theorems analogous to Theorem 4.13 of Section 5.4.3, due to D.G. Konstant and V.I. Piterbarg, concerning the probability structure of the extremes of stochastic processes with known probability measure. In this way we would have necessary conditions for a probability measure/characteristic functional to be admissible for the solution of the statistical problems.


Appendix I

Volterra Derivatives of the Gaussian Characteristic Functional

For the sake of convenience we shall now compute the derivatives of the Gaussian characteristic functional that are essential for our analysis. Let

\[
\mathcal{Y}(u,v)=\exp\left( i\left\langle m_X,u\right\rangle + i\left\langle m_Y,v\right\rangle - \tfrac{1}{2}\left\langle C_{XX}u,u\right\rangle - \tfrac{1}{2}\left\langle C_{YY}v,v\right\rangle - \left\langle C_{XY}u,v\right\rangle \right)
\]

Then we have the following Volterra derivatives, computed at \((0,0)\), where \(B_{XX}=C_{XX}\), \(B_{XY}=C_{XY}\) and \(B_{YY}=C_{YY}\) denote the covariance functions:

\[
\frac{1}{i}\,\frac{\delta\mathcal{Y}(0,0)}{\delta u(t_1)} = m_X(t_1)
\]

\[
\frac{1}{i}\,\frac{\delta\mathcal{Y}(0,0)}{\delta v(t_1)} = m_Y(t_1)
\]

\[
\frac{1}{i^2}\,\frac{\delta^2\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta u(t_2)} = B_{XX}(t_1,t_2) + m_X(t_1)\,m_X(t_2)
\]

\[
\frac{1}{i^2}\,\frac{\delta^2\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta v(t_2)} = B_{XY}(t_1,t_2) + m_X(t_1)\,m_Y(t_2)
\]

\[
\frac{1}{i^2}\,\frac{\delta^2\mathcal{Y}(0,0)}{\delta v(t_1)\,\delta v(t_2)} = B_{YY}(t_1,t_2) + m_Y(t_1)\,m_Y(t_2)
\]

\[
\frac{1}{i^3}\,\frac{\delta^3\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta u(t_2)\,\delta u(t_3)}
= m_X(t_1)B_{XX}(t_2,t_3) + m_X(t_2)B_{XX}(t_1,t_3) + m_X(t_3)B_{XX}(t_1,t_2) + m_X(t_1)\,m_X(t_2)\,m_X(t_3)
\]

\[
\frac{1}{i^3}\,\frac{\delta^3\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta u(t_2)\,\delta v(t_3)}
= m_X(t_1)B_{XY}(t_2,t_3) + m_X(t_2)B_{XY}(t_1,t_3) + m_Y(t_3)B_{XX}(t_1,t_2) + m_X(t_1)\,m_X(t_2)\,m_Y(t_3)
\]

\[
\begin{aligned}
\frac{1}{i^4}\,\frac{\delta^4\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta u(t_2)\,\delta u(t_3)\,\delta u(t_4)}
={}& B_{XX}(t_1,t_2)B_{XX}(t_3,t_4) + B_{XX}(t_1,t_3)B_{XX}(t_2,t_4) + B_{XX}(t_1,t_4)B_{XX}(t_2,t_3)\\
&+ m_X(t_1)m_X(t_2)B_{XX}(t_3,t_4) + m_X(t_1)m_X(t_3)B_{XX}(t_2,t_4) + m_X(t_1)m_X(t_4)B_{XX}(t_2,t_3)\\
&+ m_X(t_2)m_X(t_3)B_{XX}(t_1,t_4) + m_X(t_2)m_X(t_4)B_{XX}(t_1,t_3) + m_X(t_3)m_X(t_4)B_{XX}(t_1,t_2)\\
&+ m_X(t_1)\,m_X(t_2)\,m_X(t_3)\,m_X(t_4)
\end{aligned}
\]

\[
\begin{aligned}
\frac{1}{i^4}\,\frac{\delta^4\mathcal{Y}(0,0)}{\delta u(t_1)\,\delta u(t_2)\,\delta u(t_3)\,\delta v(t_4)}
={}& B_{XX}(t_1,t_2)B_{XY}(t_3,t_4) + B_{XX}(t_1,t_3)B_{XY}(t_2,t_4) + B_{XX}(t_2,t_3)B_{XY}(t_1,t_4)\\
&+ m_X(t_1)m_X(t_2)B_{XY}(t_3,t_4) + m_X(t_1)m_X(t_3)B_{XY}(t_2,t_4) + m_X(t_2)m_X(t_3)B_{XY}(t_1,t_4)\\
&+ m_X(t_1)m_Y(t_4)B_{XX}(t_2,t_3) + m_X(t_2)m_Y(t_4)B_{XX}(t_1,t_3) + m_X(t_3)m_Y(t_4)B_{XX}(t_1,t_2)\\
&+ m_X(t_1)\,m_X(t_2)\,m_X(t_3)\,m_Y(t_4)
\end{aligned}
\]

The above calculations will be used for the derivation of moment equations from the FDE. These equations correspond to the Gaussian closure technique. In Section 5.4.5 of Chapter 5 they will be used for the application of the Gaussian closure technique locally in phase space, and will allow us to get a first picture of the proposed method.
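As a quick sanity check, the fourth-order identity above (Isserlis' theorem for a Gaussian vector with non-zero mean) can be verified by Monte Carlo; the mean vector and covariance matrix below are arbitrary illustrative choices standing in for \(m_X(t_k)\) and \(B_{XX}(t_j,t_k)\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for (X(t1), X(t2), X(t3), X(t4))
m = np.array([1.0, 0.5, -0.5, 1.5])                 # mean values m_X(t_k)
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.3, 0.5, 1.0, 0.0],
              [0.1, 0.3, 0.5, 1.0]])
B = A @ A.T + np.eye(4)                             # covariance B_XX, positive definite

# Fourth moment E[X1 X2 X3 X4] from the expansion above
pair3 = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
moment = sum(B[p] * B[q] for p, q in pair3)         # covariance-covariance terms
moment += sum(m[i] * m[j] * B[k, l]                 # the six mean-mean-covariance terms
              for (i, j), (k, l) in pair3 + [(q, p) for p, q in pair3])
moment += m.prod()                                  # pure mean term

# Monte Carlo estimate of the same expectation
X = rng.multivariate_normal(m, B, size=500_000)
mc = (X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3]).mean()
print(moment, mc)   # the two numbers agree to Monte Carlo accuracy
```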


Appendix II

Optimization Algorithm

We have the following optimization problem:

\[
\min_{\{A^0_j\},\{m^0_j\},\sigma_0}\;
\int_{-\infty}^{+\infty}\left[
\sum_{i=1}^{N}\frac{A_i}{\sqrt{2\pi}\,\sigma_i}\exp\left\{-\frac{(x-m_i)^2}{2\sigma_i^2}\right\}
-\sum_{j=1}^{M}\frac{A^0_j}{\sqrt{2\pi}\,\sigma_0}\exp\left\{-\frac{\big(x-m^0_j\big)^2}{2\sigma_0^2}\right\}
\right]^2 dx
\]

where \(\{A_i,m_i,\sigma_i\}_{i=1}^{N}\) are the amplitudes, mean values and variances of the given superposed kernels, and \(\{A^0_j,m^0_j\}_{j=1}^{M}\) are the parameters of the kernels that approximate the given density. We also restrict ourselves, for simplicity, to the case of a common variance \(\sigma_0\) for all the Gaussian kernels. Hence we will have the restrictions

\[
LB<\sigma_0<UB
\qquad\text{and}\qquad
\sum_{j=1}^{M}A^0_j=1 .
\]

Expanding the above, we have the function for minimization:

\[
\begin{aligned}
F\big(A^0_i,\sigma_0,M\big)
={}&\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{A_iA_j}{2\pi\,\sigma_i\sigma_j}
\int_{-\infty}^{+\infty}\exp\left\{-\frac{(x-m_i)^2}{2\sigma_i^2}-\frac{(x-m_j)^2}{2\sigma_j^2}\right\}dx\\
&-2\sum_{i=1}^{N}\sum_{j=1}^{M}\frac{A_iA^0_j}{2\pi\,\sigma_i\sigma_0}
\int_{-\infty}^{+\infty}\exp\left\{-\frac{(x-m_i)^2}{2\sigma_i^2}-\frac{\big(x-m^0_j\big)^2}{2\sigma_0^2}\right\}dx\\
&+\sum_{i=1}^{M}\sum_{j=1}^{M}\frac{A^0_iA^0_j}{2\pi\,\sigma_0^2}
\int_{-\infty}^{+\infty}\exp\left\{-\frac{\big(x-m^0_i\big)^2}{2\sigma_0^2}-\frac{\big(x-m^0_j\big)^2}{2\sigma_0^2}\right\}dx .
\end{aligned}
\]

The integrations can be carried out analytically. Hence we will have

\[
\begin{aligned}
F\big(A^0_i,\sigma_0,M\big)
={}&\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{A_iA_j}{\sqrt{2\pi}\sqrt{\sigma_i^2+\sigma_j^2}}
\exp\left\{-\frac{(m_j-m_i)^2}{2\big(\sigma_i^2+\sigma_j^2\big)}\right\}\\
&-2\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{A^0_iA_j}{\sqrt{2\pi}\sqrt{\sigma_0^2+\sigma_j^2}}
\exp\left\{-\frac{\big(m_j-m^0_i\big)^2}{2\big(\sigma_0^2+\sigma_j^2\big)}\right\}\\
&+\sum_{i=1}^{M}\sum_{j=1}^{M}\frac{A^0_iA^0_j}{\sqrt{2\pi}\sqrt{2\sigma_0^2}}
\exp\left\{-\frac{\big(m^0_j-m^0_i\big)^2}{4\sigma_0^2}\right\} .
\end{aligned}
\]

To minimize the above quantity we fix the common variance \(\sigma_0\) and the mean values \(m^0_i,\ i=1,2,\ldots,M\), which are placed equidistantly over the interval \(\big[\min(m)-3\sigma_0,\ \max(m)+3\sigma_0\big]\), i.e.

\[
m^0_i=\min(m)-3\sigma_0+\frac{i-1}{M-1}\big[\max(m)-\min(m)+6\sigma_0\big] .
\]
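The closed-form reduction above rests on the identity \(\int N(x;m_i,\sigma_i)\,N(x;m_j,\sigma_j)\,dx = \big(2\pi(\sigma_i^2+\sigma_j^2)\big)^{-1/2}\exp\{-(m_i-m_j)^2/2(\sigma_i^2+\sigma_j^2)\}\). A quick numerical cross-check of the resulting expression for \(F\) against direct quadrature (all kernel parameters below are illustrative choices) might look as follows:

```python
import numpy as np

def mixture(x, A, m, s):
    """Evaluate a superposition of Gaussian kernels at the points x."""
    x = np.asarray(x)[:, None]
    return (A / (np.sqrt(2 * np.pi) * s) * np.exp(-(x - m) ** 2 / (2 * s ** 2))).sum(axis=1)

def overlap(Ai, mi, si, Aj, mj, sj):
    """Closed-form double sum of the Gaussian-kernel product integrals."""
    S2 = si[:, None] ** 2 + sj[None, :] ** 2
    G = np.exp(-(mi[:, None] - mj[None, :]) ** 2 / (2 * S2)) / np.sqrt(2 * np.pi * S2)
    return Ai @ G @ Aj

# Given kernels (illustrative) and approximating kernels with a common variance sigma_0
A, m, s = np.array([0.6, 0.4]), np.array([-1.0, 2.0]), np.array([0.8, 1.2])
A0, m0 = np.array([0.5, 0.5]), np.array([-0.5, 1.5])
s0 = np.full(2, 1.0)

# F = <p,p> - 2<p,q> + <q,q> in closed form
F_closed = (overlap(A, m, s, A, m, s)
            - 2 * overlap(A, m, s, A0, m0, s0)
            + overlap(A0, m0, s0, A0, m0, s0))

# The same quantity by brute-force quadrature of the squared difference
x = np.linspace(-15.0, 15.0, 200001)
diff = mixture(x, A, m, s) - mixture(x, A0, m0, s0)
F_quad = (diff ** 2).sum() * (x[1] - x[0])
print(F_closed, F_quad)   # the two values agree
```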


Thus the minimization problem is equivalent to the linear system of equations

\[
B_{ij}A^0_j=C_i
\]

where

\[
B_{ij}=\frac{1}{\sqrt{2\pi}\,\sqrt{2}\,\sigma_0}\exp\left\{-\frac{\big(m^0_j-m^0_i\big)^2}{4\sigma_0^2}\right\},
\qquad i=1,\ldots,M,\quad j=1,\ldots,M,
\]

\[
C_i=\sum_{k=1}^{N}\frac{A_k}{\sqrt{2\pi}\sqrt{\sigma_0^2+\sigma_k^2}}
\exp\left\{-\frac{\big(m_k-m^0_i\big)^2}{2\big(\sigma_0^2+\sigma_k^2\big)}\right\},
\qquad i=1,\ldots,M,
\]

and the value of the minimized function will be (with summation over repeated indices, \(D_{ij}\) denoting the corresponding kernel matrix of the given mixture)

\[
F\big(A^0_i,\sigma_0,M\big)=A_iD_{ij}A_j-2A^0_iC_i+A^0_iB_{ij}A^0_j .
\]

To the linear system of equations we must add the condition

\[
\sum_{j=1}^{M}A^0_j=1 .
\]

Thus we have a least-squares problem with \(M+1\) equations. For our case we form the function

\[
G(\sigma_0,M)=F\big(A^0_{i,\mathrm{opt}},\sigma_0,M\big)
\]

with the amplitudes \(A^0_{i,\mathrm{opt}}\) defined by the least-squares problem. Thus, for every \(M\) we minimize the function \(G(\sigma_0,M)\) with respect to \(\sigma_0\), and then we choose the optimum \(M_{\mathrm{opt}},\ \sigma_{0,\mathrm{opt}},\ A^0_{i,\mathrm{opt}}\).
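The procedure above can be sketched compactly: build \(B\) and \(C\), append the normalization constraint as an extra least-squares row, and search over a grid of common variances \(\sigma_0\). The target mixture parameters and the \(\sigma_0\) grid below are illustrative assumptions, not values used in the thesis:

```python
import numpy as np

def overlap(Ai, mi, si, Aj, mj, sj):
    """Closed-form value of the Gaussian-kernel product double sums."""
    S2 = si[:, None] ** 2 + sj[None, :] ** 2
    G = np.exp(-(mi[:, None] - mj[None, :]) ** 2 / (2 * S2)) / np.sqrt(2 * np.pi * S2)
    return Ai @ G @ Aj

def fit_for_sigma(A, m, s, M, s0):
    """Solve the (M+1)-row least-squares problem B a = C, sum(a) = 1, for fixed sigma_0."""
    m0 = np.linspace(m.min() - 3 * s0, m.max() + 3 * s0, M)      # equispaced kernel means
    B = (np.exp(-(m0[:, None] - m0[None, :]) ** 2 / (4 * s0 ** 2))
         / (np.sqrt(2 * np.pi) * np.sqrt(2) * s0))
    T2 = s0 ** 2 + s[None, :] ** 2
    C = (A / np.sqrt(2 * np.pi * T2)
         * np.exp(-(m[None, :] - m0[:, None]) ** 2 / (2 * T2))).sum(axis=1)
    rows = np.vstack([B, np.ones(M)])                             # normalization as extra row
    rhs = np.append(C, 1.0)
    a, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    sv = np.full(M, s0)
    G_val = (overlap(A, m, s, A, m, s) - 2 * overlap(a, m0, sv, A, m, s)
             + overlap(a, m0, sv, a, m0, sv))                     # exact L2 error of the fit
    return a, m0, G_val

# Given mixture to be approximated (illustrative parameters)
A = np.array([0.7, 0.3]); m = np.array([0.0, 3.0]); s = np.array([1.0, 0.5])

# For fixed M, minimize G(sigma_0, M) over a grid of common variances and keep the best
M = 8
G_best, s0_best = min((fit_for_sigma(A, m, s, M, s0)[2], s0)
                      for s0 in np.linspace(0.3, 1.5, 25))
print(s0_best, G_best)   # optimal common variance and the attained L2 error
```

In practice the same search would be repeated over several values of \(M\) before selecting \(M_{\mathrm{opt}}\).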


Appendix III

Application on Ship Stability and Capsizing on Random Sea

The Probabilistic Model

The current regulations and criteria for assuring the stability of a ship and preventing it from capsizing (see the Codes of the International Maritime Organization (IMO) and of Germanischer Lloyd) are empirical and are based on the properties of the righting lever of the ship, taking only hydrostatic forces into account. For details and criticism see Kreuzer and Wendt (p. 1836). These static criteria, which neglect the motion of the ship as well as the impact of seaway and wind, obviously do not guarantee total stability, as they cannot prevent the loss of at least 30 ships of tonnage greater than 500 GT every year due to severe weather conditions. Hence researchers agree that these criteria have to be modified by using hydrodynamic models of the ship-sea system, by describing the sea as a random field, and by analyzing the ship as a rigid body with 6 degrees of freedom, using methods of nonlinear dynamics and the theory of random dynamical systems.

The present state of (deterministic as well as stochastic) ship dynamics research has been recently documented in the Theme Issue of the Phil. Trans. Royal Soc. London (Series A) entitled “The nonlinear dynamics of ships”, edited by Spyrou and Thompson. The volume includes the extended overview by Spyrou and Thompson, presenting also the historical development of the field as well as future directions of research, the prime one being the “integration of the probabilistic character of the seaway into the study of nonlinear dynamics” – which we will try to address.

In the following analysis we will not consider the linear model of the rolling equation, since it results from the assumption that the waves are small; hence the linearized model describes only small ship motions. Transition to large roll angles requires the introduction of nonlinear terms into the ship motion equations. Unfortunately, there is no theory available to derive the ship motion equations in a consistent manner, so we use a physics-based approach, considering forces of a different nature separately and trying to combine them into one model of nonlinear rolling suitable for capsizing studies.

The forces acting on a ship in a seaway can be classified in the following manner (BELENKY & SEVASIANOV, Stability and Safety of Ships):

• Inertial hydrodynamic forces (e.g. added mass)
• Wave damping hydrodynamic forces
• Viscous damping forces
• Hydrostatic forces
• Wave excitation forces
• Other forces, including:
  • Aerodynamic forces
  • Appendage forces (including damping from rollers and anti-rolling fins)
  • Forces due to fluid motion in internal tanks, including anti-roll tanks
  • Hull “manoeuvring” forces, propeller thrust, etc.

There are other systems for classifying these forces, and this classification assigns forces of a different nature to the same category. However, it reflects the role each force plays in ship motions.

For our analysis we will use the simplified, but nonlinear, rolling equation

\[
\big(I_{xx}+a_{44}\big)\,\ddot\phi+2N_\phi\,\dot\phi+mg\cdot GZ(\phi)=mg\cdot GM\cdot a_E(t) \tag{1}
\]

where
• we have neglected nonlinear damping forces,
• we have neglected the effect of the bulwark entering the water during heeling,
• we have neglected the change of the GZ curve due to irregular seas,
• we have neglected the effects of possible special anti-rolling devices,
• \(N_\phi\) is a linear damping coefficient,
• \(a_E(t)\) describes the excitation due to wave forces.

To “randomize” a capsizing model we need to:
• introduce a model for the stochastic excitation due to wave forces,
• calculate probabilistic characteristics of rolling, roll velocities and other processes included in the model of capsizing,
• calculate probabilistic characteristics of the “carrier process” – the process whose crossing of the boundary is associated with capsizing,
• calculate the probability of crossing.

In the direction described above we will express the stochastic forces using the well-known Pierson–Longuet-Higgins model (see Chapter 2). Hence we assume that \(a_E(t)\) is a normal, stationary stochastic process with a given spectrum or correlation function. Moreover we assume that all the necessary ship characteristics (such as the GZ curve, GM, etc.) are known.
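The Pierson–Longuet-Higgins representation of a stationary Gaussian excitation with a prescribed spectrum \(S(\omega)\) is a superposition of harmonics with deterministic amplitudes and independent random phases. A minimal sketch follows; the spectral shape and its parameters are illustrative placeholders, not the sea spectrum used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def S(w):
    """Illustrative one-sided excitation spectrum (placeholder shape)."""
    return 0.5 * w ** 2 * np.exp(-w ** 2)

# Random-phase synthesis: a_E(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + eps_k)
w = np.linspace(0.01, 6.0, 400)            # discretized frequencies
dw = w[1] - w[0]
amp = np.sqrt(2 * S(w) * dw)               # deterministic amplitudes
eps = rng.uniform(0, 2 * np.pi, w.size)    # independent uniform random phases

t = np.linspace(0.0, 2000.0, 100001)
x = np.zeros_like(t)
for wk, ak, ek in zip(w, amp, eps):        # superpose the harmonics
    x += ak * np.cos(wk * t + ek)

# By construction the process is stationary and asymptotically Gaussian, with
# variance equal to the integral of the spectrum (= sum of amp^2 / 2):
print(x.var(), (amp ** 2 / 2).sum())
```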

For the probabilistic analysis we will first write eq. (1) in a non-dimensional, state-space form, i.e.

\[
\begin{aligned}
\dot\phi &= \theta\\
\dot\theta &+ 2b\,\theta + V'(\phi) = F_W(t)
\end{aligned} \tag{2}
\]

where
• \(b = N_\phi\big/\big(I_{xx}+a_{44}\big)\),
• \(V(\phi) = \dfrac{mg}{I_{xx}+a_{44}}\displaystyle\int GZ(\varphi)\,d\varphi\),
• \(F_W(t) = mg\cdot GM\cdot a_E(t)\big/\big(I_{xx}+a_{44}\big)\).
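Using a synthesized excitation of the kind just described, the state-space system (2) with a cubic restoring term \(V'(\phi)=a_1\phi+a_3\phi^3\) (as assumed later in this appendix) can be integrated numerically. All coefficients below are illustrative, not values for an actual hull; a hardening term \(a_3>0\) is used here so that the demonstration run stays bounded:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative coefficients of phi' = theta, theta' + 2 b theta + a1 phi + a3 phi^3 = F_W(t)
b, a1, a3 = 0.05, 1.0, 0.2

# Excitation F_W(t) as a random-phase harmonic superposition (toy narrow-banded spectrum)
w = np.linspace(0.05, 4.0, 200)
dw = w[1] - w[0]
amp = np.sqrt(2 * 0.02 * np.exp(-(w - 1.0) ** 2) * dw)
eps = rng.uniform(0, 2 * np.pi, w.size)

def F_W(t):
    return (amp * np.cos(w * t + eps)).sum()

# Explicit Euler integration of the state-space system (2)
dt, T = 0.01, 500.0
n = int(T / dt)
phi, theta = 0.0, 0.0
roll = np.empty(n)
for k in range(n):
    f = F_W(k * dt)
    phi, theta = (phi + dt * theta,
                  theta + dt * (f - 2 * b * theta - a1 * phi - a3 * phi ** 3))
    roll[k] = phi

print(roll.std())   # a stationary random roll response develops after a transient
```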


Functional Differential Equation for the Rolling Equation

We shall derive the corresponding functional differential equation for the stochastic system (2). We assume that

a) \(F_W(\cdot\,;\cdot): I\times\Omega\to\mathbb{R}\) is a normal, stationary, mean-square continuous stochastic process defined on the probability space \(\big(\Omega,\mathcal{U}(\Omega),\mathcal{P}\big)\), with the associated probability space \(\big(C_c^\infty,\mathcal{U}(C_c^\infty),\mathcal{P}_F\big)\) and characteristic functional \(\mathcal{Y}_F\);

b) \(\big(\theta(0;\cdot),\phi(0;\cdot)\big):\Omega\to\mathbb{R}^2\) are random variables with the associated probability space \(\big(\mathbb{R}^2,\mathcal{U}(\mathbb{R}^2),\mathcal{P}_{\theta_0\phi_0}\big)\) and characteristic function \(\mathcal{Y}_{\theta_0\phi_0}\);

c) \(V'(\cdot):\mathbb{R}\to\mathbb{R}\) is a measurable, deterministic function that can be expanded into a series of the form

\[
V'(\phi) = a_1\phi + a_3\phi^3 \tag{3}
\]

Applying Theorem 2.1 of Chapter 5 we will have the functional differential equations

\[
\frac{d}{dt}\,\frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\phi(t)}
= \frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\theta(t)} \tag{4a}
\]

\[
\frac{d}{dt}\,\frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\theta(t)}
+ 2b\,\frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\theta(t)}
+ a_1\,\frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\phi(t)}
- a_3\,\frac{\delta^3\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta u_\phi(t)^3}
= \frac{\delta\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)}{\delta v(t)} \tag{4b}
\]

where \(\mathcal{Y}_{F\phi\theta}(u_\phi,u_\theta,v)\) is the joint characteristic functional of the stochastic processes \(F_W,\phi,\theta\).

Concerning the initial conditions of the problem we have

\[
\mathcal{Y}_{F\phi\theta}\big(\upsilon_\phi\,\delta(\cdot-0),\,\upsilon_\theta\,\delta(\cdot-0),\,v\big)
=\int_{C_c^\infty}\int_{\mathbb{R}^2}
\exp\big( i\,\upsilon_\phi\,\phi_0 + i\,\upsilon_\theta\,\theta_0 + i\,\langle v,F\rangle \big)\,
\mathcal{P}_{\phi_0\theta_0 F}\big(d\phi_0\,d\theta_0\,dF\big),
\qquad (\upsilon_\phi,\upsilon_\theta)\in\mathbb{R}^2 \tag{6}
\]

that is, concentrating the arguments \(u_\phi,u_\theta\) at the initial time reduces the joint characteristic functional to the joint characteristic function of the initial conditions \(\big(\phi(0),\theta(0)\big)\) and the excitation.


PDE for the joint probability density function of the r.v.'s \(\big(\phi(t),\theta(t),F_W(t)\big)\)

Following the analysis of Chapter 5/Section 5.3.4, the functional differential equations (4a) and (4b) reduce to the PDE

\[
\frac{\partial\phi_{F\phi\theta}}{\partial t}
- \upsilon_\phi\,\frac{\partial\phi_{F\phi\theta}}{\partial\upsilon_\theta}
+ 2b\,\upsilon_\theta\,\frac{\partial\phi_{F\phi\theta}}{\partial\upsilon_\theta}
+ a_1\,\upsilon_\theta\,\frac{\partial\phi_{F\phi\theta}}{\partial\upsilon_\phi}
- a_3\,\upsilon_\theta\,\frac{\partial^3\phi_{F\phi\theta}}{\partial\upsilon_\phi^3}
- \upsilon_\theta\left.\frac{\partial\phi_{F\phi\theta}}{\partial\upsilon_F}\right|_{s=t}=0,
\qquad t\in I \tag{7}
\]

where \(\phi_{F\phi\theta}=\phi_{F\phi\theta}(\upsilon_\phi,t_1;\,\upsilon_\theta,t_2;\,\upsilon_F,t_3)\) is the joint characteristic function of the random vector \(\big(\phi(t_1),\theta(t_2),F_W(t_3)\big)\), evaluated at \(t_1=t_2=t\) and \(t_3=s\).

Moreover the additional condition holds

\[
\phi_{F\phi\theta}(0,t_1;\,0,t_2;\,\upsilon_F,t)=\phi_F(\upsilon_F,t)=\mathcal{Y}_F\big(\upsilon_F\,\delta(\cdot-t)\big),
\qquad \upsilon_F\in\mathbb{R} \tag{8}
\]

and the initial condition

\[
\phi_{F\phi\theta}(\upsilon_\phi,0;\,\upsilon_\theta,0;\,0,t)=\phi_{\phi_0\theta_0}(\upsilon_\phi,\upsilon_\theta),
\qquad (\upsilon_\phi,\upsilon_\theta)\in\mathbb{R}^2 \tag{9}
\]

Hence we have derived a partial differential equation for the joint characteristic function of \(\big(\phi(t),\theta(t),F_W(t)\big)\).

An alternative form of eqs. (7)-(9) can be derived by taking their Fourier transform. In this case we will have

\[
\frac{\partial f_{F\phi\theta}}{\partial t}
+ \theta\,\frac{\partial f_{F\phi\theta}}{\partial\phi}
+ \frac{\partial}{\partial\theta}\Big[\big(F - 2b\,\theta - a_1\phi - a_3\phi^3\big)\,f_{F\phi\theta}\Big]_{s=t}=0,
\qquad t\in I \tag{10}
\]

where \(f_{F\phi\theta}=f_{F\phi\theta}(\phi,t_1;\,\theta,t_2;\,F,t_3)\) is the joint probability density function of the random vector \(\big(\phi(t_1),\theta(t_2),F_W(t_3)\big)\).

Moreover the additional condition holds

\[
\iint f_{F\phi\theta}(\phi,t_1;\,\theta,t_2;\,F,t)\,d\phi\,d\theta
= f_F(F,t)
= \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-i\nu F}\,\mathcal{Y}_F\big(\nu\,\delta(\cdot-t)\big)\,d\nu \tag{11}
\]

and the initial condition

\[
\int f_{F\phi\theta}(\phi,0;\,\theta,0;\,F,t)\,dF
= f_{\phi_0\theta_0}(\phi,\theta)
= \mathcal{F}^{-1}\big[\mathcal{Y}_{\phi_0\theta_0}\big](\phi,\theta),
\qquad (\phi,\theta)\in\mathbb{R}^2 \tag{12}
\]

Using this information, we can additionally determine the probability structure of the second derivative \(\ddot\phi(t)\) by means of a simple calculation using eq. (4b). Having this information we can calculate the probability of capsizing, thus determining the ship's stability and safety.
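The final "probability of crossing" step can also be illustrated by brute-force Monte Carlo: simulate many independent realizations of system (2) and count those whose roll angle exceeds a critical value. All numerical values below (coefficients, spectrum, critical angle) are illustrative assumptions, and a hardening restoring term is used to keep the integration bounded:

```python
import numpy as np

def crossing_probability(n_runs=200, T=100.0, dt=0.02, phi_crit=1.2, seed=3):
    """Monte Carlo estimate of P{ max |phi(t)| > phi_crit } for system (2)."""
    rng = np.random.default_rng(seed)
    b, a1, a3 = 0.05, 1.0, 0.2                         # illustrative coefficients
    w = np.linspace(0.05, 4.0, 100)
    dw = w[1] - w[0]
    amp = np.sqrt(2 * 0.02 * np.exp(-(w - 1.0) ** 2) * dw)
    eps = rng.uniform(0, 2 * np.pi, (n_runs, w.size))  # fresh phases per realization
    phi = np.zeros(n_runs)
    theta = np.zeros(n_runs)
    crossed = np.zeros(n_runs, dtype=bool)
    for k in range(int(T / dt)):                       # all realizations stepped together
        F = (amp * np.cos(w * (k * dt) + eps)).sum(axis=1)
        phi, theta = (phi + dt * theta,
                      theta + dt * (F - 2 * b * theta - a1 * phi - a3 * phi ** 3))
        crossed |= np.abs(phi) > phi_crit
    return crossed.mean()

p = crossing_probability()
print(p)   # crude estimate of the crossing probability
```

Such an estimate serves only as a reference against which the analytical characteristic-functional approach can be checked.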


References

BELENKY, L.V. & SEVASIANOV, N.B., 2003, Stability and Safety of Ships, Volume II: Risk of Capsizing. Elsevier.
GERMANISCHER LLOYD, 1992, Vorschriften und Richtlinien des Germanischen Lloyd. Volume 1: Schiffstechnik, Teil 1, Kapitel 1. Selbstverlag des Germanischen Lloyd, Hamburg.
INTERNATIONAL MARITIME ORGANIZATION, 1995, Code on Intact Stability for All Types of Ships Covered by IMO Instruments. Resolution A.749(18). IMO, London.
KREUZER, E. & WENDT, M., 2000, Ship Capsizing Analysis Using Advanced Hydrodynamic Modelling. Philosophical Transactions of the Royal Society London, Series A, 358, 1835-1851.
SPYROU, K.J. & THOMPSON, J.M.T. (eds.), 2000, The Nonlinear Dynamics of Ships. Philosophical Transactions of the Royal Society London, Series A (Theme Issue), 358.
SPYROU, K.J. & THOMPSON, J.M.T., 2000, The Nonlinear Dynamics of Ship Motions: A Field Overview and Some Recent Developments. Philosophical Transactions of the Royal Society London, Series A, 358, 1735-1760.