Stochastic Analysis
Prof. Dr. Nina Gantert
Lecture at TUM in WS 2011/2012
June 13, 2012
Produced by Leopold von Bonhorst and Nina Gantert
Contents
1 Definition and Construction of Brownian Motion
2 Some properties of Brownian Motion
3 The Cameron-Martin Theorem and the Paley-Wiener stochastic integral
4 Brownian Motion as a continuous martingale
5 Stochastic integrals with respect to Brownian Motion
6 Ito's formula and examples
7 Pathwise stochastic integration with respect to continuous semimartingales
8 Cross-variation and Ito's product rule
9 Stochastic Differential Equations
10 Girsanov transforms
1 Definition and Construction of Brownian Motion
1.1 Historic origin
Brown (1827): Movement of pollen grains in a liquid
Bachelier (1900): Model for stock market fluctuations
Einstein (1905): Motion of a particle
1.2 Heuristic description with symmetric random walks
Let $Y_1, Y_2, \dots$ be iid with $P[Y_i = 1] = \frac{1}{2} = P[Y_i = -1]$, and set $S_k = \sum_{i=1}^{k} Y_i$, $k = 1, 2, \dots$, $S_0 = 0$. Fix $N \in \mathbb{N}$.
Rescale the process to the time interval $[0,1]$:
$$X_{k/N} = \frac{1}{\sqrt{N}} S_k, \quad k = 0, 1, 2, \dots, N.$$
Then,
(i) $X_0 = 0$.
(ii) For $0 \le t_0 < t_1 < t_2 < \dots < t_m \le 1$, $t_i = \frac{k_i}{N}$, $k_i \in \{0, 1, \dots, N\}$, the increments $X_{t_i} - X_{t_{i-1}}$ are independent with $E[X_{t_i} - X_{t_{i-1}}] = 0$ and
$$\mathrm{Var}(X_{t_i} - X_{t_{i-1}}) = \mathrm{Var}\Big(\frac{1}{\sqrt{N}} \sum_{j=k_{i-1}+1}^{k_i} Y_j\Big) = \frac{1}{N}(k_i - k_{i-1}).$$
Due to the CLT, the laws of $X_{t_i} - X_{t_{i-1}}$ converge to $N(0, t_i - t_{i-1})$ for $N \to \infty$ (and $\frac{k_i}{N} \to t_i \in [0,1]$).
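This heuristic is easy to check by simulation. A minimal sketch (assuming NumPy is available) estimating the mean and variance of one rescaled increment:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_paths = 400, 20000

# Simulate n_paths symmetric random walks of N steps each.
steps = rng.choice([-1.0, 1.0], size=(n_paths, N))
S = np.cumsum(steps, axis=1)
X = np.concatenate([np.zeros((n_paths, 1)), S], axis=1) / np.sqrt(N)  # X_{k/N}

# Increment over [1/4, 3/4]: mean 0 and variance (k_i - k_{i-1}) / N = 1/2.
inc = X[:, 3 * N // 4] - X[:, N // 4]
print(round(inc.mean(), 3), round(inc.var(), 3))
```

The empirical mean and variance should be close to $0$ and $0.5$, in line with the CLT heuristic.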
This motivates the following definition:
1.3 Basic Definitions
Definition 1.1 Brownian Motion (BM) is a stochastic process $B_t(\omega)$ $(0 \le t \le 1)$ on a probability space $(\Omega, \mathcal{A}, P)$ such that
(i) $B_0 = 0$ $P$-a.s.
(ii) For $0 \le t_0 < t_1 < t_2 < \dots < t_n \le 1$, the increments $B_{t_i} - B_{t_{i-1}}$, $1 \le i \le n$, are independent with law $N(0, t_i - t_{i-1})$.
(iii) $t \mapsto B_t(\omega)$ is continuous for $P$-a.a. $\omega$.
Definition 1.2 Let $(B_t)_{0 \le t \le 1}$ be a BM on the probability space $(\Omega, \mathcal{A}, P)$. Then, the image measure of $P$ under the map
$$\Omega \to C[0,1], \quad \omega \mapsto (B_t(\omega))_{0 \le t \le 1}$$
is the Wiener measure.
The Wiener measure is a probability measure on $(C[0,1], \mathcal{F})$, with $\mathcal{F} = \sigma(g_t : 0 \le t \le 1)$, where $g_t : C[0,1] \to \mathbb{R}$, $g_t(x) = x(t)$ $(x \in C[0,1])$.
Interpretation:
Def. 1.1: t → Bt(ω) is a stochastic evolution in time.
Def. 1.2: (Bt)0≤t≤1 is a random variable with values in C[0, 1].
Theorem 1.3 Brownian motion exists.
There are different proofs of Theorem 1.3.
We give here a proof which constructs "linear interpolations" on the sets $D_n = \{\frac{k}{2^n} : 0 \le k \le 2^n\}$. We follow the proof of Theorem 1.3 in [4].
Proof: Let $D = \bigcup_n D_n$ and let $(\Omega, \mathcal{A}, P)$ be a probability space such that $Z_t$, $t \in D$, are iid random variables on $(\Omega, \mathcal{A}, P)$ with law $N(0,1)$.
Let $B_0 := 0$ and $B_1 := Z_1$. For each $n \in \mathbb{N}$, we define the random variables $B_s$, $s \in D_n$, such that:
(1) For $r < s < t$, $r, s, t \in D_n$, $B_t - B_s$ is independent of $B_s - B_r$, and $B_t - B_s$ has the law $N(0, t-s)$.
(2) The vectors $(B_s, s \in D_n)$ and $(Z_t, t \in D \setminus D_n)$ are independent.
For $D_0 = \{0, 1\}$, we are done.
Proceeding inductively, assume that we have carried out the construction for some $n-1$. We then define $B_s$ for $s \in D_n \setminus D_{n-1}$ by
$$B_s = \frac{1}{2}\big(B_{s - 2^{-n}} + B_{s + 2^{-n}}\big) + \frac{1}{2^{(n+1)/2}} Z_s.$$
The first term is the linear interpolation of $B$ at the neighboring points of $s$ in $D_{n-1}$. Therefore, $B_s$ is independent of $(Z_t, t \in D \setminus D_n)$ and (2) is satisfied.
Moreover, since
$$\frac{1}{2}\big(B_{s + 2^{-n}} - B_{s - 2^{-n}}\big)$$
depends only on $(Z_t, t \in D_{n-1})$, it is independent of $2^{-(n+1)/2} Z_s$. By the induction assumption, both terms have law $N(0, 2^{-(n+1)})$. Hence, their sum $B_s - B_{s - 2^{-n}}$ and their difference $B_{s + 2^{-n}} - B_s$ are iid with law $N(0, 2^{-n})$.
Exercise:
Let $X$ and $Y$ be iid random variables with law $N(0, \sigma^2)$. Then $X + Y$ and $X - Y$ are iid random variables with law $N(0, 2\sigma^2)$.
To see that all increments $B_s - B_{s - 2^{-n}}$, $s \in D_n \setminus \{0\}$, are independent, it suffices to show that they are pairwise independent, since the vector of increments is Gaussian. We saw that $B_s - B_{s - 2^{-n}}$ and $B_{s + 2^{-n}} - B_s$ (with $s \in D_n \setminus D_{n-1}$) are independent. The other possibility is that the increments are over intervals separated by some $s \in D_{n-1}$. Choose $s \in D_j$ with this property and $j$ minimal, so that the two intervals are contained in $[s - 2^{-j}, s]$ and $[s, s + 2^{-j}]$.
By the induction hypothesis, the increments over these two intervals of length $2^{-j}$ are independent, and the increments over the intervals of length $2^{-n}$ are constructed from the independent increments $B_s - B_{s - 2^{-j}}$ and $B_{s + 2^{-j}} - B_s$, respectively, using disjoint sets of random variables $(Z_t, t \in D_n)$.
Hence they are independent, so (1) is satisfied. This completes the induction.
Now, we interpolate between the dyadic points. More precisely, let
$$f_0(t) = \begin{cases} Z_1 & t = 1 \\ 0 & t = 0 \\ \text{linear} & \text{in between,} \end{cases}$$
and for each $n \ge 1$,
$$f_n(t) = \begin{cases} 2^{-(n+1)/2} Z_t & t \in D_n \setminus D_{n-1} \\ 0 & t \in D_{n-1} \\ \text{linear} & \text{between consecutive points in } D_n. \end{cases}$$
$f_0, f_1, f_2, \dots$ are continuous functions and, for all $n$ and $s \in D_n$,
$$B_s := \sum_{j=0}^{n} f_j(s) = \sum_{j=0}^{\infty} f_j(s). \tag{1.1}$$
We prove (1.1) by induction. (1.1) holds for $n = 0$. Suppose it holds for $n-1$. Let $s \in D_n \setminus D_{n-1}$. Since for $0 \le j \le n-1$ the function $f_j$ is linear on $[s - 2^{-n}, s + 2^{-n}]$, we get
$$\sum_{j=0}^{n-1} f_j(s) = \sum_{j=0}^{n-1} \frac{f_j(s - 2^{-n}) + f_j(s + 2^{-n})}{2} = \frac{1}{2}\big(B_{s - 2^{-n}} + B_{s + 2^{-n}}\big).$$
Since $f_n(s) = 2^{-(n+1)/2} Z_s$, this gives (1.1).
Since $Z_s \stackrel{d}{=} N(0,1)$, we have for $c > 1$ and $n$ large enough
$$P[|Z_s| \ge c\sqrt{n}] \le e^{-\frac{c^2 n}{2}}$$
(since $\int_x^\infty e^{-\frac{u^2}{2}}\, du \le \frac{1}{x} e^{-\frac{x^2}{2}}$; proof: exercise).
Hence the series
$$\sum_{n=0}^{\infty} P[\exists s \in D_n \text{ with } |Z_s| \ge c\sqrt{n}] \le \sum_{n=0}^{\infty} \sum_{s \in D_n} P[|Z_s| \ge c\sqrt{n}] \le \sum_{n=0}^{\infty} (2^n + 1) e^{-\frac{c^2 n}{2}}$$
converges if $c > \sqrt{2 \log 2}$. Fix $c > \sqrt{2 \log 2}$.
Apply the Borel-Cantelli lemma: there is $N_0(\omega) < \infty$ such that for $n \ge N_0(\omega)$ and $s \in D_n$ we have $|Z_s| < c\sqrt{n}$. Hence, for $n \ge N_0(\omega)$, $\|f_n\|_\infty < c\sqrt{n}\, 2^{-n/2}$.
Since $\sum_n c\sqrt{n}\, 2^{-n/2} < \infty$, for $P$-a.a. $\omega$ the sequence $B^m_t = \sum_{n=0}^{m} f_n(t)$ converges uniformly in $t \in [0,1]$ as $m \to \infty$, so $B_t := \lim_{m \to \infty} B^m_t$ is continuous in $t$.
We check that the increments of $B$ have the right finite-dimensional distributions. Assume $0 \le t_1 < t_2 < \dots < t_n \le 1$. Then we find $0 \le t_{1,k} \le t_{2,k} \le \dots \le t_{n,k} \le 1$ with $t_{i,k} \in D$ and $\lim_{k \to \infty} t_{i,k} = t_i$, and since $t \mapsto B_t$ is continuous for $P$-a.a. $\omega$,
$$B_{t_{i+1}} - B_{t_i} = \lim_{k \to \infty} \big(B_{t_{i+1,k}} - B_{t_{i,k}}\big) \quad P\text{-a.s.}$$
Since
$$\lim_{k \to \infty} E[B_{t_{i+1,k}} - B_{t_{i,k}}] = 0 \quad \text{and}$$
$$\lim_{k \to \infty} \mathrm{Cov}\big(B_{t_{i+1,k}} - B_{t_{i,k}},\, B_{t_{j+1,k}} - B_{t_{j,k}}\big) = \lim_{k \to \infty} I_{\{i=j\}}(t_{i+1,k} - t_{i,k}) = I_{\{i=j\}}(t_{i+1} - t_i),$$
the increments $B_{t_{i+1}} - B_{t_i}$, $i = 1, 2, \dots, n$, are independent Gaussian random variables with mean $0$ and variance $t_{i+1} - t_i$, using Lemma 1.4 below.
Lemma 1.4 Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of Gaussian random vectors with $\lim_{n \to \infty} X_n = X$ $P$-a.s. If $b := \lim_{n \to \infty} E[X_n]$ and $C := \lim_{n \to \infty} \mathrm{Cov}(X_n)$ exist, then $X$ is Gaussian with mean $b$ and covariance matrix $C$.
Proof: See [4], Prop. 12.15. -
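The inductive construction above can be sketched in code. A simplified version (NumPy assumed; `levy_bm` is an illustrative name) refines one dyadic level at a time and checks the BM covariance $\mathrm{Cov}(B_s, B_t) = s \wedge t$ empirically:

```python
import numpy as np

def levy_bm(levels, rng):
    """Dyadic construction of BM on [0,1]: returns B at the points k/2^levels."""
    # Level 0: B_0 = 0, B_1 = Z_1.
    B = np.array([0.0, rng.standard_normal()])
    for n in range(1, levels + 1):
        # Midpoints: linear interpolation plus an independent 2^{-(n+1)/2} Z_s.
        mid = 0.5 * (B[:-1] + B[1:]) + rng.standard_normal(B.size - 1) / 2 ** ((n + 1) / 2)
        out = np.empty(2 * B.size - 1)
        out[0::2], out[1::2] = B, mid
        B = out
    return B

rng = np.random.default_rng(1)
paths = np.array([levy_bm(4, rng) for _ in range(20000)])
t = np.linspace(0.0, 1.0, paths.shape[1])
# Var(B_{1/2}) should be 1/2; Cov(B_{1/4}, B_{3/4}) should be 1/4.
v_half = paths[:, len(t) // 2].var()
cov = np.mean(paths[:, len(t) // 4] * paths[:, 3 * len(t) // 4])
print(round(v_half, 3), round(cov, 3))
```

Only a few refinement levels are used here; the construction in the proof continues to all levels and then interpolates.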
Definition 1.5 A stochastic process $(B_t)_{t \ge 0}$ on some probability space $(\Omega, \mathcal{A}, P)$ is a Brownian Motion if:
(i) $B_0 = 0$ $P$-a.s.
(ii) For $0 \le t_0 < t_1 < \dots < t_n$, the increments $B_{t_i} - B_{t_{i-1}}$ are independent with law $N(0, t_i - t_{i-1})$.
(iii) $t \mapsto B_t(\omega)$ is continuous for $P$-a.a. $\omega$.
We obtain $(B_t)_{t \ge 0}$ from a sequence of iid BMs $(B^i_t)_{0 \le t \le 1}$, $i = 0, 1, 2, \dots$, as follows:
$$B_t = B^{\lfloor t \rfloor}_{t - \lfloor t \rfloor} + \sum_{i=0}^{\lfloor t \rfloor - 1} B^i_1 \quad (t \ge 0),$$
i.e. by glueing the paths $(B^i_t)_{0 \le t \le 1}$ together. Then $(B_t)_{t \ge 0}$ is a BM. Proof: exercise.
Definition 1.6 A stochastic process $(V_t)_{t \ge 0}$ is a Gaussian process if for all $t_1 < t_2 < \dots < t_n$ the vector $(V_{t_1}, \dots, V_{t_n})$ is a Gaussian random vector.
$(B_t)_{t \ge 0}$ is a Gaussian process. Proof: See exercises.
2 Some properties of Brownian Motion
The paths of BM are random fractals in the following sense:
Lemma 2.1 (Scaling invariance)
Let $(B_t)_{t \ge 0}$ be a BM and let $a > 0$. Then the process $(X_t)_{t \ge 0}$ given by $X_t = \frac{1}{a} B_{a^2 t}$ $(t \ge 0)$ is also a BM.
Proof: Independence and stationarity of the increments and continuity of the paths persist under the scaling. It remains to show that $X_t - X_s = \frac{1}{a}(B_{a^2 t} - B_{a^2 s})$ has the law $N(0, t-s)$. But $X_t - X_s$ is a Gaussian RV with expectation $0$ and variance $\frac{1}{a^2} E[(B_{a^2 t} - B_{a^2 s})^2] = \frac{1}{a^2} a^2 (t-s) = t - s$. -
Example 2.2 (1) Let $a < 0 < b$ and consider the first exit time of BM from the interval $(a,b)$:
$$T_{a,b} := \inf\{t \ge 0 : B_t \in \{a, b\}\}.$$
Then, with $X_t = \frac{1}{|a|} B_{a^2 t}$,
$$E[T_{a,b}] = a^2\, E\big[\inf\{t \ge 0 : X_t \in \{-1, \tfrac{b}{|a|}\}\}\big] = a^2\, E[T_{-1,\, b/|a|}].$$
In particular, $E[T_{-b,b}] = b^2\, E[T_{-1,1}] = \mathrm{const} \cdot b^2$.
(2) Ruin probabilities:
$$P[(B_t)_{t \ge 0} \text{ exits } (a,b) \text{ at } a] = P\big[(X_t)_{t \ge 0} \text{ exits } (-1, \tfrac{b}{|a|}) \text{ at } -1\big]$$
depends only on the ratio $\frac{b}{|a|}$.
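The scaling prediction can be checked by a rough Monte Carlo experiment. The sketch below (NumPy assumed; the BM is discretized, so the estimates are only approximate) compares $(a,b) = (-1,2)$ and $(-2,4)$, which have the same ratio $b/|a| = 2$; the known exact exit probability at $a$ is $b/(b-a) = 2/3$ in both cases:

```python
import numpy as np

def exit_at_lower(a, b, dt, n_paths, rng):
    """Monte Carlo estimate of P[discretized BM exits (a, b) at a]."""
    x = np.zeros(n_paths)
    exited_low = np.zeros(n_paths, dtype=bool)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        # Advance only the paths still inside (a, b) by a N(0, dt) increment.
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        low = alive & (x <= a)
        high = alive & (x >= b)
        exited_low |= low
        alive &= ~(low | high)
    return exited_low.mean()

rng = np.random.default_rng(2)
p1 = exit_at_lower(-1.0, 2.0, 2e-3, 4000, rng)
p2 = exit_at_lower(-2.0, 4.0, 2e-3, 4000, rng)
print(round(p1, 2), round(p2, 2))
```

Both estimates should be close to $2/3$, illustrating that only the ratio $b/|a|$ matters.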
Theorem 2.3 (Time inversion)
Let $(B_t)_{t \ge 0}$ be a BM. Then the process $(X_t)_{t \ge 0}$ given by
$$X_t = \begin{cases} 0 & t = 0 \\ t B_{1/t} & t > 0 \end{cases}$$
is again a BM.
Proof: Recall that $(B_{t_1}, \dots, B_{t_n})$, $0 \le t_1 < t_2 < \dots \le t_n$, are Gaussian random vectors and are therefore characterized by their expectations and their covariances
$$\mathrm{Cov}(B_{t_i}, B_{t_j}) = t_i \wedge t_j. \tag{2.1}$$
Proof of (2.1): Let $t_i < t_j$. Then
$$E[B_{t_i} B_{t_j}] = E[B_{t_i}(B_{t_j} - B_{t_i})] + E[B_{t_i}^2] = 0 + t_i = t_i.$$
$(X_t)_{t \ge 0}$ is also a Gaussian process (check!) and the Gaussian random vectors $(X_{t_1}, \dots, X_{t_n})$ have expectations $E[X_{t_i}] = 0$, $1 \le i \le n$. For $t > 0$, $h \ge 0$, the covariance of $X_t$ and $X_{t+h}$ is given by
$$\mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Cov}\big(t B_{1/t}, (t+h) B_{1/(t+h)}\big) = t(t+h)\, \mathrm{Cov}\big(B_{1/t}, B_{1/(t+h)}\big) = t(t+h) \cdot \frac{1}{t+h} = t.$$
Hence, the laws of all the finite-dimensional vectors $(X_{t_1}, \dots, X_{t_n})$, $0 \le t_1 < t_2 < \dots \le t_n$, are the same as for BM. The paths $t \mapsto X_t$ are clearly continuous for all $t > 0$ (for $P$-a.a. $\omega$).
For $t = 0$, we use the following two facts:
(1) Since $\mathbb{Q}$ is countable, $(X_t, t \ge 0, t \in \mathbb{Q})$ has the same law as $(B_t, t \ge 0, t \in \mathbb{Q})$, so $\lim_{t \searrow 0, t \in \mathbb{Q}} X_t = 0$ $P$-a.s.
(2) $\mathbb{Q} \cap (0, \infty)$ is dense in $(0, \infty)$ and $(X_t)_{t \ge 0}$ is continuous on $(0, \infty)$ (for $P$-a.a. $\omega$), so that $\lim_{t \to 0} X_t = \lim_{t \searrow 0, t \in \mathbb{Q}} X_t = 0$ $P$-a.s.
-
Example 2.4 (Ornstein-Uhlenbeck process)
Let $(B_t)_{t \ge 0}$ be a BM, and set $X_t = e^{-t} B_{e^{2t}}$ $(t \in \mathbb{R})$. Then $X_t \stackrel{d}{=} N(0,1)$ for all $t$.
Proof: $X_t$ is a Gaussian RV with $E[X_t] = 0$, $\mathrm{Var}(X_t) = e^{-2t} e^{2t} = 1$. -
Further, $(X_{-t})_{t \in \mathbb{R}}$ has the same law as $(X_t)_{t \in \mathbb{R}}$.
Proof: Set $\widetilde{X}_t = X_{-t}$ $(t \in \mathbb{R})$ and
$$\widetilde{B}_t = \begin{cases} t B_{1/t} & t > 0 \\ 0 & t = 0. \end{cases}$$
Then $\widetilde{X}_t = e^t B_{e^{-2t}} = e^t e^{-2t} \widetilde{B}_{e^{2t}} = e^{-t} \widetilde{B}_{e^{2t}}$.
Since $(\widetilde{B}_t)_{t \ge 0}$ is a BM, $(\widetilde{X}_t)_{t \in \mathbb{R}} \stackrel{d}{=} (X_t)_{t \in \mathbb{R}}$. -
$(X_t)_{t \in \mathbb{R}}$ is a Gaussian process with $E[X_t] = 0$ for all $t$ and
$$\mathrm{Cov}(X_s, X_t) = E[X_s X_t] = e^{-(s+t)} E[B_{e^{2s}} B_{e^{2t}}] \stackrel{(2.1)}{=} e^{-(s+t)} e^{2(s \wedge t)} = e^{-|t-s|}.$$
Later we will see that $(\frac{1}{\sqrt{2}} X_t)_{t \ge 0}$ is a (weak) solution of the stochastic differential equation
$$dX_t = dB_t - X_t\, dt.$$
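The covariance identity can be checked by simulation. A sketch (NumPy assumed; `ou_samples` is an illustrative helper) sampling $X_t = e^{-t} B_{e^{2t}}$ at two times via Gaussian BM increments:

```python
import numpy as np

def ou_samples(times, n_paths, rng):
    """Sample X_t = e^{-t} B_{e^{2t}} at the given (sorted) times."""
    bm_times = np.exp(2 * np.array(times))
    dt = np.diff(np.concatenate([[0.0], bm_times]))
    # Independent N(0, dt) increments give B at the times e^{2t}.
    inc = rng.standard_normal((n_paths, len(times))) * np.sqrt(dt)
    B = np.cumsum(inc, axis=1)
    return B * np.exp(-np.array(times))

rng = np.random.default_rng(3)
s, t = 0.2, 0.9
X = ou_samples([s, t], 200000, rng)
var_s = X[:, 0].var()
cov_st = np.mean(X[:, 0] * X[:, 1])
# Expect Var(X_s) = 1 and Cov(X_s, X_t) = e^{-|t-s|}.
print(round(var_s, 3), round(cov_st, 3))
```

The empirical values should be close to $1$ and $e^{-0.7} \approx 0.497$.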
Corollary 2.5 (Law of large numbers)
Let $(B_t)_{t \ge 0}$ be a BM. Then
$$\lim_{t \to \infty} \frac{1}{t} B_t = 0 \quad P\text{-a.s.}$$
Remark: $\lim_{n \to \infty} \frac{1}{n} B_n = 0$ $P$-a.s., since $B_n = \sum_{i=1}^{n} (B_i - B_{i-1}) = \sum_{i=1}^{n} Y_i$ with $(Y_i)_{i \ge 1}$ iid with law $N(0,1)$.
Proof: Let
$$X_t = \begin{cases} t B_{1/t} & t > 0 \\ 0 & t = 0. \end{cases}$$
Then $\lim_{t \to \infty} \frac{1}{t} B_t = \lim_{t \to \infty} X_{1/t} = X_0 = 0$ $P$-a.s. -
Remark 2.6 The law of the iterated logarithm says that
$$\limsup_{t \to \infty} \frac{B_t}{\sqrt{2t \log \log t}} = 1 \quad P\text{-a.s.} \tag{2.2}$$
$$\liminf_{t \to \infty} \frac{B_t}{\sqrt{2t \log \log t}} = -1 \quad P\text{-a.s.} \tag{2.3}$$
In particular,
$$\limsup_{t \to \infty} \frac{B_t}{\sqrt{t}} = +\infty, \quad \liminf_{t \to \infty} \frac{B_t}{\sqrt{t}} = -\infty \quad P\text{-a.s.}$$
Theorem 2.7 (Paley, Wiener, Zygmund)
Let $(B_t)_{t \ge 0}$ be a BM. Then
$$P[\omega : t \mapsto B_t(\omega) \text{ is nowhere differentiable}] = 1.$$
Proof: Assume that there is $t_0 \in [0,1]$ such that $t \mapsto B_t(\omega)$ is differentiable in $t_0$. Then there is a constant $M < \infty$ such that
$$\sup_{s \in [0,1]} \left| \frac{B_{t_0+s} - B_{t_0}}{s} \right| \le M. \tag{2.4}$$
If $t_0 \in \left[\frac{k-1}{2^n}, \frac{k}{2^n}\right]$ for some $n > 2$, $k \le 2^n$, then we have for $1 \le j \le n$
$$\left| B_{\frac{k+j}{2^n}} - B_{\frac{k+j-1}{2^n}} \right| \le M(2j+1)\frac{1}{2^n}. \tag{2.5}$$
Proof of (2.5):
$$\left| B_{\frac{k+j}{2^n}} - B_{\frac{k+j-1}{2^n}} \right| \le \left| B_{\frac{k+j}{2^n}} - B_{t_0} \right| + \left| B_{t_0} - B_{\frac{k+j-1}{2^n}} \right| \le M\frac{j+1}{2^n} + M\frac{j}{2^n} \le M(2j+1)\frac{1}{2^n}.$$
Let $A_{n,k} \subseteq C[0,1]$ be the collection of functions satisfying (2.5) for $j = 1, 2, 3$.
Claim:
$$P[A_{n,k}] \le P\left[|B_1| \le \frac{7M}{\sqrt{2^n}}\right]^3. \tag{2.6}$$
Proof of (2.6):
$$B_{\frac{k+j}{2^n}} - B_{\frac{k+j-1}{2^n}} \stackrel{d}{=} N\left(0, \frac{1}{2^n}\right) \stackrel{d}{=} \frac{1}{\sqrt{2^n}} B_1$$
and $B_{\frac{k+j}{2^n}} - B_{\frac{k+j-1}{2^n}}$, $j = 1, 2, 3$, are independent. Further,
$$P\left[|B_1| \le \frac{7M}{\sqrt{2^n}}\right]^3 \le \left(\frac{7M}{\sqrt{2^n}}\right)^3.$$
Hence,
$$P\left[\bigcup_{k=1}^{2^n} A_{n,k}\right] \le 2^n \left(\frac{7M}{\sqrt{2^n}}\right)^3 = \frac{(7M)^3}{\sqrt{2^n}} \quad \Rightarrow \quad \sum_{n=2}^{\infty} P\left[\bigcup_{k=1}^{2^n} A_{n,k}\right] < \infty.$$
Therefore, using the Borel-Cantelli lemma,
$$P[(2.4) \text{ holds for some } t_0 \in [0,1]] \le P\left[\bigcup_{k=1}^{2^n} A_{n,k} \text{ happens for infinitely many } n\right] = P\left[\bigcap_{m=2}^{\infty} \bigcup_{n=m}^{\infty} \bigcup_{k=1}^{2^n} A_{n,k}\right] = 0.$$
-
Corollary 2.8 For P-a.a. ω, the function t 7→ Bt(ω) is not of bounded variation on any
interval.
Recall that for $g$ right-continuous on $[a,b]$, we set
$$V_g[a,b] = \sup\left\{\sum_{k=1}^{m} |g(t_k) - g(t_{k-1})| : a \le t_0 < t_1 < \dots < t_m \le b,\ m \in \mathbb{N}\right\}.$$
$V_g[a,b]$ is the variation of $g$ on $[a,b]$. We say that $g$ is of bounded variation (BV) on $[a,b]$ if $V_g[a,b] < \infty$.
Example: If $t \mapsto g(t)$ is increasing on $[a,b]$, then $g$ is BV on $[a,b]$ and $V_g[a,b] = g(b) - g(a)$.
Lemma 2.9 Assume $g$ is BV on $[a,b]$ and right-continuous. Then there exist increasing, right-continuous functions $g_1, g_2 : [a,b] \to \mathbb{R}$ such that
$$g = g_1 - g_2. \tag{2.7}$$
Proof: See literature.
Proof of Corollary 2.8: A theorem by Lebesgue says that an increasing function $g : [a,b] \to \mathbb{R}$ is differentiable for $\lambda$-a.a. $s \in [a,b]$. By Lemma 2.9, a BV function is therefore differentiable $\lambda$-a.e.; so if $t \mapsto B_t(\omega)$ were BV on some interval, it would be differentiable somewhere, contradicting Theorem 2.7. -
Remark 2.10 Let $g$ be right-continuous and increasing on $[a,b]$. Then $g$ defines a measure $\nu_g$ on $[a,b]$ by
$$\nu_g((a_1, b_1]) = g(b_1) - g(a_1), \quad a \le a_1 < b_1 \le b.$$
If $g$ is BV on $[a,b]$ and $f \in C[a,b]$, we can define
$$\int_a^b f(s)\, dg(s) := \int_a^b f(s)\, \nu_{g_1}(ds) - \int_a^b f(s)\, \nu_{g_2}(ds)$$
(with $g_1, g_2$ from (2.7)). For a BM $(B_t)_{t \ge 0}$, this procedure cannot be applied; nevertheless, we will be able to define $\int_a^b f(s)\, dB_s$.
Remark 2.11 A continuous function $g$ which is BV on $[0,1]$ has quadratic variation $0$, i.e. for partitions $E_n \subseteq [0,1]$, $E_n = \{0, t_1, \dots, t_n, 1\}$ $(0 \le t_1 < \dots < t_n \le 1)$, we have
$$\sum_{t_i \in E_n} (g(t_{i+1}) - g(t_i))^2 \le \max_{t_i \in E_n} |g(t_{i+1}) - g(t_i)| \cdot \sum_{t_i \in E_n} |g(t_{i+1}) - g(t_i)| \xrightarrow{n \to \infty} 0$$
if $s(E_n) \xrightarrow{n \to \infty} 0$, where $s(E_n) := \sup_{t_i \in E_n} |t_{i+1} - t_i|$ is the mesh of $E_n$.
Theorem 2.12 Let $(B_t)_{t \ge 0}$ be a BM and let $(E_n)$ be a sequence of partitions with $s(E_n) \xrightarrow{n \to \infty} 0$ and $E_n \subseteq E_{n+1} \subseteq E_{n+2} \subseteq \dots$ Then, for all $t > 0$,
$$V^n_t := \sum_{t_i \in E_n,\ t_{i+1} \le t} (B_{t_{i+1}} - B_{t_i})^2 \xrightarrow{n \to \infty} t \quad P\text{-a.s. and in } L^2.$$
Proof:
(1) Convergence in $L^2$:
$$E[V^n_t] = \sum_{t_i \in E_n,\ t_{i+1} \le t} (t_{i+1} - t_i) \xrightarrow{n \to \infty} t.$$
Using the independence of the increments,
$$\mathrm{Var}(V^n_t) = \sum_{t_i \in E_n,\ t_{i+1} \le t} \mathrm{Var}\big((B_{t_{i+1}} - B_{t_i})^2\big) \stackrel{(*)}{=} \sum_{t_i \in E_n,\ t_{i+1} \le t} 2(t_{i+1} - t_i)^2,$$
so $\mathrm{Var}(V^n_t) \xrightarrow{n \to \infty} 0$, see Remark 2.11.
Proof of (*): $Y \stackrel{d}{=} N(0, \sigma^2)$ implies
$$\mathrm{Var}(Y^2) = E[Y^4] - E[Y^2]^2 = 3\sigma^4 - \sigma^4 = 2\sigma^4.$$
(2) We first show $P$-a.s. convergence along the dyadic partitions $E_n = \{0, \frac{1}{2^n}, \frac{2}{2^n}, \frac{3}{2^n}, \dots, 1\}$. For these partitions,
$$\mathrm{Var}(V^n_t) = 2 \cdot 2^n \left(\frac{1}{2^n}\right)^2 = \frac{2}{2^n},$$
so $\sum_{n=1}^{\infty} \mathrm{Var}(V^n_t) < \infty$. Using Lemma 2.13 below we conclude that $V^n_t \xrightarrow{n \to \infty} t$ $P$-a.s.
(3) For a general sequence of partitions, use (2) and an approximation argument. -
Lemma 2.13 Let $Y_1, Y_2, \dots$ be RVs with $\sum_{n=1}^{\infty} \mathrm{Var}(Y_n) < \infty$. Then $(Y_n - E[Y_n]) \xrightarrow{n \to \infty} 0$ $P$-a.s.
Proof: See exercises.
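The convergence $V^n_t \to t$ along dyadic partitions is easy to illustrate numerically. A minimal sketch (NumPy assumed) computing the quadratic variation of one sampled path at mesh $2^{-16}$:

```python
import numpy as np

def dyadic_quadratic_variation(path):
    """Sum of squared increments of a sampled path (V^n_1 for t = 1)."""
    return float(np.sum(np.diff(path) ** 2))

rng = np.random.default_rng(4)
n = 16
N = 2 ** n
# BM sampled at k/2^n via iid N(0, 2^-n) increments.
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) * np.sqrt(1.0 / N))])
V = dyadic_quadratic_variation(B)
print(round(V, 3))  # should be close to t = 1
```

Here $\mathrm{Var}(V^n_1) = 2/2^{16}$, so a single path already gives a value very close to $1$.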
3 The Cameron-Martin Theorem and the Paley-Wiener stochastic integral
We know several facts about paths of BM, for instance:
$$P[\exists t_0 \in [0,1] \text{ s.t. } t \mapsto B_t \text{ is differentiable in } t_0] = 0.$$
Do these properties remain true for $(B_t + ct)_{0 \le t \le 1}$ or, more generally, for $(B_t + h(t))_{0 \le t \le 1}$ where $h \in C[0,1]$?
We denote by
$$H = \left\{h \in C[0,1] : \text{there is } f \in L^2[0,1] \text{ s.t. } h(t) = \int_0^t f(s)\, ds,\ 0 \le t \le 1\right\}$$
the Cameron-Martin space (or Dirichlet space).
Given $h \in H$, $f$ is uniquely determined as an element of $L^2[0,1]$ and we write $h' = f$; the identity $h'(t) = f(t)$ holds $\lambda$-a.s. ($\lambda$ = Lebesgue measure on $[0,1]$).
Recall that for two measures $\mu, \nu$ on $(\Omega, \mathcal{A})$ we write $\mu \perp \nu$ and say "$\mu$ and $\nu$ are singular" if there is $A \in \mathcal{A}$ with $\mu(A) = 0$, $\nu(A^c) = 0$.
We write $\nu \ll \mu$ and say "$\nu$ is absolutely continuous w.r.t. $\mu$" if, for all $A \in \mathcal{A}$, $\mu(A) = 0 \Rightarrow \nu(A) = 0$.
We write $\nu \approx \mu$ and say "$\nu$ and $\mu$ are equivalent" if $\nu \ll \mu$ and $\mu \ll \nu$.
Let $\mu$ be the Wiener measure on $(C[0,1], \mathcal{F})$ and $\mu_h$ the law of $(B_t + h(t))_{0 \le t \le 1}$.
Theorem 3.1 Assume h ∈ C[0, 1] and h(0) = 0.
(i) If h /∈ H then µh ⊥ µ.
(ii) If h ∈ H then µh ≈ µ.
For the proof, we will need the following quantity:
$$Q_n(h) := 2^n \sum_{j=1}^{2^n} \left(h\left(\frac{j}{2^n}\right) - h\left(\frac{j-1}{2^n}\right)\right)^2 \quad (n = 1, 2, \dots). \tag{3.1}$$
Lemma 3.2 $Q_n(h)$, $n = 1, 2, \dots$, is an increasing sequence and
$$h \in H \iff \sup_n Q_n(h) < \infty.$$
Moreover, if $h \in H$, then
$$Q_n(h) \xrightarrow{n \to \infty} \int_0^1 h'(s)^2\, ds = \int_0^1 f(s)^2\, ds.$$
Proof: The general inequality $(a+b)^2 \le 2a^2 + 2b^2$ gives
$$\left[h\left(\frac{j}{2^n}\right) - h\left(\frac{j-1}{2^n}\right)\right]^2 \le 2\left[h\left(\frac{2j-1}{2^{n+1}}\right) - h\left(\frac{j-1}{2^n}\right)\right]^2 + 2\left[h\left(\frac{j}{2^n}\right) - h\left(\frac{2j-1}{2^{n+1}}\right)\right]^2.$$
Summing this inequality over $j \in \{1, 2, \dots, 2^n\}$ gives $Q_n(h) \le Q_{n+1}(h)$, i.e. $Q_n(h)$ is increasing in $n$. For $h \in H$ with $h' = f$ we have, using Jensen's inequality,
$$Q_n(h) = 2^n \sum_{j=1}^{2^n} \left(\int_{(j-1)/2^n}^{j/2^n} f(s)\, ds\right)^2 \le \sum_{j=1}^{2^n} \int_{(j-1)/2^n}^{j/2^n} f(s)^2\, ds = \int_0^1 f(s)^2\, ds.$$
Hence, $h \in H \Rightarrow \sup_n Q_n(h) < \infty$.
Proof of "$\Leftarrow$": Let $t \sim U([0,1])$. Then there exists a sequence of intervals $I_n(t) = [a_n, b_n] = \left[\frac{k_n-1}{2^n}, \frac{k_n}{2^n}\right]$ s.t. $t \in I_n(t)$ for all $n$. Given $I_1(t), \dots, I_n(t)$, the interval $I_{n+1}(t)$ is, with probability $\frac{1}{2}$, the left or the right half of $I_n(t)$. Let $M_n = M_n(t) = 2^n(h(b_n) - h(a_n))$; then $(M_n)_{n \ge 1}$ is a martingale w.r.t. $\sigma(I_1(t), \dots, I_n(t))$ (on $([0,1], \mathcal{B}_{[0,1]}, \lambda)$). Furthermore:
$$E[M_n^2] = 2^{2n} \sum_{k=1}^{2^n} \left(h\left(\frac{k}{2^n}\right) - h\left(\frac{k-1}{2^n}\right)\right)^2 \cdot \frac{1}{2^n} = Q_n(h).$$
If $\sup_n Q_n(h) < \infty$, then $(M_n)_{n \ge 1}$ is a martingale which is bounded in $L^2$, i.e. $\sup_n E[M_n^2] < \infty$. We prove later:
Lemma 3.3 Assume $(M_n)_{n \ge 1}$ is a martingale on $(\Omega, \mathcal{A}, P)$ and $(M_n)_{n \ge 1}$ is bounded in $L^2$. Then there exists a RV $X$ s.t.
$$M_n \xrightarrow{n \to \infty} X \quad P\text{-a.s. and in } L^2.$$
By Lemma 3.3, $M_n \to X$ a.s. and in $L^2$. Let
$$g(s) := \int_0^s X(t)\, dt.$$
For $j, m$ fixed, we have for $n \ge m$:
$$h\left(\frac{j}{2^m}\right) = \int_0^{j/2^m} M_n(t)\, dt \xrightarrow{n \to \infty} \int_0^{j/2^m} X(t)\, dt.$$
Hence $h\left(\frac{j}{2^m}\right) = g\left(\frac{j}{2^m}\right)$ for all $j, m$ and, by continuity, $g(s) = h(s)$ for all $s \in [0,1]$ and $h'(s) = X(s)$ $\lambda$-a.s.
Therefore $h \in H$ and
$$Q_n(h) = E[M_n^2] \xrightarrow{n \to \infty} E[X^2] = \int_0^1 (h'(t))^2\, dt.$$
-
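Lemma 3.2 can be checked numerically. A small sketch (NumPy assumed; `Q_n` is an illustrative helper) evaluates the quantity from (3.1) for a smooth $h \in H$ and for $h(t) = \sqrt{t}$, whose derivative $1/(2\sqrt{t})$ is not in $L^2[0,1]$:

```python
import numpy as np

def Q_n(h, n):
    """Q_n(h) = 2^n * sum_j (h(j/2^n) - h((j-1)/2^n))^2, cf. (3.1)."""
    pts = np.arange(2 ** n + 1) / 2 ** n
    return float(2 ** n * np.sum(np.diff(h(pts)) ** 2))

h_smooth = lambda t: t ** 2      # in H: h'(t) = 2t, and the integral of h'^2 is 4/3
h_rough = lambda t: np.sqrt(t)   # not in H

q_smooth = [Q_n(h_smooth, n) for n in (2, 4, 8, 12)]
q_rough = [Q_n(h_rough, n) for n in (2, 4, 8, 12)]
print([round(q, 4) for q in q_smooth])  # increasing, approaching 4/3
print([round(q, 2) for q in q_rough])   # increasing without bound
```

For $h(t) = t^2$ one can compute $Q_n(h) = (4 - 2^{-2n})/3$, which increases to $\int_0^1 (2t)^2\, dt = 4/3$, as the lemma predicts.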
Proof of Lemma 3.3: $M_n$ is bounded in $L^1$ and by the martingale convergence theorem (Theorem 14.13 in the Probability Theory lecture notes) $M_n \xrightarrow{n \to \infty} X$ $P$-a.s. with $X \in L^1$. We have for $m > n$:
$$E[(M_m - M_n)^2] = E[M_m^2] - E[M_n^2] \tag{3.2}$$
since
$$E[M_m M_n] = E[E[M_m M_n | \mathcal{A}_n]] = E[M_n E[M_m | \mathcal{A}_n]] = E[M_n^2].$$
Fatou's lemma together with (3.2) implies
$$E[(X - M_n)^2] \le E\left[\liminf_{m \to \infty} (M_m - M_n)^2\right] \le \liminf_{m \to \infty} E[M_m^2] - E[M_n^2].$$
The last expression tends to $0$ for $n \to \infty$, since $E[M_n^2]$ is increasing and bounded in $n$:
$$E[M_n^2] \stackrel{(3.2)}{=} \sum_{k=2}^{n} E[(M_k - M_{k-1})^2] + E[M_1^2].$$
-
Lemma 3.4 (The Paley-Wiener stochastic integral)
Let $(B_t)_{t \ge 0}$ be a BM and $h \in H$. Then
$$\xi_n := \sum_{j=1}^{2^n} 2^n \left(h\left(\frac{j}{2^n}\right) - h\left(\frac{j-1}{2^n}\right)\right)\left(B_{j/2^n} - B_{(j-1)/2^n}\right)$$
converges a.s. and in $L^2$. The limit is denoted by $\int_0^1 h'\, dB$.
Proof: Recall from the construction of BM that
$$B_{\frac{2j-1}{2^n}} = \frac{1}{2}\left(B_{\frac{2j-2}{2^n}} + B_{\frac{2j}{2^n}}\right) + \sigma_n Z_{\frac{2j-1}{2^n}}$$
where $\sigma_n = 2^{-(n+1)/2}$ and the $Z_t$ are iid standard normal.
Therefore:
$$\xi_n - \xi_{n-1} = 2^n \sigma_n \sum_{j=1}^{2^{n-1}} \left(2h\left(\frac{2j-1}{2^n}\right) - h\left(\frac{2j-2}{2^n}\right) - h\left(\frac{2j}{2^n}\right)\right) Z_{\frac{2j-1}{2^n}},$$
which implies that $(\xi_n)_{n \ge 1}$ is a martingale
$$\big(E[\xi_n | \mathcal{A}_{n-1}] = E[\xi_n - \xi_{n-1} | \mathcal{A}_{n-1}] + E[\xi_{n-1} | \mathcal{A}_{n-1}] = E[\xi_{n-1} | \mathcal{A}_{n-1}]\big).$$
Moreover,
$$E[\xi_n^2] = 2^{2n} \sum_{j=1}^{2^n} \left(h\left(\frac{j}{2^n}\right) - h\left(\frac{j-1}{2^n}\right)\right)^2 E\left[\left(B_{j/2^n} - B_{(j-1)/2^n}\right)^2\right] = Q_n(h).$$
Hence, for $h \in H$, the convergence follows from Lemma 3.3. -
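The approximating sums $\xi_n$ can be computed directly. A sketch (NumPy assumed; `xi_n` is an illustrative helper) using $h(t) = t$, where $h' = 1$ and the sum telescopes to $B_1$ for every $n$:

```python
import numpy as np

def xi_n(h, B_dyadic, n):
    """Partial sum xi_n of the Paley-Wiener integral for h, given B at k/2^n."""
    pts = np.arange(2 ** n + 1) / 2 ** n
    dh = np.diff(h(pts))
    dB = np.diff(B_dyadic)
    return float(np.sum(2 ** n * dh * dB))

rng = np.random.default_rng(5)
n = 12
N = 2 ** n
# BM sampled on the dyadic grid of level n.
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) / np.sqrt(N))])

# For h(t) = t: xi_n = sum of increments = B_1, so the limit is B_1 exactly.
val = xi_n(lambda t: t, B, n)
print(abs(val - B[-1]) < 1e-9)
```

For general $h \in H$, the lemma says these sums converge in $L^2$ and a.s. as the dyadic level increases.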
Proof of the Cameron-Martin Theorem: Let $\mu_n$ and $\mu_{h,n}$ denote the finite-dimensional distributions on the set $D_n$. Then the Radon-Nikodym derivative $\frac{d\mu_{h,n}}{d\mu_n}$ is the ratio of the two Lebesgue densities. For $x \in C[0,1]$ and $\Delta_j x = \Delta_j^{(n)} x = x\left(\frac{j}{2^n}\right) - x\left(\frac{j-1}{2^n}\right)$,
$$\frac{d\mu_{h,n}}{d\mu_n}(x) = \prod_{j=1}^{2^n} \exp\left(-\frac{(\Delta_j x - \Delta_j h)^2}{2^{1-n}}\right) \exp\left(\frac{(\Delta_j x)^2}{2^{1-n}}\right) = \exp(-H_n(x))$$
with $H_n(x) = 2^{n-1} \sum_{j=1}^{2^n} \big((\Delta_j h)^2 - 2\Delta_j x\, \Delta_j h\big)$.
By Theorem 14.5 in the Probability Theory lecture notes, $\exp(-H_n(x))$ is a martingale under $\mu$; since it is non-negative, it converges a.s. to a finite RV $X$. We show later:
$$\mu_h(A) = \int_A X\, d\mu + \mu_h(A \cap \{X = \infty\})$$
for $A \in \mathcal{F}$. This implies:
$$\mu(X = 0) = 1 \Rightarrow \mu \perp \mu_h, \qquad \mu(X > 0) = 1 \Rightarrow \mu \ll \mu_h.$$
We have $E_\mu[H_n] = \int H_n(x)\, \mu(dx) = \frac{1}{2} Q_n(h)$ and $\mathrm{Var}_\mu(H_n) = 2^{2n-2} \cdot 4 \cdot \sum_{j=1}^{2^n} (\Delta_j h)^2\, \mathrm{Var}_\mu(\Delta_j x) = Q_n(h)$. By Chebyshev's inequality, we get
$$P_\mu\left(H_n \le \frac{1}{4} Q_n(h)\right) = P_\mu\left(\frac{1}{2} Q_n(h) - H_n \ge \frac{1}{4} Q_n(h)\right) \le \frac{Q_n(h)}{\left(\frac{1}{4} Q_n(h)\right)^2} = \frac{16}{Q_n(h)}.$$
Now, if $h \notin H$, then by Lemma 3.2, $H_n \to \infty$ and $X = 0$ $\mu$-a.s.
For the converse, suppose $h \in H$. By Lemma 3.4 (and the second part of Lemma 3.2),
$$H_n(x) \xrightarrow{n \to \infty} \frac{1}{2} \|h'\|_2^2 - \int_0^1 h'\, dB.$$
Therefore $X > 0$ $\mu$-a.s. and $\mu \ll \mu_h$. Finally note that $\mu_h \ll \mu \iff \mu \ll \mu_{-h}$. -
4 Brownian Motion as a continuous martingale
Definition 4.1 Consider a probability space $(\Omega, \mathcal{F}, P)$.
(i) A filtration is a family of $\sigma$-fields $(\mathcal{F}_t)_{t \ge 0}$ with $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s < t$.
(ii) A stochastic process $(X_t)_{t \ge 0}$ on $(\Omega, \mathcal{F}, P)$ is adapted to $(\mathcal{F}_t)_{t \ge 0}$ if $X_t$ is $\mathcal{F}_t$-measurable for all $t \ge 0$.
Suppose $(X_t)_{t \ge 0}$ is a stochastic process on $(\Omega, \mathcal{F}, P)$. Then we can define a filtration $(\mathcal{F}_t)_{t \ge 0}$ by taking $\mathcal{F}_t := \sigma(X_s, 0 \le s \le t)$, i.e. $\mathcal{F}_t$ is the $\sigma$-field generated by $X_s$, $0 \le s \le t$. Then $(X_t)_{t \ge 0}$ is adapted to $(\mathcal{F}_t)_{t \ge 0}$.
Definition 4.2 A real-valued stochastic process $(X_t)_{t \ge 0}$ is a martingale with respect to a filtration $(\mathcal{F}_t)_{t \ge 0}$ if it is adapted to $(\mathcal{F}_t)_{t \ge 0}$,
$$E[|X_t|] < \infty \quad \forall t \ge 0, \tag{4.1}$$
and, for $0 \le s \le t$,
$$E[X_t|\mathcal{F}_s] = X_s \quad P\text{-a.s.} \tag{4.2}$$
The process $(X_t)_{t \ge 0}$ is a submartingale with respect to $(\mathcal{F}_t)_{t \ge 0}$ if it is adapted to $(\mathcal{F}_t)_{t \ge 0}$, (4.1) holds and, for $0 \le s \le t$,
$$E[X_t|\mathcal{F}_s] \ge X_s \quad P\text{-a.s.} \tag{4.3}$$
It is a supermartingale with respect to $(\mathcal{F}_t)_{t \ge 0}$ if it is adapted to $(\mathcal{F}_t)_{t \ge 0}$, (4.1) holds and, for $0 \le s \le t$,
$$E[X_t|\mathcal{F}_s] \le X_s \quad P\text{-a.s.} \tag{4.4}$$
Remark 4.3 If $(X_t)_{t \ge 0}$ is a martingale with respect to $(\mathcal{F}_t)_{t \ge 0}$, $|X_t|$ is in general not a martingale but a submartingale. More generally, if $(X_t)_{t \ge 0}$ is a martingale with respect to $(\mathcal{F}_t)_{t \ge 0}$ and $f : \mathbb{R} \to \mathbb{R}$ is a convex function such that $E[|f(X_t)|] < \infty$ for all $t \ge 0$, then $(f(X_t))_{t \ge 0}$ is a submartingale with respect to $(\mathcal{F}_t)_{t \ge 0}$.
Proof:
$$E[f(X_t)|\mathcal{F}_s] \stackrel{\text{Jensen}}{\ge} f(E[X_t|\mathcal{F}_s]) = f(X_s) \quad P\text{-a.s.},$$
hence (4.3) holds. -
Remark 4.4 Let $(B_t)_{t \ge 0}$ be a BM and $\mathcal{F}_t = \sigma(B_s, 0 \le s \le t)$. Then $(B_t)_{t \ge 0}$ is a martingale with respect to $(\mathcal{F}_t)_{t \ge 0}$.
Proof:
$$E[B_t|\mathcal{F}_s] = E[B_t - B_s|\mathcal{F}_s] + E[B_s|\mathcal{F}_s] = 0 + B_s \quad P\text{-a.s.}$$
(See exercise 3.3 (ii): $B_t - B_s$ is independent of $\mathcal{F}_s$, hence $E[B_t - B_s|\mathcal{F}_s] = 0$.) -
Definition 4.5 Let $(\Omega, \mathcal{F}, P)$ be a probability space with a filtration $(\mathcal{F}_t)_{t \ge 0}$. A RV $T$ with values in $[0, \infty]$ is a stopping time with respect to $(\mathcal{F}_t)_{t \ge 0}$ if
$$\{T \le t\} \in \mathcal{F}_t \quad \forall t \ge 0.$$
Example 4.6 Let $(B_t)_{t \ge 0}$ be a BM and $\mathcal{F}_t = \sigma(B_s, 0 \le s \le t)$, $t \ge 0$.
(i) Let $y \ne 0$ and $T = \inf\{t \ge 0 : B_t = y\}$. Then $T$ is a stopping time with respect to $(\mathcal{F}_t)_{t \ge 0}$.
Proof:
$$\{T \le t\} = \bigcap_{n=1}^{\infty} \bigcup_{s \in \mathbb{Q} \cap (0,t)} \left\{B_s \in U\left(y, \tfrac{1}{n}\right)\right\} \in \mathcal{F}_t, \quad \text{where } U\left(y, \tfrac{1}{n}\right) := \left\{z \in \mathbb{R} : |z - y| < \tfrac{1}{n}\right\}.$$
(ii) Let $I = (a,b)$, $0 < a < b$, and $T = \inf\{t \ge 0 : B_t \in I\}$. Then $T$ is not a stopping time because $\{T \le t\} \notin \mathcal{F}_t$.
Proof: See [4].
Definition 4.7 Assume $T$ is a stopping time with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$. Define
$$\mathcal{F}_T := \{A \in \mathcal{F} : A \cap \{T \le t\} \in \mathcal{F}_t\ \forall t \ge 0\}.$$
$\mathcal{F}_T$ is called the $\sigma$-field of events observable until time $T$.
A martingale $(X_t)_{t \ge 0}$ is a continuous martingale if $P[t \mapsto X_t(\omega) \text{ is continuous}] = 1$.
Theorem 4.8 (Optional stopping)
Suppose $(X_t)_{t \ge 0}$ is a continuous martingale and $0 \le S \le T$ are stopping times. If the process $(X_{t \wedge T})_{t \ge 0}$ is dominated by an integrable RV $X$, i.e. $|X_{t \wedge T}| \le X$ for all $t \ge 0$ a.s. and $E[|X|] < \infty$, then $E[X_T|\mathcal{F}_S] = X_S$ $P$-a.s.
Proof: This can be derived from the result for martingales in discrete time (see Theorem
14.9 in the Probability Theory lecture notes for the case S = 0). See [4] for details.
Theorem 4.9 (Wald’s Lemma for BM)
Let (Bt)t≥0 be a BM and T a stopping time such that either
(a) E[T ] < ∞ or
(b) (Bt∧T )t≥0 is dominated by an integrable RV.
Then, we have E[BT ] = 0.
Remark 4.10 One does need a condition on T , as the following example shows:
Let T = inft : Bt = 1. (Then T < ∞ P-a.s., see exercise 5.2). Clearly, E[BT ] = 1. We
conclude from Theorem 4.9 that E[T ] = ∞ and that (Bt∧T )t≥0 is not dominated by an
integrable RV.
Proof of Theorem 4.9: We show that (a) implies (b). Suppose $E[T] < \infty$, and define $M_k = \max_{0 \le t \le 1} |B_{t+k} - B_k|$ and $M = \sum_{k=1}^{\lfloor T \rfloor} M_k$. Then
$$E[M] = E\left[\sum_{k=1}^{\lfloor T \rfloor} M_k\right] = \sum_{k=1}^{\infty} E[I_{\{T > k-1\}} M_k] = \sum_{k=1}^{\infty} P[T > k-1]\, E[M_k] \le E[M_0]\, E[T+1].$$
But $E[M_0] = E[\max_{0 \le t \le 1} |B_t|] < \infty$ (proof: exercise).
If (b) is satisfied, we can apply the optional stopping theorem (Theorem 4.8) with $S = 0$, giving $E[B_T|\mathcal{F}_0] = B_0$ $P$-a.s., which yields $E[B_T] = 0$. -
5 Stochastic integrals with respect to Brownian Motion
Let (Bt)t≥0 be a BM on some probability space (Ω,F , P ) and Ft the completion of
σ(Bs, s ≤ t) (see Theorem 1.32 in the Probability Theory lecture notes). Then, (Bt)t≥0
is adapted to (Ft)t≥0 .
Definition 5.1 A process $\{X_t(\omega) : t \ge 0, \omega \in \Omega\}$ is progressively measurable if for each $t \ge 0$ the mapping $X : [0,t] \times \Omega \to \mathbb{R}$, $(s, \omega) \mapsto X_s(\omega)$, is measurable with respect to the $\sigma$-field $\mathcal{B}_{[0,t]} \otimes \mathcal{F}_t$.
Lemma 5.2 Any process (Xt)t≥0 which is adapted and either right-continuous or left-
continuous is also progressively measurable.
Proof: Assume $(X_t)_{t \ge 0}$ is right-continuous. Fix $t > 0$. For $n \in \mathbb{N}$, $0 \le s \le t$, define
$$X^{(n)}_0(\omega) = X_0(\omega)$$
and
$$X^{(n)}_s(\omega) = X_{\frac{(k+1)t}{2^n}}(\omega) \quad \text{for } \frac{kt}{2^n} < s \le \frac{(k+1)t}{2^n},\ k = 0, 1, 2, \dots, 2^n - 1.$$
The mapping $(s, \omega) \mapsto X^{(n)}_s(\omega)$ is $\mathcal{B}_{[0,t]} \otimes \mathcal{F}_t$-measurable.
By right-continuity, we have $\lim_{n \to \infty} X^{(n)}_s(\omega) = X_s(\omega)$ for all $s \in [0,t]$ and $\omega \in \Omega$, hence the limit mapping $(s, \omega) \mapsto X_s(\omega)$ is also $\mathcal{B}_{[0,t]} \otimes \mathcal{F}_t$-measurable.
The left-continuous case is analogous. -
We construct the integral by starting with "simple" progressively measurable processes $H_t(\omega)$ and then proceeding to more complicated ones. Consider first step processes $\{H_t(\omega) : t \ge 0, \omega \in \Omega\}$ of the form
$$H_t(\omega) = \sum_{j=1}^{k} A_j(\omega)\, I_{(t_j, t_{j+1}]}(t) \quad \text{for } 0 \le t_1 < \dots < t_{k+1},$$
where $A_j$ is $\mathcal{F}_{t_j}$-measurable, $1 \le j \le k$.
We define the integral as
$$\int_0^\infty H_s\, dB_s := \sum_{j=1}^{k} A_j (B_{t_{j+1}} - B_{t_j}).$$
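A minimal sketch of this definition (NumPy assumed; all names are illustrative), integrating a deterministic step process against a sampled BM path:

```python
import numpy as np

def stochastic_integral_step(A, t, B_at):
    """Integral of the step process sum_j A_j 1_{(t_j, t_{j+1}]} against BM:
    sum_j A_j (B_{t_{j+1}} - B_{t_j})."""
    return sum(a * (B_at(t_next) - B_at(t_cur))
               for a, t_cur, t_next in zip(A, t[:-1], t[1:]))

# Sample B once on a grid; look up values by nearest grid index.
rng = np.random.default_rng(6)
grid = np.linspace(0.0, 1.0, 1001)
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(1000) * np.sqrt(np.diff(grid)))])
B_at = lambda s: B[int(round(s * 1000))]

t = [0.0, 0.25, 0.5, 1.0]   # jump times t_1 < ... < t_{k+1}
A = [2.0, -1.0, 0.5]        # A_j must be F_{t_j}-measurable; constants qualify
I = stochastic_integral_step(A, t, B_at)
manual = (2.0 * (B_at(0.25) - B_at(0.0))
          - 1.0 * (B_at(0.5) - B_at(0.25))
          + 0.5 * (B_at(1.0) - B_at(0.5)))
print(abs(I - manual) < 1e-12)
```

Random (adapted) coefficients $A_j$ work the same way; only the measurability requirement restricts which randomness may enter each coefficient.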
Now let $H$ be a progressively measurable process satisfying $E\left[\int_0^\infty H_s^2\, ds\right] < \infty$. Suppose $H$ can be approximated by a sequence of progressively measurable step processes $H^{(n)}$, $n \ge 1$; then we define
$$\int_0^\infty H_s\, dB_s := \lim_{n \to \infty} \int_0^\infty H^{(n)}_s\, dB_s. \tag{5.1}$$
More precisely, let $\|H\|_2^2 := E\left[\int_0^\infty H_s^2\, ds\right]$. We will show that
(1) Every progressively measurable $H$ satisfying $E\left[\int_0^\infty H_s^2\, ds\right] < \infty$ can be approximated in the $\|\cdot\|_2$-norm by progressively measurable step processes.
(2) For each approximating sequence, the limit in (5.1) exists in the $L^2$-sense.
(3) This limit does not depend on the approximating sequence of step processes.
We start with (1):
Lemma 5.3 For every progressively measurable process $\{H_s(\omega) : s \ge 0, \omega \in \Omega\}$ satisfying $E\left[\int_0^\infty H_s^2\, ds\right] < \infty$ there exists a sequence $(H^{(n)})_{n \in \mathbb{N}}$ of progressively measurable step processes such that $\lim_{n \to \infty} \|H^{(n)} - H\|_2 = 0$.
Proof: We approximate the progressively measurable process successively by
• a bounded progressively measurable process,
• a bounded, almost surely continuous progressively measurable process,
• a progressively measurable step process.
Let $H = \{H_s(\omega), s \ge 0, \omega \in \Omega\}$ be a progressively measurable process with $\|H\|_2 < \infty$. First define
$$H^{(n)}_s(\omega) = \begin{cases} H_s(\omega) & s \le n \\ 0 & \text{otherwise.} \end{cases}$$
Clearly, $\lim_{n \to \infty} \|H^{(n)} - H\|_2 = 0$.
Second, approximate any progressively measurable process $H$ on a finite interval by truncating its values, i.e. define $H^{(n)}$ by
$$H^{(n)}_s = (H_s(\omega) \wedge n) \vee (-n).$$
Clearly, $H^{(n)}$ is progressively measurable and $\|H^{(n)} - H\|_2 \to 0$.
Now we approximate any uniformly bounded progressively measurable $H$ by bounded, almost surely continuous progressively measurable processes. Let $h = \frac{1}{n}$ and
$$H^{(n)}_s(\omega) = \frac{1}{h} \int_{s-h}^{s} H_t(\omega)\, dt$$
(where we set $H_s(\omega) = H_0(\omega)$ for $s < 0$). $H^{(n)}$ is again progressively measurable (since we only take averages over the past) and almost surely continuous. Further, for all $\omega \in \Omega$ and almost every (with respect to Lebesgue measure) $s \in [0,t]$,
$$\lim_{h \to 0} \frac{1}{h} \int_{s-h}^{s} H_t(\omega)\, dt = H_s(\omega).$$
Since $H$ is uniformly bounded, we obtain that $\lim_{n \to \infty} \|H^{(n)} - H\|_2 = 0$.
Finally, a bounded, almost surely continuous, progressively measurable process $H$ can be approximated by a sequence of progressively measurable step processes $H^{(n)}$ by taking
$$H^{(n)}_s = H\left(\frac{j}{n}, \omega\right) \quad \text{for } \frac{j}{n} < s \le \frac{j+1}{n}.$$
The processes $H^{(n)}$ are again progressively measurable and one easily sees that $\lim_{n \to \infty} \|H^{(n)} - H\|_2 = 0$.
This completes the proof of Lemma 5.3. -
Lemma 5.4 Let $H$ be a progressively measurable step process with $E\left[\int_0^\infty H_s^2\,ds\right] < \infty$. Then
$$ E\left[\left(\int_0^\infty H_s\,dB_s\right)^2\right] = E\left[\int_0^\infty H_s^2\,ds\right]. $$
Proof: Let $H = \sum_{i=1}^{k} A_i I_{(a_i, a_{i+1}]}$ be a progressively measurable step process. Then
$$ E\left[\left(\int_0^\infty H_s\,dB_s\right)^2\right] = E\left[\sum_{i,j=1}^{k} A_i A_j (B_{a_{i+1}} - B_{a_i})(B_{a_{j+1}} - B_{a_j})\right] $$
$$ = \sum_{i=1}^{k} E\big[A_i^2 (B_{a_{i+1}} - B_{a_i})^2\big] + 2 \sum_{i=1}^{k} \sum_{j=i+1}^{k} E\big[A_i A_j (B_{a_{i+1}} - B_{a_i}) \cdot E[(B_{a_{j+1}} - B_{a_j}) \mid \mathcal{F}_{a_j}]\big] $$
$$ = \sum_{i=1}^{k} E[A_i^2]\, E\big[(B_{a_{i+1}} - B_{a_i})^2\big] = E\left[\int_0^\infty H_s^2\,ds\right]. \qquad \square $$
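The isometry of Lemma 5.4 can be checked numerically. The following Python sketch (not part of the lecture; the step process and sample size are arbitrary choices) estimates $E[(\int H\,dB)^2]$ by Monte Carlo for a deterministic step process and compares it with $\int H_s^2\,ds$.

```python
import random

random.seed(0)

# Step process H_s = 2 on (0,1], -1 on (1,2] (deterministic, so trivially
# progressively measurable); then ∫H dB = 2*(B_1 - B_0) - (B_2 - B_1) and
# Lemma 5.4 predicts E[(∫H dB)^2] = ∫H_s^2 ds = 4 + 1 = 5.
N = 200_000
second_moment = 0.0
for _ in range(N):
    d1 = random.gauss(0.0, 1.0)      # B_1 - B_0 ~ N(0,1)
    d2 = random.gauss(0.0, 1.0)      # B_2 - B_1 ~ N(0,1)
    I = 2.0 * d1 - d2
    second_moment += I * I
second_moment /= N
print(round(second_moment, 2))  # close to 5
```

The independence of the increments from the (here constant) coefficients is exactly what makes the cross terms in the proof vanish.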
Corollary 5.5 Suppose $(H^{(n)})_{n\in\mathbb{N}}$ is a sequence of progressively measurable step processes such that
$$ E\left[\int_0^\infty (H^{(n)}_s - H^{(m)}_s)^2\,ds\right] \xrightarrow[n,m\to\infty]{} 0. $$
Then
$$ E\left[\left(\int_0^\infty (H^{(n)}_s - H^{(m)}_s)\,dB_s\right)^2\right] \xrightarrow[n,m\to\infty]{} 0. $$
Proof: Because the difference of two progressively measurable step processes is again a progressively measurable step process, Lemma 5.4 can be applied to $H^{(n)} - H^{(m)}$ and yields the claim. □
We showed (1). The following theorem addresses (2) and (3).
Theorem 5.6 Suppose $(H^{(n)})_{n\in\mathbb{N}}$ is a sequence of progressively measurable step processes and $H$ a progressively measurable process such that
$$ \lim_{n\to\infty} E\left[\int_0^\infty (H^{(n)}_s - H_s)^2\,ds\right] = 0. $$
Then
$$ \lim_{n\to\infty} \int_0^\infty H^{(n)}_s\,dB_s =: \int_0^\infty H_s\,dB_s $$
exists as a limit in the $L^2$-sense and does not depend on the choice of $(H^{(n)})_{n\in\mathbb{N}}$. Moreover, we have
$$ E\left[\left(\int_0^\infty (H^{(n)}_s - H_s)\,dB_s\right)^2\right] = E\left[\int_0^\infty (H^{(n)}_s - H_s)^2\,ds\right] \xrightarrow[n\to\infty]{} 0. \qquad (5.2) $$
Proof: By the triangle inequality, $(H^{(n)})_{n\in\mathbb{N}}$ satisfies the assumptions of Corollary 5.5, and hence $\left(\int_0^\infty H^{(n)}_s\,dB_s\right)_{n\in\mathbb{N}}$ is a Cauchy sequence in $L^2$. Since $L^2$ is complete, the limit exists, and it does not depend on the choice of the approximating sequence. Finally, (5.2) follows from Lemma 5.4, applied to $H^{(n)} - H^{(m)}$, by letting $m \to \infty$. □
This completes the construction of the stochastic integral $\int_0^\infty H_s\,dB_s$ for progressively measurable processes $H$ with $E\left[\int_0^\infty H_s^2\,ds\right] < \infty$.
Remark 5.7 If the sequence of step processes in Theorem 5.6 is chosen such that
$$ \sum_{n=1}^{\infty} E\left[\int_0^\infty (H^{(n)}_s - H_s)^2\,ds\right] < \infty, $$
then by (5.2) we get
$$ \sum_{n=1}^{\infty} E\left[\left(\int_0^\infty (H^{(n)}_s - H_s)\,dB_s\right)^2\right] < \infty $$
and therefore, almost surely,
$$ \sum_{n=1}^{\infty} \left(\int_0^\infty H^{(n)}_s\,dB_s - \int_0^\infty H_s\,dB_s\right)^2 < \infty. $$
This implies that, almost surely,
$$ \lim_{n\to\infty} \int_0^\infty H^{(n)}_s\,dB_s = \int_0^\infty H_s\,dB_s. $$
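The almost sure convergence of the step-process approximations can be illustrated on a single simulated path. A minimal Python sketch (not from the lecture; integrand $H_s = B_s$ and grid sizes are arbitrary choices): left-point sums over coarser subgrids are the integrals of step-process approximations of $H$, and by Ito's formula (Example 6.2 below) their limit is $\int_0^1 B_s\,dB_s = (B_1^2 - 1)/2$.

```python
import random

random.seed(1)

# One Brownian path on a fine grid of [0,1]; left-point sums over coarser
# subgrids are the stochastic integrals of step-process approximations of
# H_s = B_s.  Their limit is (B_1^2 - 1)/2.
n_fine = 2**14
dt = 1.0 / n_fine
B = [0.0]
for _ in range(n_fine):
    B.append(B[-1] + random.gauss(0.0, dt**0.5))

def left_sum(step):
    return sum(B[k] * (B[k + step] - B[k]) for k in range(0, n_fine, step))

limit = (B[-1] ** 2 - 1.0) / 2.0
errors = [abs(left_sum(s) - limit) for s in (256, 16, 1)]
print(min(errors))
```

On a typical path the error along the finest subgrid is already very small.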
We now want to describe the stochastic integral as a process in time. We will see that it
will be a continuous martingale.
Definition 5.8 Suppose $H = H_s(\omega)$, $s \ge 0$, $\omega \in \Omega$, is progressively measurable with $E\left[\int_0^\infty H_s^2\,ds\right] < \infty$. Define the progressively measurable process $H^t_s(\omega)$, $s \ge 0$, $\omega \in \Omega$, by
$$ H^t_s(\omega) := H_s(\omega)\, I_{\{s \le t\}}. $$
Then the stochastic integral of $H$ up to time $t$ is defined as
$$ \int_0^t H_s\,dB_s := \int_0^\infty H^t_s\,dB_s. $$
Remark 5.9 We have seen already in Lemma 3.4 that for $g \in L^2[0,1]$ we can define $\int_0^1 g(s)\,dB_s$. Provided that both integrals exist, the Paley-Wiener integral from Lemma 3.4 agrees with the stochastic integral just defined.
Proof: See exercises.
Definition 5.10 A stochastic process (Xt)t≥0 is a modification of a stochastic process
(Yt)t≥0 if, for every t ≥ 0, we have P [Xt = Yt] = 1.
Theorem 5.11 Assume that $(H_s(\omega))_{s\ge 0}$ is progressively measurable and $E\left[\int_0^t H_s(\omega)^2\,ds\right] < \infty$ for all $t \ge 0$. Then there exists a modification $(M_t)_{t\ge 0}$ of $\left(\int_0^t H_s\,dB_s\right)_{t\ge 0}$ such that
$$ P[t \mapsto M_t(\omega) \text{ is continuous}] = 1. $$
Further, $(M_t)_{t\ge 0}$ is a martingale and hence
$$ E\left[\int_0^t H_s\,dB_s\right] = 0 \quad \forall t \ge 0. $$
Proof: Fix $t_0 \in \mathbb{N}$ and let $(H^{(n)})$ be a sequence of progressively measurable step processes such that
$$ \|H^{(n)} - H^{t_0}\|_2 \xrightarrow[n\to\infty]{} 0 \;\Rightarrow\; E\left[\left(\int_0^\infty (H^{(n)}_s - H^{t_0}_s)\,dB_s\right)^2\right] \xrightarrow[n\to\infty]{} 0. $$
For $s \le t$ the random variable $\int_0^s H^{(n)}_u\,dB_u$ is $\mathcal{F}_s$-measurable and $E[\int_s^t H^{(n)}_u\,dB_u \mid \mathcal{F}_s] = 0$ (proof: exercise!), which implies that the process $\left(\int_0^t H^{(n)}_u\,dB_u\right)_{0\le t\le t_0}$ is a martingale for every $n$. By Doob's maximal inequality (see below) with $p = 2$,
$$ E\left[\sup_{0\le t\le t_0} \left(\int_0^t H^{(n)}_s\,dB_s - \int_0^t H^{(m)}_s\,dB_s\right)^2\right] \le 4\, E\left[\left(\int_0^{t_0} (H^{(n)}_s - H^{(m)}_s)\,dB_s\right)^2\right] \xrightarrow[n,m\to\infty]{} 0. $$
This implies that $M^{(n)}_t := \int_0^t H^{(n)}_s\,dB_s$, $0 \le t \le t_0$, $n = 0, 1, 2, \ldots$, defines a Cauchy sequence in the space of continuous functions on $[0, t_0]$ (equipped with the supremum norm). We denote the limit of this Cauchy sequence by $(M_t)_{0\le t\le t_0}$. Hence the process $(M_t)_{0\le t\le t_0}$ is almost surely a uniform limit of continuous processes and therefore almost surely continuous. Due to Theorem 5.6, for each $t$,
$$ P\left[M_t = \int_0^t H_s\,dB_s\right] = 1. $$
For fixed $t \in [0, t_0]$, the random variable $\int_0^t H_s\,dB_s$ is the limit (in $L^2$) of $\int_0^t H^{(n)}_s\,dB_s$, hence it is $\mathcal{F}_t$-measurable, and $\int_t^{t_0} H_s\,dB_s$ has conditional expectation $E[\int_t^{t_0} H_s\,dB_s \mid \mathcal{F}_t] = 0$. Therefore $\int_0^t H_s\,dB_s$ is a conditional expectation of $M_{t_0}$ given $\mathcal{F}_t$, i.e.
$$ M_t = E\left[\int_0^{t_0} H_s\,dB_s \,\Big|\, \mathcal{F}_t\right]. $$
Therefore $(M_t)_{0\le t\le t_0}$ is a martingale, as a process of successive predictions (see (14.4) in the Probability Theory lecture notes). □
Doob's maximal inequality
Suppose $(X_t)_{t\ge 0}$ is a continuous martingale and $p > 1$. Then, for any $t \ge 0$,
$$ E\left[\sup_{0\le s\le t} |X_s|^p\right] \le \left(\frac{p}{p-1}\right)^p E[|X_t|^p]. $$
Proof: See literature.
6 Ito's formula and examples

Let $f \in C^1(\mathbb{R})$ ($f$ continuously differentiable) and $x : [0,\infty) \to \mathbb{R}$ continuous and BV on $[0, t]$. Then
$$ f(x(t)) - f(x(0)) = \int_0^t f'(x(s))\,dx(s). $$
Example: $x(s) = s$ for all $s \ge 0$. Then $f(t) - f(0) = \int_0^t f'(s)\,ds$. Ito's formula gives an analogue for the case when $x$ is replaced by a BM $B_t$. The crucial difference is that the second derivative of $f$ is needed.
Theorem 6.1 (Ito's formula I)
Let $f : \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable such that $E[\int_0^t (f'(B_s))^2\,ds] < \infty$ for some $t > 0$, where $(B_t)_{t\ge 0}$ is a BM. Then, almost surely, for all $s \in [0, t]$
$$ f(B_s) - f(B_0) = \int_0^s f'(B_u)\,dB_u + \frac{1}{2} \int_0^s f''(B_u)\,du. $$
Example 6.2 Let $f(x) = x^2$. We have $E[\int_0^s B_u^2\,du] = \int_0^s u\,du < \infty$ for all $s > 0$. Hence
$$ B_s^2 = 2 \int_0^s B_u\,dB_u + s \;\Rightarrow\; B_s^2 - s = 2 \int_0^s B_u\,dB_u. $$
We conclude from Theorem 5.11 that $(B_s^2 - s)_{s\ge 0}$ is a martingale.
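The martingale property at a fixed time can be checked by a quick Monte Carlo simulation (an illustration, not from the lecture; $t$ and the sample size are arbitrary choices): $E[B_t^2 - t] = 0$.

```python
import random

random.seed(2)

# Monte Carlo sanity check: E[B_t^2 - t] = 0, here at t = 2,
# since B_t ~ N(0, t).
t, N = 2.0, 100_000
mean = sum(random.gauss(0.0, t**0.5) ** 2 - t for _ in range(N)) / N
print(round(mean, 3))
```

The printed sample mean is close to 0, consistent with $(B_s^2 - s)_{s\ge 0}$ being centred at every fixed time.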
Example 6.3 Let $f(x) = x^3$. We have $E[\int_0^s B_u^4\,du] = \int_0^s E[B_u^4]\,du < \infty$ for all $s > 0$. Hence
$$ B_s^3 = 3 \int_0^s B_u^2\,dB_u + 3 \int_0^s B_u\,du. $$
Let
$$ M_s = B_s^3 - 3 \int_0^s B_u\,du, \quad s \ge 0. $$
We conclude from Theorem 5.11 that $(M_s)_{s\ge 0}$ is a martingale.

To prove Theorem 6.1, we will need the following.
Theorem 6.4 Let $f : \mathbb{R} \to \mathbb{R}$ be continuous, $t > 0$, and let $0 = t^{(n)}_1 < \ldots < t^{(n)}_n = t$ be partitions such that their mesh
$$ \max_{1\le i\le n-1} |t^{(n)}_{i+1} - t^{(n)}_i| \to 0. $$
Then
$$ \sum_{j=1}^{n-1} f(B_{t^{(n)}_j})\big(B_{t^{(n)}_{j+1}} - B_{t^{(n)}_j}\big)^2 \to \int_0^t f(B_s)\,ds. $$
Proof: For $f \equiv 1$, see Theorem 2.12. For general $f$, see Theorem 7.12 in Mörters/Peres.
Proof of Theorem 6.1: We write $w(\delta, M)$ for the modulus of continuity of $f''$ on $[-M, M]$:
$$ w(\delta, M) = \sup_{s,t \in [-M,M],\, |s-t| < \delta} |f''(s) - f''(t)|. $$
Using Taylor's formula, for any $x, y \in [-M, M]$ with $|x - y| < \delta$,
$$ \left|f(y) - f(x) - f'(x)(y - x) - \frac{1}{2} f''(x)(y - x)^2\right| \le w(\delta, M)(y - x)^2. $$
Take a sequence of partitions $0 = t^{(n)}_1 < \ldots < t^{(n)}_n = t$; we write $0 = t_1 < \ldots < t_n = t$ for simplicity. With
$$ \delta_B := \max_{1\le i\le n-1} |B_{t_{i+1}} - B_{t_i}| \quad \text{and} \quad M_B := \max_{0\le s\le t} |B_s|, $$
we get
$$ \left|\sum_{i=1}^{n-1} \big(f(B_{t_{i+1}}) - f(B_{t_i})\big) - \sum_{i=1}^{n-1} f'(B_{t_i})(B_{t_{i+1}} - B_{t_i}) - \sum_{i=1}^{n-1} \frac{1}{2} f''(B_{t_i})(B_{t_{i+1}} - B_{t_i})^2\right| $$
$$ \le w(\delta_B, M_B) \sum_{i=1}^{n-1} (B_{t_{i+1}} - B_{t_i})^2. $$
Now $\sum_{i=1}^{n-1} \big(f(B_{t_{i+1}}) - f(B_{t_i})\big) = f(B_t) - f(B_0)$, and there is a sequence of partitions with mesh going to $0$ such that
$$ \sum_{i=1}^{n-1} f'(B_{t_i})(B_{t_{i+1}} - B_{t_i}) \to \int_0^t f'(B_s)\,dB_s \quad P\text{-a.s.}, $$
$$ \sum_{i=1}^{n-1} f''(B_{t_i})(B_{t_{i+1}} - B_{t_i})^2 \to \int_0^t f''(B_s)\,ds \quad P\text{-a.s.}, $$
$$ \sum_{i=1}^{n-1} (B_{t_{i+1}} - B_{t_i})^2 \to t \quad P\text{-a.s.} $$
By continuity of the Brownian path, $w(\delta_B, M_B)$ converges almost surely to $0$. This proves Ito's formula for fixed $t$, and indeed almost surely for all $s \in \mathbb{Q} \cap [0, t]$. Since all the terms in Ito's formula are almost surely continuous, we get the result simultaneously for all $s \in [0, t]$. □
We next state Ito’s formula for functions f which can depend also on time.
Theorem 6.5 (Ito's formula II)
Let $f : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, $(x, t) \mapsto f(x, t)$, be twice continuously differentiable in the $x$-coordinate and once continuously differentiable in the $t$-coordinate. Assume that $E[\int_0^t (\partial_x f(B_s, s))^2\,ds] < \infty$ for some $t > 0$. Then, almost surely, for all $s \in [0, t]$:
$$ f(B_s, s) - f(B_0, 0) = \int_0^s \partial_x f(B_u, u)\,dB_u + \int_0^s \partial_t f(B_u, u)\,du + \frac{1}{2} \int_0^s \partial_{xx} f(B_u, u)\,du. $$
Proof: See Mörters/Peres.
Example 6.6 Fix $\alpha > 0$ and let
$$ M_t = f(B_t, t) = e^{\alpha B_t - \frac{1}{2}\alpha^2 t} \quad (M_0 = 1). $$
Then
$$ f(B_s, s) - 1 = \int_0^s \alpha M_u\,dB_u + \int_0^s \left(-\frac{1}{2}\alpha^2\right) M_u\,du + \frac{1}{2} \int_0^s \alpha^2 M_u\,du $$
$$ \Rightarrow\; M_s - M_0 = \int_0^s \alpha M_u\,dB_u. $$
We conclude from Theorem 5.11 that $(M_t)_{t\ge 0}$ is a martingale (details: exercise). $M$ solves the stochastic differential equation
$$ \text{"}dM_s = \alpha M_s\,dB_s, \quad M_0 = 1\text{"} $$
which can be written in integral form
$$ M_s = 1 + \alpha \int_0^s M_u\,dB_u, \quad s \ge 0. $$
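Since $(M_t)_{t\ge 0}$ is a martingale with $M_0 = 1$, we must have $E[M_t] = 1$ for every $t$. A short Monte Carlo sketch (an illustration, not from the lecture; $\alpha$, $t$ and the sample size are arbitrary choices) confirms this:

```python
import random, math

random.seed(3)

# Monte Carlo check that E[M_t] = 1 for the exponential martingale
# M_t = exp(alpha*B_t - alpha^2 * t / 2), using B_t ~ N(0, t).
alpha, t, N = 0.5, 1.0, 100_000
mean = sum(math.exp(alpha * random.gauss(0.0, t**0.5) - 0.5 * alpha**2 * t)
           for _ in range(N)) / N
print(round(mean, 2))  # close to 1
```

Without the compensating term $-\frac{1}{2}\alpha^2 t$, the mean would instead grow like $e^{\alpha^2 t/2}$.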
Definition 6.7 $(B_t)_{t\ge 0} = (B^{(1)}_t, \ldots, B^{(d)}_t)_{t\ge 0}$ is a $d$-dimensional BM if $(B^{(1)}_t)_{t\ge 0}, \ldots, (B^{(d)}_t)_{t\ge 0}$ are iid one-dimensional BMs. For $H_s = (H^{(1)}_s, \ldots, H^{(d)}_s)$ we write
$$ \int_0^t H_s\,dB_s = \sum_{i=1}^{d} \int_0^t H^{(i)}_s\,dB^{(i)}_s. $$
Theorem 6.8 (Multidimensional Ito formula)
Let $(B_t)_{t\ge 0}$ be a $d$-dimensional BM and $f : \mathbb{R}^{d+1} \to \mathbb{R}$ be such that the partial derivatives $\partial_i f$ and $\partial_{jk} f$ exist for all $1 \le i \le d+1$, $1 \le j, k \le d$, and are continuous. If for some $t > 0$
$$ E\left[\int_0^t |\nabla_x f(B_s, s)|^2\,ds\right] < \infty, $$
where $\nabla_x f = (\partial_1 f, \ldots, \partial_d f)$, then, almost surely, for all $0 \le s \le t$
$$ f(B_s, s) - f(B_0, 0) = \int_0^s \nabla_x f(B_u, u)\,dB_u + \int_0^s \partial_{d+1} f(B_u, u)\,du + \frac{1}{2} \int_0^s \Delta_x f(B_u, u)\,du, $$
where $\Delta_x f = \sum_{j=1}^{d} \partial_{jj} f$.
7 Pathwise stochastic integration with respect to continuous semimartingales

We saw that for a continuous process $H$ with $E[\int_0^t H_s^2\,ds] < \infty$, $\int_0^t H_s\,dB_s$ can be defined as an almost sure limit, see Remark 5.7.
Can we replace (Bt)t≥0 with another continuous martingale?
Definition 7.1 Let $E_n = \{0 = t^{(n)}_1 < \ldots < t^{(n)}_n\}$ be a sequence of partitions with mesh
$$ s(E_n) = \sup_i |t^{(n)}_{i+1} - t^{(n)}_i| \to 0. $$
Then the function $X_t$ has continuous quadratic variation along the sequence $E_n$ if
$$ \langle X \rangle_t = \lim_{n\to\infty} \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} (X_{t_{i+1}} - X_{t_i})^2 \qquad (7.1) $$
exists $P$-a.s.
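For Brownian motion, $\langle B \rangle_t = t$ (Theorem 2.12). This can be seen on a simulated path; a minimal Python sketch (not from the lecture; the dyadic grid size is an arbitrary choice):

```python
import random

random.seed(4)

# Sum of squared increments of one Brownian path on [0,1] along a dyadic
# grid, as in (7.1): it should be close to <B>_1 = 1.
n = 2**14
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, (1.0 / n) ** 0.5))
qv = sum((B[k + 1] - B[k]) ** 2 for k in range(n))
print(round(qv, 2))  # close to 1
```

By contrast, a continuously differentiable path would give a sum of squared increments of order $1/n$, i.e. quadratic variation $0$.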
Remark 7.2 $\langle X \rangle_t$ is increasing and continuous, hence it defines a measure $\nu$ on $(\mathbb{R}, \mathcal{B})$ given by $\nu((a, b]) = \langle X \rangle_b - \langle X \rangle_a$. In particular, if $f : \mathbb{R} \to \mathbb{R}$ is continuous, $\int_0^t f(s)\,d\langle X \rangle_s$ is well-defined.
In analogy to Theorem 6.4, we have
Theorem 7.3 Assume that $X_t$ has continuous quadratic variation $\langle X \rangle_t$ and $f : \mathbb{R} \to \mathbb{R}$ is continuous. Then, for $t > 0$,
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f(X_{t_i})(X_{t_{i+1}} - X_{t_i})^2 \to \int_0^t f(X_s)\,d\langle X \rangle_s. $$
Proof: Clear for $f \equiv 1$ due to (7.1). For the rest, see the literature.
Note that the assumption “Xt has continuous quadratic variation” can only be satisfied
for continuous processes.
Theorem 7.4 (Ito's formula for (deterministic) functions with continuous quadratic variation)
Assume that the function $X_t$ has continuous quadratic variation $\langle X \rangle_t$ and let $f : \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable. Then
$$ f(X_t) - f(X_0) = \int_0^t f'(X_u)\,dX_u + \frac{1}{2} \int_0^t f''(X_u)\,d\langle X \rangle_u \qquad (7.2) $$
where
$$ \int_0^t f'(X_u)\,dX_u = \lim_{n\to\infty} \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})(X_{t_{i+1}} - X_{t_i}). $$
Sketch of proof: As in the proof of Theorem 6.1, we have
$$ \left|\sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} \big(f(X_{t_{i+1}}) - f(X_{t_i})\big) - \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})(X_{t_{i+1}} - X_{t_i}) - \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} \frac{1}{2} f''(X_{t_i})(X_{t_{i+1}} - X_{t_i})^2\right| $$
$$ \le w(\delta_X, M_X) \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} (X_{t_{i+1}} - X_{t_i})^2 $$
where
$$ \delta_X = \max_{\substack{t_i \in E_n \\ t_{i+1} \le t}} |X_{t_{i+1}} - X_{t_i}|, \qquad M_X = \max_{0\le s\le t} |X_s|, \qquad w(\delta, M) = \sup_{\substack{s,t \in [-M,M] \\ |s-t| < \delta}} |f''(s) - f''(t)|. $$
Now,
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} \big(f(X_{t_{i+1}}) - f(X_{t_i})\big) \to f(X_t) - f(X_0) $$
and
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f''(X_{t_i})(X_{t_{i+1}} - X_{t_i})^2 \to \int_0^t f''(X_s)\,d\langle X \rangle_s $$
and
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} (X_{t_{i+1}} - X_{t_i})^2 \to \langle X \rangle_t. $$
Moreover, $w(\delta_X, M_X) \to 0$ as $n \to \infty$, since $\delta_X \to 0$. Hence
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})(X_{t_{i+1}} - X_{t_i}) $$
has to converge as well, and (7.2) holds.
Remark 7.5 (1) Theorem 7.4 gives a "pathwise" version of Ito's formula, "without probability".
(2) If $X$ is BV on $[0, t]$, then $\langle X \rangle_t = 0$ and (7.2) becomes
$$ f(X_t) - f(X_0) = \int_0^t f'(X_u)\,dX_u. $$
Short notation:
$$ df(X) = f'(X)\,dX \quad \text{"classical differential"}. $$
If $X$ has continuous quadratic variation $\langle X \rangle$ and $\langle X \rangle_t \ne 0$, then
$$ df(X) = f'(X)\,dX + \frac{1}{2} f''(X)\,d\langle X \rangle \quad \text{"Ito differential"}. $$
We have defined $\int_0^t f'(X_u)\,dX_u$ for all continuous $X_t$ with continuous quadratic variation $\langle X \rangle_t$. Which stochastic processes $X_t$ have a.s. continuous quadratic variation $\langle X \rangle_t$?
Lemma 7.6 (i) Assume $X_t$ has continuous quadratic variation $\langle X \rangle_t$. Then, if $f$ is continuously differentiable, $f(X_t)$ has continuous quadratic variation
$$ \langle f(X) \rangle_t = \int_0^t f'(X_s)^2\,d\langle X \rangle_s. $$
(ii) Let $X_t = M_t + A_t$, $t \ge 0$, where $M_t$ has quadratic variation $\langle M \rangle_t$ and $A_t$ has quadratic variation $\langle A \rangle_t = 0$. Then $X_t$ has quadratic variation $\langle X \rangle_t$ and $\langle X \rangle_t = \langle M \rangle_t$.
(iii) Let $f \in C^1(\mathbb{R})$ and assume $X_t$ has continuous quadratic variation $\langle X \rangle_t$. Then $M_t = \int_0^t f(X_s)\,dX_s$ has continuous quadratic variation
$$ \langle M \rangle_t = \int_0^t f(X_s)^2\,d\langle X \rangle_s. $$
(iv) Let $f \in C^1(\mathbb{R}^2)$ and assume $X_t$ has continuous quadratic variation $\langle X \rangle_t$. Then $g(t) = f(X_t, t)$ has continuous quadratic variation
$$ \langle g \rangle_t = \int_0^t \left(\frac{\partial f}{\partial x}(X_s, s)\right)^2 d\langle X \rangle_s. $$
Examples:
1. $(B_t)_{t\ge 0}$ BM, $\alpha > 0$, $Z_t = e^{\alpha B_t}$, $t \ge 0$. Then Lemma 7.6 (i) implies that $\langle Z \rangle_t = \int_0^t \alpha^2 e^{2\alpha B_s}\,ds = \int_0^t \alpha^2 Z_s^2\,ds$ $P$-a.s. Note that $\langle Z \rangle_t$ is random (whereas $\langle B \rangle_t = t$ for all $t$, $P$-a.s.).
2. $(B_t)_{t\ge 0}$ BM, $\alpha > 0$, $M_t = e^{\alpha B_t - \frac{1}{2}\alpha^2 t}$, $t \ge 0$. Then we know that $M_t = 1 + \alpha \int_0^t M_s\,dB_s$, $t \ge 0$. Hence Lemma 7.6 (iv) implies that $\langle M \rangle_t = \int_0^t \alpha^2 M_s^2\,ds$ $P$-a.s. Again, $\langle M \rangle_t$ is random.
Proof of Lemma 7.6:
(i) Write $\Delta_i X := X_{t_{i+1}} - X_{t_i}$. Then
$$ f(X_{t_{i+1}}) - f(X_{t_i}) = f'(X_{t_i})\Delta_i X + R_i $$
with
$$ |R_i| \le \sup_{s,u \in [t_i, t_{i+1}]} |f'(X_s) - f'(X_u)| \cdot |\Delta_i X|. $$
Hence
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} \big(f(X_{t_{i+1}}) - f(X_{t_i})\big)^2 = \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})^2 (\Delta_i X)^2 + \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} R_i^2 + 2 \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})\Delta_i X \cdot R_i. $$
But
$$ \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} f'(X_{t_i})^2 (\Delta_i X)^2 \to \int_0^t f'(X_s)^2\,d\langle X \rangle_s $$
due to Theorem 7.3. Rest: Exercise.
(ii) Write $\Delta_i M = M_{t_{i+1}} - M_{t_i}$, $\Delta_i A = A_{t_{i+1}} - A_{t_i}$. Then
$$ (\Delta_i X)^2 = (\Delta_i M)^2 + (\Delta_i A)^2 + 2\,\Delta_i M\, \Delta_i A. $$
Hence
$$ \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} (\Delta_i X)^2 = \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} (\Delta_i M)^2 + \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} (\Delta_i A)^2 + 2 \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} \Delta_i M\, \Delta_i A \to \langle M \rangle_t, $$
since
$$ \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} (\Delta_i A)^2 \to 0 \quad \text{and} \quad \sum_{\substack{i:\, t_i \in E_n \\ t_{i+1} \le t}} \Delta_i M\, \Delta_i A \to 0 $$
(details: exercise).
(iii) Due to (7.2), taking $g$ such that $g' = f$,
$$ M_t = g(X_t) - g(X_0) - \frac{1}{2} \int_0^t g''(X_s)\,d\langle X \rangle_s. $$
But $\int_0^t g''(X_s)\,d\langle X \rangle_s$ is continuous and BV as a function of $t$. Using (ii) and (i),
$$ \langle M \rangle_t = \langle g(X) \rangle_t = \int_0^t g'(X_s)^2\,d\langle X \rangle_s = \int_0^t f(X_s)^2\,d\langle X \rangle_s. $$
(iv) See [5], Remark 1.3.22. □
Theorem 7.7 If $(X_t)_{t\ge 0}$ is a continuous martingale with $X_0 = 0$ and $E[X_t^2] < \infty$ for all $t$, there exists a unique process $\langle X \rangle_t$ with $\langle X \rangle_0 = 0$ which is continuous, adapted and increasing, such that $X_t^2 - \langle X \rangle_t$ is a martingale. Moreover, $X_t$ has quadratic variation $\langle X \rangle_t$.
Proof: If $(B_t)_{t\ge 0}$ is a BM, $(H_t)_{t\ge 0}$ is progressively measurable and $X_t = \int_0^t H_s\,dB_s$, then $\langle X \rangle_t = \int_0^t H_s^2\,ds$ $P$-a.s. (see Lemma 5.4 and Lemma 7.6 (i)). General case: see literature. □
$\langle X \rangle_t$ is called the quadratic variation or bracket or compensator of $(X_t)$.
The first part of Theorem 7.7 has a discrete time analogue. A stochastic process $(Y_n)$ is previsible w.r.t. a filtration $(\mathcal{F}_n)$ if $Y_n$ is $\mathcal{F}_{n-1}$-measurable for all $n$.
Theorem 7.8 (Doob decomposition)
Suppose $(X_n)_{n=0,1,2,\ldots}$ is a martingale w.r.t. the filtration $(\mathcal{F}_n)$ and $E[X_n^2] < \infty$ for all $n$. Then there is a unique previsible increasing process $(A_n)$ with $A_0 = 0$ such that $(X_n^2 - A_n)_{n=0,1,2,\ldots}$ is a martingale w.r.t. $(\mathcal{F}_n)$.
Proof: Let $A_0 = 0$ and define, for $n \ge 1$, $A_n = A_{n-1} + E[X_n^2 \mid \mathcal{F}_{n-1}] - X_{n-1}^2$. Clearly, $A_n$ is $\mathcal{F}_{n-1}$-measurable for all $n$. Since $(X_n^2)_{n=0,1,2,\ldots}$ is a submartingale w.r.t. $(\mathcal{F}_n)$, $(A_n)$ is increasing. Further, $E[X_n^2 - A_n \mid \mathcal{F}_{n-1}] = E[X_n^2 \mid \mathcal{F}_{n-1}] - A_n = X_{n-1}^2 - A_{n-1}$, so $(X_n^2 - A_n)_{n=0,1,2,\ldots}$ is a martingale w.r.t. $(\mathcal{F}_n)$.
To prove the uniqueness, assume that $(A_n)$ and $(B_n)$ both fulfill the requirements. Then $A_n - B_n$ is a previsible martingale starting at $0$, and we infer that $A_n - B_n = E[A_n - B_n \mid \mathcal{F}_{n-1}] = A_{n-1} - B_{n-1}$, hence, by induction, $A_n = B_n$ for all $n$. □
Definition 7.9 A continuous semimartingale w.r.t. a filtration $(\mathcal{F}_t)_{t\ge 0}$ is a process $(X_t)_{t\ge 0}$ which is adapted to $(\mathcal{F}_t)_{t\ge 0}$ and which has a decomposition $X_t = X_0 + M_t + A_t$ for all $t \ge 0$, $P$-a.s., where $(M_t)_{t\ge 0}$ is a continuous martingale and $(A_t)_{t\ge 0}$ is a continuous adapted process which is BV (on each interval $[0, t]$).
If $E[M_t^2] < \infty$ for all $t$, we define for $f \in C^1(\mathbb{R})$
$$ \int_0^t f(s)\,dX_s := \int_0^t f(s)\,dM_s + \int_0^t f(s)\,dA_s. $$
8 Cross-variation and Ito's product rule

Definition 8.1 The cross-variation $\langle X, Y \rangle$ is given by
$$ \langle X, Y \rangle_t = \lim_{n\to\infty} \sum_{\substack{t_i \in E_n \\ t_{i+1} \le t}} (X_{t_{i+1}} - X_{t_i})(Y_{t_{i+1}} - Y_{t_i}) $$
(provided that the limit exists). Clearly, $\langle X, X \rangle = \langle X \rangle$.
Lemma 8.2 The following statements are equivalent:
(i) $\langle X, Y \rangle$ exists and $t \mapsto \langle X, Y \rangle_t$ is continuous.
(ii) $\langle X + Y \rangle$ exists and $t \mapsto \langle X + Y \rangle_t$ is continuous.
If (i) and (ii) hold, then
$$ \langle X, Y \rangle = \frac{1}{2}\big(\langle X + Y \rangle - \langle X \rangle - \langle Y \rangle\big). \qquad (8.1) $$
In particular, in this case $\int_0^t g(s)\,d\langle X, Y \rangle_s$ is well-defined for $g \in C[0,\infty)$.
Proof:
$$ (X_{t_{i+1}} - X_{t_i})(Y_{t_{i+1}} - Y_{t_i}) = \frac{1}{2}\Big(\big((X_{t_{i+1}} + Y_{t_{i+1}}) - (X_{t_i} + Y_{t_i})\big)^2 - (X_{t_{i+1}} - X_{t_i})^2 - (Y_{t_{i+1}} - Y_{t_i})^2\Big). \qquad \square $$
Example 8.3 Let $(X_t)_{t\ge 0}$ and $(Y_t)_{t\ge 0}$ be independent BMs.
Claim: For $P$-a.a. $\omega$,
$$ \langle X(\omega), Y(\omega) \rangle_t = 0 \quad \forall t. $$
Proof: By (8.1),
$$ \langle X, Y \rangle = \frac{1}{2}\big(\langle X + Y \rangle - \langle X \rangle - \langle Y \rangle\big), $$
and we have $\langle X \rangle_t = t$ and $\langle Y \rangle_t = t$ for all $t$, $P$-a.s. Hence it suffices to show $\langle X + Y \rangle_t = 2t$ for all $t$, $P$-a.s. Now $Z_t := \frac{1}{\sqrt 2}(X_t + Y_t)$, $t \ge 0$, is again a BM (proof: exercise). Therefore $\langle Z \rangle_t = t$ for all $t$, $P$-a.s., implying that $\langle X + Y \rangle_t = 2t$ for all $t$, $P$-a.s. □
Lemma 8.4 Let $X$, $\langle X \rangle$ be continuous as before, $f, g \in C^1(\mathbb{R})$,
$$ Y_t = \int_0^t f(X_s)\,dX_s, \qquad Z_t = \int_0^t g(X_s)\,dX_s. $$
Then
$$ \langle Y, Z \rangle_t = \int_0^t f(X_s)\, g(X_s)\,d\langle X \rangle_s. \qquad (8.2) $$
Proof: Define $F$ and $G$ by $F' = f$, $G' = g$, $F(0) = G(0) = 0$. Then, by (7.2), $Y_t = F(X_t) + A_t$ where
$$ A_t = -F(X_0) - \frac{1}{2} \int_0^t F''(X_s)\,d\langle X \rangle_s, $$
and $Z_t = G(X_t) + C_t$ where
$$ C_t = -G(X_0) - \frac{1}{2} \int_0^t G''(X_s)\,d\langle X \rangle_s. $$
$A_t$, $C_t$ are continuous with quadratic variation $0$. Hence, by Lemma 7.6 and (8.1),
$$ \langle Y, Z \rangle_t = \frac{1}{2}\big(\langle Y + Z \rangle_t - \langle Y \rangle_t - \langle Z \rangle_t\big) $$
$$ = \frac{1}{2}\left(\int_0^t (f(X_s) + g(X_s))^2\,d\langle X \rangle_s - \int_0^t f(X_s)^2\,d\langle X \rangle_s - \int_0^t g(X_s)^2\,d\langle X \rangle_s\right) $$
$$ = \int_0^t f(X_s)\, g(X_s)\,d\langle X \rangle_s. \qquad \square $$
Theorem 8.5 (Ito's formula in $d$ dimensions)
Let $X = (X^{(1)}, \ldots, X^{(d)}) : [0,\infty) \to \mathbb{R}^d$, where the $X^{(i)}$, $1 \le i \le d$, are continuous with continuous quadratic variations $\langle X^{(i)} \rangle_t$ and continuous cross-variations $\langle X^{(i)}, X^{(k)} \rangle_t$, $1 \le i, k \le d$. Let $f \in C^2(\mathbb{R}^d)$. Then
$$ f(X_t) - f(X_0) = \int_0^t (\nabla f, dX_s) + \frac{1}{2} \int_0^t \sum_{i,k=1}^{d} \frac{\partial^2 f}{\partial x_i \partial x_k}(X_s)\,d\langle X^{(i)}, X^{(k)} \rangle_s $$
where
$$ \int_0^t (\nabla f, dX_s) = \sum_{i=1}^{d} \int_0^t \frac{\partial f}{\partial x_i}(X_s)\,dX^{(i)}_s. $$
Proof: Analogous to the proof of Theorem 6.1: Taylor's formula for $f(X_{t_{i+1}}) - f(X_{t_i})$. See literature. □
Note that Theorem 6.8 follows, using Example 8.3.
Corollary 8.6 (Ito's product rule)
Assume that $X, Y, \langle X \rangle, \langle Y \rangle$ and $\langle X, Y \rangle$ are continuous. Then, for all $t > 0$,
$$ X_t \cdot Y_t = X_0 \cdot Y_0 + \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \langle X, Y \rangle_t. \qquad (8.3) $$
Short notation:
$$ d(X \cdot Y) = Y\,dX + X\,dY + d\langle X, Y \rangle. \qquad (8.4) $$
Proof: Apply Theorem 8.5 with $d = 2$, $f(x, y) = x \cdot y$. □
Example 8.7 (Ornstein-Uhlenbeck process)
Let $\alpha > 0$, $(B_t)_{t\ge 0}$ a BM, and $x_0 \in \mathbb{R}$. Then we say that
$$ X_t = e^{-\alpha t} x_0 + e^{-\alpha t} \int_0^t e^{\alpha s}\,dB_s, \quad t \ge 0, \qquad (8.5) $$
is an Ornstein-Uhlenbeck process with parameter $\alpha$ and starting point $x_0$.
Claim: $(X_t)_{t\ge 0}$ solves the stochastic differential equation
$$ dX_t = dB_t - \alpha X_t\,dt, \quad X_0 = x_0, $$
i.e.
$$ X_t = x_0 + B_t - \int_0^t \alpha X_s\,ds. $$
Proof:
$$ X_t = e^{-\alpha t} x_0 + e^{-\alpha t} \int_0^t e^{\alpha s}\,dB_s, $$
$$ dX_t = -\alpha x_0 e^{-\alpha t}\,dt - \alpha \left(e^{-\alpha t} \int_0^t e^{\alpha s}\,dB_s\right) dt + e^{-\alpha t} e^{\alpha t}\,dB_t = -\alpha X_t\,dt + dB_t, $$
where we applied Ito's product rule to $e^{-\alpha t} \int_0^t e^{\alpha s}\,dB_s$. □
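The representation (8.5) can be simulated directly. A small Python sketch (an illustration, not from the lecture; parameters and grid are arbitrary, and the stochastic integral is replaced by a left-point Riemann sum): for $x_0 = 0$, the Ito isometry gives $E[X_t^2] = e^{-2\alpha t}\int_0^t e^{2\alpha s}ds = (1 - e^{-2\alpha t})/(2\alpha)$.

```python
import random, math

random.seed(5)

# X_t = e^{-a t} * ∫_0^t e^{a s} dB_s (x_0 = 0) via a left-point sum;
# the Ito isometry predicts E[X_t^2] = (1 - e^{-2 a t}) / (2 a).
a, t, n, N = 1.0, 1.0, 200, 10_000
dt = t / n
second = 0.0
for _ in range(N):
    integ = 0.0
    for k in range(n):
        integ += math.exp(a * k * dt) * random.gauss(0.0, dt**0.5)
    x = math.exp(-a * t) * integ
    second += x * x
second /= N
target = (1.0 - math.exp(-2.0 * a * t)) / (2.0 * a)
print(round(second, 3))
```

As $t \to \infty$ the second moment approaches $\frac{1}{2\alpha}$, the stationary variance appearing in the next claim.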
Claim: If $X_0 \stackrel{d}{=} N(0, \frac{1}{2\alpha})$, $X_0$ independent of $(B_t)_{t\ge 0}$, then
$$ X_t = e^{-\alpha t} X_0 + e^{-\alpha t} \int_0^t e^{\alpha s}\,dB_s $$
is a Gaussian process with $E[X_t] = 0$ for all $t$ and $\mathrm{Cov}(X_s, X_t) = \frac{1}{2\alpha} e^{-\alpha|t-s|}$. Hence, for any $t$, $E[X_t^2] = \frac{1}{2\alpha}$, so $X_t \stackrel{d}{=} N(0, \frac{1}{2\alpha})$.
In particular, for the process $Z_t = e^{-t} B_{e^{2t}}$, $t \ge 0$, in Example 2.4, $(\frac{1}{\sqrt 2} Z_t)_{t\ge 0}$ is an Ornstein-Uhlenbeck process with $\alpha = 1$ and $X_0 \stackrel{d}{=} N(0, \frac{1}{2})$.
Proof: Clearly, $E[X_t] = 0$ for all $t$. Assume $s \le t$. Then
$$ E[X_s X_t] = E[X_0^2]\, e^{-\alpha t} e^{-\alpha s} + 0 + 0 + e^{-\alpha(s+t)} E\left[\int_0^t e^{\alpha u}\,dB_u \int_0^s e^{\alpha v}\,dB_v\right]. $$
For any martingale $(M_t)_{t\ge 0}$, $E[M_t M_s] = E[M_s^2]$ for $s \le t$ (proof: exercise). Hence, applying this to the martingale $M_t = \int_0^t e^{\alpha u}\,dB_u$,
$$ E[X_s X_t] = \frac{1}{2\alpha} e^{-\alpha(t+s)} + e^{-\alpha(t+s)} E\left[\left(\int_0^s e^{\alpha v}\,dB_v\right)^2\right] $$
$$ = \frac{1}{2\alpha} e^{-\alpha(t+s)} + e^{-\alpha(t+s)} \int_0^s e^{2\alpha v}\,dv = \frac{1}{2\alpha} e^{-\alpha(t+s)} + e^{-\alpha(t+s)} \frac{1}{2\alpha}\big(e^{2\alpha s} - 1\big) = \frac{1}{2\alpha} e^{-\alpha(t-s)}. \qquad \square $$
9 Stochastic Differential Equations

In this chapter we want to study stochastic differential equations (SDE) of the form
$$ X_0 = \xi, \qquad dX_t = \sigma(t, X_t) \cdot dB_t + b(t, X_t)\,dt. \qquad (9.1) $$
Here $(X_t)_{t\ge 0} = (X^{(1)}_t, X^{(2)}_t, \ldots, X^{(n)}_t)_{t\ge 0}$ is an unknown $\mathbb{R}^n$-valued process, $(B_t)_{t\ge 0} = (B^{(1)}_t, B^{(2)}_t, \ldots, B^{(m)}_t)_{t\ge 0}$ is an $m$-dimensional Brownian motion, and $b(t, x)$ and $\sigma(t, x)$ are measurable functions of $(t, x) \in \mathbb{R}_+ \times \mathbb{R}^n$. The drift vector $b(t, x)$ is $\mathbb{R}^n$-valued and the dispersion matrix $\sigma(t, x)$ is $n \times m$-matrix valued. Further, $\xi$ is a random variable with values in $\mathbb{R}^n$ which is independent of $(B_t)_{t\ge 0}$.
Definition 9.1 We say the SDE (9.1) has the strong solution $(X_t)_{t\ge 0}$ if the following conditions hold:
(i) $(X_t)_{t\ge 0}$ is adapted to the filtration $(\mathcal{F}^\xi_t)_{t\ge 0}$, where $\mathcal{F}^\xi_t$ is the completion of $\sigma(\xi, B_s : s \le t)$ for $t \ge 0$.
(ii) $(X_t)_{t\ge 0}$ satisfies the integral equations
$$ X^{(i)}_t = \xi^{(i)} + \sum_{j=1}^{m} \int_0^t \sigma_{ij}(s, X_s)\,dB^{(j)}_s + \int_0^t b_i(s, X_s)\,ds \qquad (9.2) $$
for $t \ge 0$, $i = 1, 2, \ldots, n$.
Remark 9.2 (1) Processes which fulfill the integral equations in (9.2) and are defined on a possibly enlarged probability space, but do not have to be adapted to the filtration $(\mathcal{F}^\xi_t)_{t\ge 0}$, are called weak solutions.
(2) The second property of a solution of an SDE includes the requirement that the integrals in (9.2) are well-defined.

Example 9.3 For $\alpha, \beta \in \mathbb{R}$ consider the SDE
$$ X_0 = 1, \qquad dX_t = \alpha X_t\,dB_t + \beta X_t\,dt $$
for a one-dimensional BM $(B_t)_{t\ge 0}$. This SDE has the (unique) strong solution
$$ X_t = \exp\left(\alpha B_t + \Big(\beta - \frac{\alpha^2}{2}\Big) t\right). $$
Proof: exercise.

Remark The SDE from Example 9.3 is used for the Black-Scholes model.
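The explicit solution of Example 9.3 can be compared with a numerical scheme. The sketch below uses the Euler-Maruyama method (the standard discretisation obtained by freezing the coefficients on each grid interval; it is not discussed in the lecture, and all parameters are arbitrary choices), driven by the same Brownian increments as the exact solution:

```python
import random, math

random.seed(6)

# Euler-Maruyama for dX = a*X dB + b*X dt, X_0 = 1, versus the exact
# solution X_t = exp(a*B_t + (b - a^2/2) t) built from the same increments.
a, b, t, n = 0.3, 0.1, 1.0, 2000
dt = t / n
x, Bt = 1.0, 0.0
for _ in range(n):
    dB = random.gauss(0.0, dt**0.5)
    x += a * x * dB + b * x * dt    # one Euler-Maruyama step
    Bt += dB
exact = math.exp(a * Bt + (b - a * a / 2.0) * t)
print(round(abs(x - exact), 3))
```

The discrepancy along one path is small and shrinks as the grid is refined; the correction term $-\frac{\alpha^2}{2}t$ in the exact solution is exactly the Ito term of Example 6.6.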
Remark 9.4 In the following theorem we use
$$ \|x\|^2 = \sum_{i=1}^{n} (x_i)^2 \text{ for } x = (x_1, \ldots, x_n) \in \mathbb{R}^n, \qquad \|\sigma\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{m} (\sigma_{ij})^2 \text{ for } \sigma = (\sigma_{ij})_{ij} \in \mathbb{R}^{n\times m}. $$
Theorem 9.5 (Existence and uniqueness)
Assume that $b : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$ and $\sigma : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^{n\times m}$ are measurable and satisfy the Lipschitz condition
$$ \|\sigma(t, x) - \sigma(t, y)\| + \|b(t, x) - b(t, y)\| \le K \cdot \|x - y\| \qquad (9.3) $$
as well as the growth condition
$$ \|\sigma(t, x)\|^2 + \|b(t, x)\|^2 \le K^2 \cdot (1 + \|x\|^2) \qquad (9.4) $$
for a constant $K > 0$, for all $t \ge 0$ and $x, y \in \mathbb{R}^n$. Let $\xi$ be a random variable with values in $\mathbb{R}^n$ which is independent of the $m$-dimensional BM $(B_t)_{t\ge 0}$ such that $E[\|\xi\|^2] < \infty$. Then the SDE (9.1) has a unique, continuous, strong solution $(X_t)_{t\ge 0}$ such that
$$ E\left[\int_0^T \|X_t\|^2\,dt\right] < \infty \quad \forall T > 0. \qquad (9.5) $$
Remark 9.6 Uniqueness in Theorem 9.5 means that for two continuous solutions $(X_t)_{t\ge 0}$, $(X'_t)_{t\ge 0}$ of the SDE (9.1) which fulfill the properties of Theorem 9.5 we have
$$ P(X_t = X'_t \;\forall t \ge 0) = 1. $$
Example 9.7 To illustrate that we need conditions like (9.3) and (9.4), let us look at the following examples from ODEs:
$$ \frac{dX_t}{dt} = (X_t)^2, \quad X_0 = 1, $$
corresponding to $b(x) = x^2$ (which does not satisfy (9.4)), has the unique solution
$$ X_t = \frac{1}{1 - t} \quad \text{for } 0 \le t < 1, $$
which blows up at $t = 1$. Another example:
$$ \frac{dX_t}{dt} = 3(X_t)^{2/3}, \quad X_0 = 0, $$
where $b(x) = 3x^{2/3}$ does not satisfy (9.3) at $x = 0$, and
$$ X_t = \begin{cases} 0 & \text{for } t \le a, \\ (t - a)^3 & \text{for } t > a, \end{cases} $$
are solutions for all $a > 0$, so uniqueness fails.
Sketch of the proof of Theorem 9.5:
Uniqueness: uses the following lemma.

Lemma 9.8 (Gronwall inequality)
Let $f : [0, T] \to \mathbb{R}$ be integrable and $A \in \mathbb{R}$, $C > 0$ such that
$$ f(t) \le A + C \cdot \int_0^t f(s)\,ds \quad \forall t \in [0, T]. $$
Then
$$ f(t) \le A \cdot e^{Ct} \quad \forall t \in [0, T]. $$
Proof: see literature.
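The bound is sharp: $f(t) = A e^{Ct}$ satisfies the integral inequality with equality. A tiny discrete illustration in Python (an aside, not from the lecture; the grid and constants are arbitrary choices):

```python
import math

# Discrete illustration of Gronwall: f(t) = A*exp(C*t) satisfies
# f(t) = A + C * ∫_0^t f(s) ds exactly, and a right-point sum (which
# overestimates the integral of an increasing f) keeps the inequality.
A, C, T, n = 2.0, 1.5, 1.0, 10_000
dt = T / n
f = [A * math.exp(C * k * dt) for k in range(n + 1)]
integral, ok = 0.0, True
for k in range(n + 1):
    integral += f[k] * dt        # right-point sum up to time k*dt
    if f[k] > A + C * integral + 1e-9:
        ok = False
print(ok)  # True
```

Any $f$ satisfying the hypothesis is dominated pointwise by this extremal case.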
Consider two continuous solutions $(X_t)_{t\ge 0}$, $(X'_t)_{t\ge 0}$ of the SDE (9.1) which fulfill condition (9.5). Then
$$ X_t - X'_t = \int_0^t \big(\sigma(s, X_s) - \sigma(s, X'_s)\big) \cdot dB_s + \int_0^t \big(b(s, X_s) - b(s, X'_s)\big)\,ds. $$
Therefore (using $(a + b)^2 \le 2a^2 + 2b^2$),
$$ \|X_t - X'_t\|^2 \le 2\left\|\int_0^t \big(\sigma(s, X_s) - \sigma(s, X'_s)\big) \cdot dB_s\right\|^2 + 2\left\|\int_0^t \big(b(s, X_s) - b(s, X'_s)\big)\,ds\right\|^2. $$
A short calculation for the first term (see Theorem 5.6 for $n = m = 1$) and an application of the Cauchy-Schwarz inequality to the second shows
$$ E[\|X_t - X'_t\|^2] \le 2 \int_0^t E\big[\|\sigma(s, X_s) - \sigma(s, X'_s)\|^2\big]\,ds + 2t \int_0^t E\big[\|b(s, X_s) - b(s, X'_s)\|^2\big]\,ds. $$
Therefore, for $f(t) := E[\|X_t - X'_t\|^2]$ and $C := 2(T + 1)K^2$ (for $T > 0$), condition (9.3) gives
$$ f(t) \le C \cdot \int_0^t f(s)\,ds \quad \text{for } 0 \le t \le T, $$
and the Gronwall inequality (with $A = 0$) implies $f \equiv 0$. Due to the continuity of $(X_t)_{t\ge 0}$, $(X'_t)_{t\ge 0}$ we can conclude that
$$ P(X_t = X'_t \;\forall t \ge 0) = 1 $$
(using the fact that $\mathbb{Q}_+$ is countable and dense in $\mathbb{R}_+$).
Existence: We define inductively, for $n \in \mathbb{N}_0$,
$$ X^{(n+1)}_t := \xi + \int_0^t \sigma(s, X^{(n)}_s) \cdot dB_s + \int_0^t b(s, X^{(n)}_s)\,ds \quad \text{for } t \ge 0, $$
where $X^{(0)}_t := \xi$. Using the growth condition we can show inductively that for $T > 0$
$$ \int_0^T E\big[\|X^{(n)}_s\|^2\big]\,ds < \infty, $$
i.e. the stochastic integral is well-defined in every step of the recursion. One can show that $(X^{(n)})_{n\in\mathbb{N}_0}$ converges a.s. uniformly on $[0, T]$ for every $T > 0$ and that the limit solves the SDE (9.1) and has the properties of Theorem 9.5 (more details can be found in the literature). □
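This Picard iteration can be carried out numerically along one simulated Brownian path. A sketch (an illustration, not from the lecture), specialised to $\sigma \equiv 1$, $b(t, x) = -\alpha x$ (the Ornstein-Uhlenbeck SDE of Example 8.7), with all grid sizes and parameters chosen arbitrarily:

```python
import random

random.seed(7)

# Picard iteration X^(m+1)_t = x0 + B_t + ∫_0^t (-a * X^(m)_s) ds for the
# OU equation dX = -a*X dt + dB (sigma = 1), discretised on a grid of [0,1].
a, x0, n = 1.0, 1.0, 1000
dt = 1.0 / n
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, dt**0.5))

X = [x0] * (n + 1)               # X^(0)_t = xi = x0
sups = []
for _ in range(12):
    integral, newX = 0.0, [x0]
    for k in range(n):
        integral += (-a * X[k]) * dt       # ∫_0^t b(s, X^(m)_s) ds
        newX.append(x0 + B[k + 1] + integral)
    sups.append(max(abs(u - v) for u, v in zip(newX, X)))
    X = newX
print(sups[-1] < sups[0])
```

The sup-distances between successive iterates decay factorially, as in the contraction estimate behind Theorem 9.5.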
10 Girsanov transforms

Goal: construction of a stochastic process $(X_t)_{t\ge 0}$ with
$$ dX_t = b(X_t, t)\,dt + dB_t, \quad X_0 = x_0, $$
where $(B_t)_{t\ge 0}$ is a BM, i.e.
$$ X_t = x_0 + \int_0^t b(X_s, s)\,ds + B_t. \qquad (10.1) $$
Interpretation: a deterministic process $X_t$ with $\frac{d}{dt} X_t = b(X_t, t)$, with additional "noise" $(B_t)_{t\ge 0}$.
Possible strategies:
1) Construction of a "strong" solution: for a given BM $(B_t)_{t\ge 0}$ on $(\Omega, \mathcal{A}, P)$, solve (10.1).
Example 10.1 (Ornstein-Uhlenbeck process)
$$ dX_t = dB_t - \alpha X_t\,dt, \quad X_0 = x_0 $$
has the strong solution
$$ X_t = e^{-\alpha t}\left(x_0 + \int_0^t e^{\alpha s}\,dB_s\right), \qquad (10.2) $$
see Example 8.7.
Drawback: a strong solution does not always exist.
2) Construction of a "weak" solution: find a BM $(B_t)_{t\ge 0}$ and a process $(X_t)_{t\ge 0}$ on some probability space $(\Omega, \mathcal{A}, P)$ such that (10.1) holds, i.e. find $(X_t)_{t\ge 0}$ such that
$$ B_t = X_t - x_0 - \int_0^t b(X_s, s)\,ds $$
is a BM.
General Girsanov transformation
Let $(\Omega, \mathcal{A}, P)$ be a probability space, $(\mathcal{F}_t)_{t\ge 0}$ a filtration, and $\tilde P$ a probability measure on $(\Omega, \mathcal{A})$. Assume that for all $t$, $\tilde P|_{\mathcal{F}_t} \ll P|_{\mathcal{F}_t}$. Then there are Radon-Nikodym derivatives
$$ Z_t := \frac{d\tilde P}{dP}\Big|_{\mathcal{F}_t} = \frac{d\tilde P|_{\mathcal{F}_t}}{dP|_{\mathcal{F}_t}} \qquad (10.3) $$
and $(Z_t)_{t\ge 0}$ is a martingale with respect to $(\mathcal{F}_t)_{t\ge 0}$ and $P$ (see Probability Theory lecture notes, 14.1.3). We always assume that $(Z_t)_{t\ge 0}$ has continuous paths and that
$$ \inf\{t \ge 0 : Z_t(\omega) = 0\} = \infty \quad P\text{-a.s.} $$

Definition 10.2 $(M_t)_{t\ge 0}$ is a local martingale (up to $\infty$) if there is a sequence of stopping times $T_1 \le T_2 \le \ldots$ such that
i) $\sup_n T_n = \infty$ $P$-a.s.
ii) $(M_{t \wedge T_n})_{t\ge 0}$ is a martingale for every $n$.
Each martingale is a local martingale, but there are local martingales which are not martingales.
We will use the following important fact: if $M$ is a continuous local martingale and $(Y_t)_{t\ge 0}$ a continuous adapted process, then $\int_0^t Y_s\,dM_s$, $t \ge 0$, is again a continuous local martingale. See the literature, for instance [5], Proposition 1.4.29.
Lemma 10.3 In the above setup, with $Z_t := \frac{d\tilde P}{dP}\big|_{\mathcal{F}_t}$, the following holds.
(i) For $s \le t$ and a function $g_t$ which is $\mathcal{F}_t$-measurable and bounded, we have
$$ \tilde E[g_t \mid \mathcal{F}_s] = \frac{1}{Z_s} E[g_t Z_t \mid \mathcal{F}_s] \quad P\text{-a.s.}, $$
where $\tilde E$ denotes expectation with respect to $\tilde P$ and $E$ with respect to $P$.
(ii) For $M = (M_t)_{t\ge 0}$ continuous and adapted, the following two statements are equivalent:
a) $(M_t)_{t\ge 0}$ is a local martingale with respect to $\tilde P$.
b) $(M_t Z_t)_{t\ge 0}$ is a local martingale with respect to $P$.
Proof:
(i) Assume $g_s$ is $\mathcal{F}_s$-measurable and bounded. Then
$$ \tilde E[g_s g_t] = E[g_s g_t Z_t] = E\big[g_s E[g_t Z_t \mid \mathcal{F}_s]\big] = E\left[g_s Z_s \cdot \frac{1}{Z_s} E[g_t Z_t \mid \mathcal{F}_s]\right] = \tilde E\left[g_s \cdot \frac{1}{Z_s} E[g_t Z_t \mid \mathcal{F}_s]\right] $$
$$ \Rightarrow\; \tilde E[g_t \mid \mathcal{F}_s] = \frac{1}{Z_s} E[g_t Z_t \mid \mathcal{F}_s]. $$
(ii) Assume that $(M_t Z_t)_{t\ge 0}$ is a martingale with respect to $P$. Then, by (i),
$$ \tilde E[M_t \mid \mathcal{F}_s] = \frac{1}{Z_s} E[M_t Z_t \mid \mathcal{F}_s] = \frac{1}{Z_s} M_s Z_s = M_s \quad P\text{-a.s.}, $$
so $(M_t)_{t\ge 0}$ is a martingale with respect to $\tilde P$; hence b) $\Rightarrow$ a) for martingales. Rest of the proof: exercise. □
Theorem 10.4 Assume that $(M_t)$ is a continuous local martingale with respect to $P$ and $Z_t$ is defined as in (10.3). Then
$$ \tilde M_t = M_t - \int_0^t \frac{1}{Z_s}\,d\langle M, Z \rangle_s $$
is a continuous local martingale with respect to $\tilde P$.
Short notation:
$$ dM = d\tilde M + \frac{1}{Z}\,d\langle M, Z \rangle. $$
Proof: Due to Lemma 10.3, it suffices to show that $(\tilde M_t Z_t)_{t\ge 0}$ is a continuous local martingale with respect to $P$. Let $A_t = \int_0^t \frac{1}{Z_s}\,d\langle M, Z \rangle_s$. Then, by Ito's product rule (since $A$ is BV, $\langle Z, M - A \rangle = \langle Z, M \rangle$),
$$ \tilde M_t Z_t = Z_t(M_t - A_t) = Z_0 M_0 + \int_0^t (M_s - A_s)\,dZ_s + \int_0^t Z_s\,dM_s - \int_0^t Z_s\,dA_s + \langle Z, M \rangle_t. \qquad (10.4) $$
The definition of $A$ implies that
$$ \int_0^t Z_s\,dA_s = \langle Z, M \rangle_t. $$
Due to (10.4), $\tilde M_t Z_t - M_0 Z_0$ is a sum of stochastic integrals with respect to continuous local martingales, hence again a continuous local martingale with respect to $P$. □
Remark 10.5 By Ito's formula for the logarithm,
$$ \log Z_t = \log Z_0 + \int_0^t \frac{1}{Z_s}\,dZ_s - \frac{1}{2} \int_0^t \frac{1}{Z_s^2}\,d\langle Z \rangle_s. $$
Since $Z$ is a continuous local martingale, $Y_t = \int_0^t \frac{1}{Z_s}\,dZ_s$ is a continuous local martingale with $\langle Y \rangle_t = \int_0^t \frac{1}{Z_s^2}\,d\langle Z \rangle_s$, hence
$$ \log Z_t = \log Z_0 + Y_t - \frac{1}{2}\langle Y \rangle_t \;\Rightarrow\; Z_t = Z_0\, e^{Y_t - \frac{1}{2}\langle Y \rangle_t}, $$
and $Z$ solves $dZ = Z\,dY$. Using $\langle M, Z \rangle_t = \int_0^t Z_s\,d\langle M, Y \rangle_s$ (see Lemma 10.6 below), we get from Theorem 10.4 that
$$ dM = d\tilde M + d\langle M, Y \rangle. \qquad (10.5) $$
Lemma 10.6 (Generalization of Lemma 8.4)
Assume that $M$ and $Y$ are local martingales with continuous quadratic variations $\langle M\rangle$ and $\langle Y\rangle$, and $f, g \in C^2(\mathbb R^2)$. Let
\[
V_t = \int_0^t f(M_s, s)\, dM_s, \qquad R_t = \int_0^t g(Y_s, s)\, dY_s .
\]
Then,
\[
\langle V, R\rangle_t = \int_0^t f(M_s, s)\, g(Y_s, s)\, d\langle M, Y\rangle_s .
\]
Proof: See literature. For $M = Y$, we recover Lemma 8.4. □
Now, consider
\[
Z_t = Z_0 + \int_0^t Z_s\, dY_s, \qquad M_t = M_0 + \int_0^t 1\, dM_s .
\]
Lemma 10.6 yields
\[
\langle M, Z\rangle_t = \int_0^t Z_s\, d\langle M, Y\rangle_s .
\]
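A numerical sanity check of the covariation formula (a sketch we add, not part of the lecture; the choice $f = \cos$ and the grid size are arbitrary): for $V_t = \int_0^t f(M_s)\,dM_s$ with $M$ a discretized Brownian path, the sum of increment products $\sum \Delta V\, \Delta M$ should approximate $\int_0^1 f(M_s)\, d\langle M\rangle_s = \int_0^1 f(M_s)\, ds$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
dt = 1.0 / n
f = np.cos  # any smooth bounded test function (our arbitrary choice)

# Discretized Brownian path on [0, 1]: increments and left endpoints
dM = rng.standard_normal(n) * np.sqrt(dt)
M = np.concatenate(([0.0], np.cumsum(dM)))[:-1]

# Increments of V_t = int_0^t f(M_s) dM_s, then the cross-variation sum
dV = f(M) * dM
cross = np.sum(dV * dM)     # discretization of <V, M>_1
target = np.sum(f(M)) * dt  # Riemann sum for int_0^1 f(M_s) ds
print(cross, target)
```

For $M = Y$ and $g \equiv 1$ this is exactly the Lemma 8.4 situation used in the proof of Theorem 10.7 below.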
Girsanov transform on Wiener space
Let $(X_t)_{0 \le t \le 1}$ be a BM with respect to $P$.
Theorem 10.7 (Girsanov)
Assume $(b_t)_{0 \le t \le 1}$ is progressively measurable and adapted, and $E[\int_0^1 b_s^2\, ds] < \infty$. Let
\[
Z_1 := e^{\int_0^1 b_s\, dX_s - \frac{1}{2}\int_0^1 b_s^2\, ds}
\]
and assume
\[
E[Z_1] = 1 . \tag{10.6}
\]
Define $\tilde P$ by $\frac{d\tilde P}{dP}\big|_{\mathcal F_1} = Z_1$.
Then, $B_t = X_t - \int_0^t b_s\, ds$, $0 \le t \le 1$, is a BM with respect to $\tilde P$.
For the proof, we will need
Theorem 10.8 (Lévy's characterization of BM)
Assume $(B_t)$ is a continuous local martingale with quadratic variation $\langle B\rangle_t = t$. Then, $(B_t)_{t \ge 0}$ is a BM.
Proof: We use characteristic functions: a probability measure $\mu$ on $\mathbb R$ is characterized by its Fourier transform
\[
\hat\mu(u) = \int e^{iux}\, \mu(dx) .
\]
(Example: $\mu = N(0, \sigma^2)$, $\hat\mu(u) = e^{-\frac{1}{2}\sigma^2 u^2}$.)
We show that $B_t - B_s$ is independent of $\mathcal F_s$, with law $N(0, t-s)$. Itô's formula for $f(x) = e^{iux}$ (verify by separating real and imaginary parts) yields, using $\langle B\rangle_r = r$,
\[
e^{iuB_t} - e^{iuB_s} = \int_s^t iu\, e^{iuB_r}\, dB_r + \frac{1}{2}\int_s^t (-u^2)\, e^{iuB_r}\, dr .
\]
Divide by $e^{iuB_s}$ and take conditional expectations w.r.t. $\mathcal F_s$:
\[
E[e^{iu(B_t - B_s)} \mid \mathcal F_s] - 1 = E\Bigl[\int_s^t iu\, e^{iu(B_r - B_s)}\, dB_r \,\Big|\, \mathcal F_s\Bigr] - \frac{1}{2}\, E\Bigl[\int_s^t u^2\, e^{iu(B_r - B_s)}\, dr \,\Big|\, \mathcal F_s\Bigr] .
\]
The integrand of the stochastic integral is bounded, so the first conditional expectation on the right vanishes. Take $A \in \mathcal F_s$. Then,
\[
E[e^{iu(B_t - B_s)}\, 1_A] - P[A] = -\frac{1}{2} u^2 \int_s^t E[e^{iu(B_r - B_s)}\, 1_A]\, dr .
\]
Let $g(t) := E[e^{iu(B_t - B_s)}\, 1_A]$. Then, $g(t) - g(s) = -\frac{1}{2} u^2 \int_s^t g(r)\, dr$, which implies $g(t) = g(s) \exp(-\frac{1}{2} u^2 (t-s))$. Since $g(s) = P[A]$,
\[
E[e^{iu(B_t - B_s)}\, 1_A] = P[A]\, e^{-\frac{1}{2} u^2 (t-s)} = P[A]\, \hat\mu(u), \qquad \mu = N(0, t-s), \quad \forall A \in \mathcal F_s .
\]
This implies that $B_t - B_s$ is independent of $\mathcal F_s$ with law $N(0, t-s)$. □
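Both ingredients of Lévy's characterization can be illustrated numerically (our sketch; the path counts and test frequency $u = 1.3$ are arbitrary): for a scaled random walk approximating BM on $[0,1]$, the sum of squared increments concentrates at $\langle B\rangle_1 = 1$, and the empirical Fourier transform of $B_1$ is close to $e^{-u^2/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps

# Increments of a discretized BM on [0, 1], one path per row
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)

# Quadratic variation: sum of squared increments, concentrating at <B>_1 = 1
qv_mean = (dB**2).sum(axis=1).mean()

# Empirical Fourier transform of B_1 = sum of increments vs. that of N(0, 1)
u = 1.3
phi = np.mean(np.exp(1j * u * dB.sum(axis=1)))
print(qv_mean, abs(phi - np.exp(-0.5 * u**2)))
```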
Proof of Thm. 10.7: We have $\frac{d\tilde P}{dP}\big|_{\mathcal F_t} = Z_t = e^{\int_0^t b_s\, dX_s - \frac{1}{2}\int_0^t b_s^2\, ds}$, hence $Z_t = e^{Y_t - \frac{1}{2}\langle Y\rangle_t}$ with $Y_t = \int_0^t b_s\, dX_s$. Due to (10.5), $dX_t = dB_t + d\langle X, \int_0^\cdot b_s\, dX_s\rangle_t$, where $(B_t)_{t \ge 0}$ is a continuous local martingale w.r.t. $\tilde P$.
Since $\langle X, \int_0^\cdot b_s\, dX_s\rangle_t = \int_0^t b_s\, ds$ (Lemma 8.4), we have $dX_t = dB_t + b_t\, dt$. Hence, $P$-a.s., $\langle B\rangle_t = t$ and $X_t = B_t + \int_0^t b_s\, ds$ $\Rightarrow$ $\tilde P$-a.s. $\langle B\rangle_t = t$ for all $t$, since $\tilde P \ll P$.
Hence, Lévy's characterization of BM implies that $(B_t)_{t \ge 0}$ is a BM w.r.t. $\tilde P$. □
Remark 10.9 Thm. 10.7 also applies to $b_t = b(X_u,\, u \le t)$ and yields a weak solution of $dX_t = dB_t + b_t\, dt$, i.e. $b_t$ can depend on the whole "history" up to time $t$. The integrability condition $E[\int_0^1 b_s^2\, ds] < \infty$ and (10.6) are only weak regularity assumptions.
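The weak-solution reading can be sketched with an Euler scheme (our construction; the history-dependent drift functional below is a hypothetical example): at each step, the drift may look at the entire simulated path so far, exactly as $b_t = b(X_u, u \le t)$ allows.

```python
import numpy as np

def euler_path(dB, drift):
    """Euler scheme for dX = dB + b_t dt, where drift(past, t) may use the
    whole simulated history past = (X_0, ..., X_k) up to the current step."""
    n = len(dB)
    dt = 1.0 / n
    x = np.zeros(n + 1)
    for k in range(n):
        x[k + 1] = x[k] + dB[k] + drift(x[: k + 1], k * dt) * dt
    return x

rng = np.random.default_rng(3)
dB = rng.standard_normal(1000) * np.sqrt(1.0 / 1000)

# Hypothetical history-dependent drift: pull X toward its running maximum
path = euler_path(dB, lambda past, t: np.tanh(past.max() - past[-1]))
print(path[-1])
```

With drift $\equiv 0$ the scheme returns the driving Brownian path itself; for a given functional $b$, the integrability condition and (10.6) still have to be checked separately.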
Acknowledgements: We thank Michael Kochler and Yaroslav Yevmenenko-Shul’ts for
many corrections.
References
[1] Ioannis Karatzas and Steven Shreve: Brownian Motion and Stochastic Calculus, Springer, 2007.
[2] Achim Klenke: Probability Theory, Springer, 2008.
[3] Thomas Liggett: Continuous Time Markov Processes: An Introduction, American Mathematical Society, 2010.
[4] Peter Mörters and Yuval Peres: Brownian Motion, Cambridge University Press, 2010.
[5] Michael Röckner: Introduction to Stochastic Analysis. Available online at http://www.math.uni-bielefeld.de/~roeckner/teaching1112/stochana.pdf