8/4/2019 Measure, Problems and Solutions
http://slidepdf.com/reader/full/measure-problems-and-solutions 1/31
Measure Theory and
Filtering: Introduction and Applications
Lakhdar Aggoun and Robert J. Elliott
Solutions to Selected Problems
Chapter 1

Problem 1. Let {F_i}_{i∈I} be a family of σ-fields on Ω. Prove that ⋂_{i∈I} F_i is a σ-field.
Solution:
(i) Ω belongs to every F_i, i ∈ I, and hence Ω ∈ ⋂_{i∈I} F_i.
(ii) If A_1, A_2, . . . is a countable sequence in ⋂_{i∈I} F_i, then A_n ∈ F_i for every i ∈ I and every n. Therefore ⋃_n A_n ∈ F_i for every i ∈ I, and so ⋃_n A_n ∈ ⋂_{i∈I} F_i.
(iii) Let A ∈ ⋂_{i∈I} F_i; then A ∈ F_i for all i ∈ I, which implies Aᶜ ∈ F_i for all i ∈ I, and hence Aᶜ ∈ ⋂_{i∈I} F_i.
Therefore ⋂_{i∈I} F_i is a σ-field.
Problem 2. Let A and B be two events. Express, by means of the indicator functions of A and B: I_{A∪B}, I_{A∩B}, I_{A−B}, I_{B−A}, I_{(A−B)∪(B−A)}, where A − B = A ∩ Bᶜ.
Solution:
The following equalities are easily checked:
I_{A∪B} = I_A + I_B − I_{A∩B},
I_{A∩B} = I_A I_B,
I_{A−B} = I_A − I_{A∩B},
I_{B−A} = I_B − I_{A∩B},
I_{(A−B)∪(B−A)} = I_{A−B} + I_{B−A} = I_A − I_{A∩B} + I_B − I_{A∩B} = I_A + I_B − 2 I_A I_B.
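These identities can be verified numerically on any finite sample space; the following sketch uses two arbitrary sets A and B (they are illustrative choices, not taken from the text):

```python
# Numerical check of the indicator-function identities of Problem 2
# on a small finite sample space; A and B are arbitrary example sets.
omega = range(12)
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def ind(S, w):
    # indicator function I_S(w)
    return 1 if w in S else 0

sym_diff = (A - B) | (B - A)   # (A - B) ∪ (B - A)
for w in omega:
    assert ind(A | B, w) == ind(A, w) + ind(B, w) - ind(A & B, w)
    assert ind(A & B, w) == ind(A, w) * ind(B, w)
    assert ind(A - B, w) == ind(A, w) - ind(A & B, w)
    assert ind(sym_diff, w) == ind(A, w) + ind(B, w) - 2 * ind(A, w) * ind(B, w)
checked = True
```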
Problem 3. Let Ω = ℝ and define the sequences
C_{2n} = [−1, 2 + 1/(2n)) and C_{2n+1} = [−2 − 1/(2n+1), 1).
Show that lim sup C_n = [−2, 2] and lim inf C_n = [−1, 1).
Solution:
Recall that lim sup C_n = ⋂_{n≥1} ⋃_{k≥n} C_k.
However,
⋃_{k≥n} C_k = ⋃_{k≥n} [−1, 2 + 1/(2k)) ∪ ⋃_{k≥n} [−2 − 1/(2k+1), 1)
= [−1, 2 + 1/(2n)) ∪ [−2 − 1/(2n+1), 1) = [−2 − 1/(2n+1), 2 + 1/(2n)),
and
lim sup C_n = ⋂_{n≥1} [−2 − 1/(2n+1), 2 + 1/(2n)) = [−2, 2].
Similarly,
lim inf C_n = ⋃_{n≥1} ⋂_{k≥n} C_k = ⋃_{n≥1} ([−2, 1) ∩ [−1, 2]) = [−1, 1).
Problem 4. Let Ω = {ω1, ω2, ω3, ω4} and P(ω1) = 1/12, P(ω2) = 1/6, P(ω3) = 1/3, and P(ω4) = 5/12. Let
A_n = {ω1, ω3} if n is odd, A_n = {ω2, ω4} if n is even.
Find P(lim sup A_n), P(lim inf A_n), lim sup P(A_n), and lim inf P(A_n), and compare.
Solution:
Using the definition, it is easily seen that lim sup A_n = Ω and lim inf A_n = ∅. Also
P(A_n) = 5/12 if n is odd, 7/12 if n is even.
Hence lim sup P(A_n) = inf_m sup_{n≥m} P(A_n) = 7/12 and lim inf P(A_n) = sup_m inf_{n≥m} P(A_n) = 5/12.
Therefore P(lim sup A_n) = 1 ≥ lim sup P(A_n) and P(lim inf A_n) = 0 ≤ lim inf P(A_n).
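The four probabilities can be checked directly on the four-point space; a small sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

# P on Ω = {ω1, ω2, ω3, ω4} as in Problem 4 (points labelled 1..4)
P = {1: F(1, 12), 2: F(1, 6), 3: F(1, 3), 4: F(5, 12)}
A_odd, A_even = {1, 3}, {2, 4}          # A_n for n odd / n even

p_odd = sum(P[w] for w in A_odd)        # P(A_n), n odd
p_even = sum(P[w] for w in A_even)      # P(A_n), n even

# Every ω occurs in infinitely many A_n, so lim sup A_n = Ω;
# no ω occurs in all but finitely many A_n, so lim inf A_n = ∅.
p_limsup = sum(P[w] for w in A_odd | A_even)
p_liminf = sum(P[w] for w in A_odd & A_even)
```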
Problem 5. Theorem 1.3.36. Let {X_n} be a uniformly integrable family of random variables. Then E[lim inf X_n] ≤ lim inf E[X_n].
Proof.
First note that for any A > 0,
E[X_n] = E[X_n I(X_n < −A)] + E[X_n I(X_n ≥ −A)].
By uniform integrability, for any ε > 0 we can take A so large that sup_n |E[X_n I(X_n < −A)]| < ε. By Fatou's lemma,
lim inf E[X_n I(X_n ≥ −A)] ≥ E[lim inf X_n I(X_n ≥ −A)].
But X_n I(X_n ≥ −A) ≥ X_n, therefore
lim inf E[X_n I(X_n ≥ −A)] ≥ E[lim inf X_n].
Hence lim inf E[X_n] ≥ E[lim inf X_n] − ε.
Since ε > 0 is arbitrary, the result follows.
Problem 7. Show that if X is a random variable, then σ{|X|} ⊆ σ{X}.
Solution:
It suffices to note that |X| = X I(X > 0) − X I(X ≤ 0). Clearly the right-hand side is σ{X}-measurable. Hence σ{|X|}, being the smallest σ-field with respect to which |X| is measurable, is included in σ{X}.
Problem 9. Show that the class of finite unions of intervals of the form (−∞, a], (b, c], and (d, ∞) is a field but not a σ-field.
Solution:
For instance, the open interval (a, b) = ⋃_{n=1}^∞ (a, b − 1/n] is not in this collection, although each interval (a, b − 1/n] is; hence the class is not closed under countable unions and is not a σ-field.
Problem 10. Show that a sequence of random variables {X_n} converges (a.s.) to X if and only if, for every ε > 0,
lim_{m→∞} P[|X_n − X| ≤ ε ∀ n ≥ m] = 1.
Solution:
"⇒" Suppose X_n → X a.s. Let N = [ω : lim X_n(ω) ≠ X(ω)] and Ω_0 = Ω − N. Fix ε > 0 and let A_m = ⋂_{n≥m} [|X_n − X| ≤ ε]. A_m is monotonically increasing, hence lim P(A_m) = P(⋃_m A_m). If ω_0 ∈ Ω_0, then there exists m(ω_0, ε) such that
|X_n(ω_0) − X(ω_0)| ≤ ε for all n ≥ m(ω_0, ε), (*)
which implies that Ω_0 ⊂ ⋃_m A_m and P(⋃_m A_m) = 1, so lim_m P(A_m) = 1.
"⇐" Let A_ε = ⋃_{m≥1} A_m (which depends on ε). If ω_0 ∈ A_ε then there exists m(ω_0, ε) such that (*) holds. Let
A = ⋂_{ε>0} A_ε = ⋂_{n≥1} A_{1/n}.
By assumption P(A_{1/n}) = 1 for all n; therefore P(A) = lim P(A_{1/n}) = 1. So if ω_0 ∈ A, then (*) holds for every ε = 1/n and hence X_n(ω_0) → X(ω_0). Let Ω_0 = A and N = Ω − A, which implies P(N) = 0.
Problem 11. Show that if {X_k} converges (a.s.) to X then {X_k} converges to X in probability, but the converse is false.
Solution:
Let A = [ω : lim X_n(ω) = X(ω)] = ⋂_{k≥1} ⋃_{N≥1} ⋂_{n≥N} [|X_n − X| < 1/k].
Since P(A) = 1, it follows that
P(⋃_{N≥1} ⋂_{n≥N} [|X_n − X| < ε]) = 1 for all ε > 0.
But B_N = ⋂_{n≥N} [|X_n − X| < ε] is monotonically increasing; therefore lim P(B_N) = 1. However B_N ⊂ [|X_N − X| < ε]. Hence lim P(|X_n − X| < ε) = 1.
In fact, the statement in Problem 10 is equivalent to: X_n → X a.s. if and only if
P[sup_{n≥m} |X_n − X| ≥ ε] → 0, m → ∞. (*)
On the other hand, by definition X_n → X in probability if
P[|X_n − X| ≥ ε] → 0, n → ∞. (**)
It is then clear that (*) is stronger than (**).
Problem 13. Let {X_n} be a sequence of random variables with
P[X_n = 2^n] = P[X_n = −2^n] = 1/2^n,
P[X_n = 0] = 1 − 1/2^{n−1}.
Show that X_n converges (a.s.) to 0 but E|X_n|^p does not converge to 0.
Solution:
To prove the a.s. convergence of X_n to 0 it suffices to show that ∑_{k=1}^∞ P(|X_k| ≥ ε) < ∞ for all ε > 0. But P(|X_k| ≥ ε) = 1/2^{k−1} (for ε small enough) and the result follows.
Now E|X_n|^p = 2^{np} · (1/2^{n−1}) = 2^{n(p−1)+1}, which does not converge to 0 for any p ≥ 1.
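Both claims are easy to illustrate numerically: the Borel–Cantelli sum stays bounded while the p-th moments are constant (p = 1) or blow up (p > 1). A sketch in exact arithmetic:

```python
from fractions import Fraction as F

# Problem 13: P(|X_k| > 0) = 2 * 2^{-k} = 1/2^{k-1}, so the Borel-Cantelli
# series converges, giving X_n -> 0 a.s.; yet E|X_n|^p = 2^{np} * 2^{1-n}.
bc_sum = sum(F(1, 2 ** (k - 1)) for k in range(1, 200))   # partial sum, < 2

def moment(n, p):
    # E|X_n|^p = 2^{np} / 2^{n-1} = 2^{n(p-1)+1} for integer p >= 1
    return F(2 ** (n * p), 2 ** (n - 1))

m1 = [moment(n, 1) for n in range(1, 6)]   # constant, equal to 2
m2 = [moment(n, 2) for n in range(1, 6)]   # 2^{n+1}: diverges
```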
Problem 15. Suppose Q is another probability measure on (Ω, F) such that P(A) = 0 implies Q(A) = 0 (Q ≪ P). Show that a.s.-P convergence implies a.s.-Q convergence.
Solution:
Suppose that X_n → X a.s.-P. Then there exists N such that for all ω ∉ N, X_n(ω) → X(ω), and P(N) = 0. But P(N) = 0 implies Q(N) = 0, and X_n(ω) → X(ω) for all ω ∉ N, so X_n → X a.s.-Q.
Problem 16. Prove that if F_1 and F_2 are independent sub-σ-fields and F_3 is coarser than F_1, then F_3 and F_2 are independent.
Solution:
Since F_3 is coarser than F_1, F_3 ⊂ F_1, and the result follows from the independence of F_1 and F_2.
Problem 17. Let Ω = {ω1, ω2, ω3, ω4, ω5, ω6}, P(ω_i) = p_i = 1/6, and consider the sub-σ-fields
F_1 = σ{{ω1, ω2}, {ω3, ω4}, {ω5, ω6}},
F_2 = σ{{ω1, ω2}, {ω3, ω4, ω5, ω6}}.
Show that F_1 and F_2 are not independent. What can be said about the sub-σ-fields
F_3 = σ{{ω1, ω2, ω3}, {ω4, ω5, ω6}}
and
F_5 = σ{{ω1, ω4}, {ω2, ω5}, {ω3, ω6}}?
Solution:
F_1 and F_2 are not independent since if we know, for instance, that {ω1, ω2} has occurred in F_1 then we also know that the event {ω1, ω2} in F_2 has occurred. F_3 and F_5, on the other hand, are independent: each atom of F_3 has probability 1/2, each atom of F_5 has probability 1/3, and the intersection of any atom of F_3 with any atom of F_5 is a singleton of probability 1/6 = (1/2)(1/3). This can be checked by direct calculation using the definition.
Problem 18. Let Ω = {(i, j) : i, j = 1, . . . , 6} and P((i, j)) = 1/36. Define the quantity
X(ω) = ∑_{k=0}^∞ k I_{{(i,j) : i+j=k}}(ω).
Is X a random variable? Find P_X(x) = P(X = x), calculate E[X], and describe σ(X), the σ-field generated by X.
Solution:
First note that X(ω) = ∑_{k=2}^{12} k I_{{(i,j) : i+j=k}}(ω) and, as a sum of indicator functions of events in the σ-field generated by Ω, it is a random variable taking values in the set {2, 3, . . . , 12}. For x ∈ {2, 3, . . . , 12}, P(X = x) = P(i + j = x), and E[X] = ∑_{k=2}^{12} k P(i + j = k) = 7. σ(X) is generated by the sets {(1, 1)}, {(1, 2), (2, 1)}, {(1, 3), (3, 1), (2, 2)}, {(1, 4), (4, 1), (2, 3), (3, 2)}, . . . , {(6, 6)}; that is, it is generated by 11 atoms.
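The distribution, the mean, and the atom count of σ(X) can be obtained by brute force over the 36 outcomes; a sketch:

```python
from fractions import Fraction as F
from collections import defaultdict

# Problem 18: X((i, j)) = i + j on the 36 equally likely outcomes of two dice.
dist = defaultdict(lambda: F(0))   # P(X = k)
atoms = defaultdict(list)          # atoms [X = k] of σ(X)
for i in range(1, 7):
    for j in range(1, 7):
        dist[i + j] += F(1, 36)
        atoms[i + j].append((i, j))

mean = sum(k * p for k, p in dist.items())   # E[X] = Σ k P(i + j = k)
n_atoms = len(atoms)                          # number of atoms of σ(X)
```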
Problem 19. For the function X defined in the previous exercise, describe the random variable P(A | X), where A = {(i, j) : i odd, j even}, and find its expected value E[P(A | X)].
Solution:
P(A | X)(ω) = ∑_{k=2}^{12} P(A | X = k) I(X = k)(ω)
= ∑_{k=2}^{12} (P(A ∩ [X = k]) / P(X = k)) I(X = k)(ω).
E[P(A | X)] = ∑_{k=2}^{12} (P(A ∩ [X = k]) / P(X = k)) P(X = k) = ∑_{k=2}^{12} P(A ∩ [X = k]) = P(A).
Problem 20. Let Ω be the unit interval (0, 1] and on it are given the following σ-fields:
F_1 = σ{(0, 1/2], (1/2, 3/4], (3/4, 1]},
F_2 = σ{(0, 1/4], (1/4, 1/2], (1/2, 3/4], (3/4, 1]},
F_3 = σ{(0, 1/8], (1/8, 2/8], . . . , (7/8, 1]}.
Consider the mapping
X(ω) = x_1 I_{(0,1/4]}(ω) + x_2 I_{(1/4,1/2]}(ω) + x_3 I_{(1/2,3/4]}(ω) + x_4 I_{(3/4,1]}(ω).
Find E[X | F_1], E[X | F_2], and E[X | F_3].
Solution:
E[X | F_1] = E[x_1 I_{(0,1/4]} + x_2 I_{(1/4,1/2]} + x_3 I_{(1/2,3/4]} + x_4 I_{(3/4,1]} | F_1]
= x_1 E[I_{(0,1/4]} | F_1] + x_2 E[I_{(1/4,1/2]} | F_1] + x_3 E[I_{(1/2,3/4]} | F_1] + x_4 E[I_{(3/4,1]} | F_1].
For instance, the first term in the sum gives
x_1 E[I_{(0,1/4]} | F_1] = x_1 ( P((0, 1/4] | (0, 1/2]) I_{(0,1/2]} + P((0, 1/4] | (1/2, 3/4]) I_{(1/2,3/4]} + P((0, 1/4] | (3/4, 1]) I_{(3/4,1]} )
= x_1 P((0, 1/4] | (0, 1/2]) I_{(0,1/2]} = (x_1/2) I_{(0,1/2]}.
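Conditional expectation with respect to a σ-field generated by a finite partition is just an average over each atom; a sketch of the computation of the first coefficient above, with intervals represented as pairs and Lebesgue measure computed exactly:

```python
from fractions import Fraction as F

# Atoms of F1 as half-open intervals (a, b] under Lebesgue measure on (0, 1].
F1 = [(F(0), F(1, 2)), (F(1, 2), F(3, 4)), (F(3, 4), F(1))]

def overlap(I, J):
    # Lebesgue measure of (a, b] ∩ (c, d]
    a, b = I
    c, d = J
    return max(F(0), min(b, d) - max(a, c))

def cond_exp_indicator(I, partition):
    # E[I_I | σ(partition)]: the value P(I | atom) on each atom
    return [overlap(I, atom) / (atom[1] - atom[0]) for atom in partition]

# Coefficient of I_{(0,1/4]} on each atom of F1: expect [1/2, 0, 0]
coeffs = cond_exp_indicator((F(0), F(1, 4)), F1)
```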
Problem 21. Let Ω be the unit interval, let ((0, 1], P) be the Lebesgue measure space, and consider the following sub-σ-fields:
F_1 = σ{(0, 1/2], (1/2, 3/4], (3/4, 1]},
F_2 = σ{(0, 1/4], (1/4, 1/2], (1/2, 3/4], (3/4, 1]}.
Consider the mapping
X(ω) = ω.
Find E[E[X | F_1] | F_2], E[E[X | F_2] | F_1] and compare.
Solution:
E[X | F_1] must be constant on the atoms of F_1, so that
E[X | F_1](ω) = x_1 I_{(0,1/2]} + x_2 I_{(1/2,3/4]} + x_3 I_{(3/4,1]}, where
x_1 = E[X I_{(0,1/2]}] / P((0, 1/2]), x_2 = E[X I_{(1/2,3/4]}] / P((1/2, 3/4]), x_3 = E[X I_{(3/4,1]}] / P((3/4, 1]),
or
x_1 = (∫_{(0,1/2]} x dx)/(1/2) = 1/4, x_2 = (∫_{(1/2,3/4]} x dx)/(1/4) = 5/8, x_3 = (∫_{(3/4,1]} x dx)/(1/4) = 7/8.
Hence
E[X | F_1](ω) = (1/4) I_{(0,1/2]}(ω) + (5/8) I_{(1/2,3/4]}(ω) + (7/8) I_{(3/4,1]}(ω),
which is an F_1-measurable random variable. Since F_1 ⊂ F_2, E[X | F_1] is also F_2-measurable and
E[E[X | F_1] | F_2] = E[X | F_1].
For the derivation of E[E[X | F_2] | F_1] (which equals E[X | F_1] by the tower property) see the steps in Problem 20.
Problem 22. Consider the probability measure P on the real line such that
P({0}) = p, P((0, 1)) = q, p + q = 1,
and the random variables defined on Ω = ℝ:
X_1(x) = 1 + x,
X_2(x) = 0 · I_{x≤0} + (1 + x) I_{0<x<1} + 2 I_{x≥1},
X_3(x) = ∑_{k=−∞}^{+∞} (1 + x + k) I_{k≤x<k+1}.
Is there any a.s.-P equality between X_1, X_2 and X_3?
Solution:
For 0 ≤ x < 1 (the term k = 0), X_3(x) = 1 + x, and hence X_1 = X_3 a.s.-P. However X_2(0) = 0 ≠ X_1(0) = 1, an event of probability p, and likewise X_2(0) = 0 ≠ X_3(0) = 1, also of probability p; so X_2 equals neither X_1 nor X_3 a.s.-P unless p = 0.
Problem 23. Let X_1, X_2 and X_3 be three independent, identically distributed (i.i.d.) random variables such that P(X_i = 1) = p = 1 − P(X_i = 0) = 1 − q. Find P(X_1 + X_2 + X_3 = s | X_1, X_2).
Solution:
See Example 1.4.3.
Problem 25. On Ω = [0, 1], with P Lebesgue measure, show that
X = x_1 I_{(0,1/2]} + x_2 I_{(1/2,1]} and Y = y_1 I_{(0,1/4]∪(3/4,1]} + y_2 I_{(1/4,3/4]}
are independent.
Solution:
σ(X) = σ{(0, 1/2], (1/2, 1]} and σ(Y) = σ{(0, 1/4] ∪ (3/4, 1], (1/4, 3/4]}.
Hence direct calculations give the result.
Chapter 2

Problem 2. Suppose that at time 0 you have $a and your opponent has $b. At times 1, 2, . . . you bet a dollar and the game ends when somebody has $0. Let S_n be a random walk on the integers . . . , −2, −1, 0, +1, +2, . . . with P(X = −1) = q, P(X = +1) = p. Let α = inf{n ≥ 1 : S_n = −a or S_n = +b}, i.e. the first time you or your opponent is ruined; then {S_{n∧α}}_{n=0}^∞ is the running total of your profit. Show that if p = q = 1/2, S_{n∧α} is a bounded martingale with mean 0 and that the probability of your ruin is b/(a + b).
Show that if the game is not fair (p ≠ q) then S_n is not a martingale, but Y_n = (q/p)^{S_n} is a martingale. Find the probability of your ruin and check that if a = b = 500, p = .499 and q = .501 then P(ruin) = .8806, and that it is almost 1 if p = 1/3.
Solution:
From Theorem 2.3.11 we know that {S_{n∧α}}_{n=0}^∞ is a martingale and |S_{n∧α}| is bounded by max(a, b).
Also S_α = −a I(S_α = −a) + b I(S_α = b). Taking expectations on both sides we find
0 = E[S_α] = −a P(S_α = −a) + b P(S_α = b) = −a P(S_α = −a) + b (1 − P(S_α = −a)),
which gives P(S_α = −a) = b/(a + b).
If p ≠ q then it is easy to check that S_n is not a martingale. Clearly E[Y_n] < ∞, and
E[Y_{n+1} | F_n] = E[(q/p)^{S_{n+1}} | F_n] = (q/p)^{S_n} E[(q/p)^{X_{n+1}}] = Y_n ((q/p) p + (q/p)^{−1} q) = Y_n (q + p) = Y_n,
which shows that Y_n is a martingale. By optional stopping,
1 = E[Y_α] = (q/p)^{−a} P(S_α = −a) + (q/p)^{b} (1 − P(S_α = −a)),
which gives
P(S_α = −a) = ((q/p)^b − 1) / ((q/p)^b − (q/p)^{−a}) = ((p/q)^b − 1) / ((p/q)^{a+b} − 1).
With a = b = 500, p = .499, q = .501 this gives P(ruin) ≈ .8806.
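The quoted numerical values are easy to check by evaluating the ruin probability in the form P(ruin) = ((p/q)^b − 1)/((p/q)^{a+b} − 1), valid for p ≠ q; a short sketch:

```python
# Gambler's ruin probability P(S_α = -a) for an unfair game (p ≠ q).
def ruin(a, b, p):
    q = 1.0 - p
    r = p / q
    return (r ** b - 1.0) / (r ** (a + b) - 1.0)

p_ruin = ruin(500, 500, 0.499)          # ≈ 0.88, as quoted in the problem
p_ruin_unfair = ruin(500, 500, 1 / 3)   # essentially 1
```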
Problem 3. Show that if {X_n} is an integrable, real-valued process with independent increments and mean 0, then it is a martingale with respect to the filtration it generates; and if, in addition, X_n² is integrable, X_n² − E(X_n²) is a martingale with respect to the same filtration.
Solution:
The first assertion is obvious. Let Z_n = X_n² − E(X_n²). Clearly the Z_n are integrable.
E[Z_{n+1} − Z_n | F_n] = E[X²_{n+1} − X²_n | F_n] − E[X²_{n+1}] + E[X²_n].
We must show that E[X²_{n+1} − X²_n | F_n] = E[X²_{n+1}] − E[X²_n]. Let ∆X_{n+1} = X_{n+1} − X_n. We see that X²_{n+1} − X²_n = X_{n+1}∆X_{n+1} + X_n∆X_{n+1}. Therefore
E[X²_{n+1} − X²_n | F_n] = E[X_{n+1}∆X_{n+1} | F_n] + X_n E[∆X_{n+1} | F_n] = E[X_{n+1}∆X_{n+1} | F_n],
using the assumptions on {X_n}. Now
X_{n+1}∆X_{n+1} = (X_{n+1} − X_n + X_n)∆X_{n+1} = (∆X_{n+1})² + X_n∆X_{n+1},
therefore
E[X_{n+1}∆X_{n+1} | F_n] = E[(∆X_{n+1})² + X_n∆X_{n+1} | F_n] = E[(∆X_{n+1})²],
again using the assumptions on {X_n}. Now
E[(∆X_{n+1})²] = E[X²_{n+1} + X²_n − 2X_{n+1}X_n] = E[X²_{n+1}] + E[X²_n] − 2E[X_n E[X_{n+1} | F_n]]
= E[X²_{n+1}] + E[X²_n] − 2E[X²_n] = E[X²_{n+1}] − E[X²_n],
using the fact that {X_n} is a martingale. This finishes the proof.
Problem 4. Let {X_n} be a sequence of i.i.d. random variables with E[X_n] = 0 and E[X²_n] = 1. Show that S²_n − n is an F_n = σ{X_1, . . . , X_n}-martingale, where S_n = ∑_{i=1}^n X_i.
Solution:
Let Y_n = S²_n − n. Clearly the Y_n are adapted and integrable.
E[Y_{n+1} | F_n] = E[S²_{n+1} − (n + 1) | F_n] = E[Y_n + X²_{n+1} − 1 + 2X_{n+1}∑_{i=1}^n X_i | F_n]
= Y_n + E[X²_{n+1} − 1 | F_n] + 2E[X_{n+1}∑_{i=1}^n X_i | F_n].
Using the independence of the X_n,
E[X²_{n+1} − 1 | F_n] = E[X²_{n+1} − 1] = 1 − 1 = 0
and
E[X_{n+1}∑_{i=1}^n X_i | F_n] = ∑_{i=1}^n X_i E[X_{n+1}] = 0.
Hence E[Y_{n+1} | F_n] = Y_n and the result follows.
Problem 5. Let {y_n} be a sequence of independent random variables with E[y_n] = 1. Show that the sequence X_n = ∏_{k=0}^n y_k is a martingale with respect to the filtration F_n = σ{y_0, . . . , y_n}.
Solution:
Since the y_n are integrable and independent, so are the X_n.
E[X_{n+1} | F_n] = ∏_{k=0}^n y_k · E[y_{n+1} | F_n] = ∏_{k=0}^n y_k = X_n.
Therefore X_n is a martingale.
Problem 7. Show that two (square integrable) martingales X and Y are orthogonal if and only if X_0Y_0 = 0 and the process {X_nY_n} is a martingale.
Solution:
"⇒" follows from Theorem 3.2.6.
"⇐" follows directly from the definition of ⟨X, Y⟩_n:
⟨X, Y⟩_n = E[X_0Y_0] + ∑_{i=1}^n E[(X_i − X_{i−1})(Y_i − Y_{i−1}) | F_{i−1}]
= ∑_{i=1}^n E[X_iY_i − X_iY_{i−1} − X_{i−1}Y_i + X_{i−1}Y_{i−1} | F_{i−1}]
= ∑_{i=1}^n [X_{i−1}Y_{i−1} − X_{i−1}Y_{i−1} − X_{i−1}Y_{i−1} + X_{i−1}Y_{i−1}] = 0.
Therefore the (square integrable) martingales X and Y are orthogonal.
Problem 9. Let B_t be a standard Brownian motion process (B_0 = 0 a.s., σ² = 1). Show that the conditional density of B_t for t_1 < t < t_2,
P(B_t ∈ dx | B_{t_1} = x_1, B_{t_2} = x_2),
is a normal density with mean and variance
µ = x_1 + ((x_2 − x_1)/(t_2 − t_1))(t − t_1), σ² = (t_2 − t)(t − t_1)/(t_2 − t_1).
Solution:
The known distribution and independence of the increments B_{t_1}, B_t − B_{t_1}, B_{t_2} − B_t lead to the joint density
f(x_1, x, x_2) = (1/√(2πt_1)) exp(−x_1²/(2t_1)) · (1/√(2π(t − t_1))) exp(−(x − x_1)²/(2(t − t_1))) · (1/√(2π(t_2 − t))) exp(−(x_2 − x)²/(2(t_2 − t))).
Dividing by the density
f(x_1, x_2) = (1/√(2πt_1)) exp(−x_1²/(2t_1)) · (1/√(2π(t_2 − t_1))) exp(−(x_2 − x_1)²/(2(t_2 − t_1)))
and after a bit of algebra yields the desired result.
Problem 12. Let N_t be a standard Poisson process and Z_1, Z_2, . . . a sequence of i.i.d. random variables such that P(Z_i = 1) = P(Z_i = −1) = 1/2. Show that the process
X_t = ∑_{i=1}^{N_t} Z_i
is a martingale with respect to the filtration F_t = σ{X_s, s ≤ t}.
Solution:
Using Wald's equation,
E[|X_t|] = E[|∑_{i=1}^{N_t} Z_i|] ≤ E[∑_{i=1}^{N_t} |Z_i|] = E[N_t] E[|Z_i|] < ∞.
Therefore X_t is integrable. Now for s ≤ t,
E[X_t | F_s] = E[∑_{i=1}^{N_s} Z_i + ∑_{i=N_s+1}^{N_t} Z_i | F_s] = X_s + E[∑_{i=N_s+1}^{N_t} Z_i | F_s].
However, conditioning further on N,
E[∑_{i=N_s+1}^{N_t} Z_i | F_s] = E[E[∑_{i=N_s+1}^{N_t} Z_i | F_s ∨ σ{N_u, u ≤ t}] | F_s] = 0,
since the Z_i are independent of N and have mean 0, and the result follows.
Problem 13. Show that the process {B²_t − t, F^B_t} is a martingale, where B is the standard Brownian motion process and F^B_t its natural filtration.
Solution:
Clearly B²_t − t is adapted and integrable. Let s ≤ t.
E[B²_t − t | F^B_s] = B²_s − s + E[B²_t − t − B²_s + s | F^B_s].
But E[B²_t − t − B²_s + s | F^B_s] = s − t + E[B²_t − B²_s | F^B_s] = s − t + t − s = 0.
Hence {B²_t − t, F^B_t} is a martingale.
Problem 14. Show that the process (N_t − λt)² − λt is a martingale, where N_t is a Poisson process with parameter λ.
Solution:
Clearly (N_t − λt)² − λt is adapted and integrable. Let L_t = (N_t − λt)² − λt and M_t = N_t − λt, and recall that M_t is a martingale. For s ≤ t,
E[L_t − L_s | F^N_s] = E[M²_t − M²_s | F^N_s] − λ(t − s).
Writing M²_t − M²_s = (M_t − M_s)² + 2M_s(M_t − M_s),
E[M²_t − M²_s | F^N_s] = E[(M_t − M_s)² | F^N_s] + 2M_s E[M_t − M_s | F^N_s] = E[(M_t − M_s)²],
because M is a martingale and M_t − M_s = N_t − N_s − λ(t − s) is independent of F^N_s by the independent-increment property of N. Hence
E[(M_t − M_s)²] = Var(N_t − N_s) = λ(t − s).
Putting all this together yields E[L_t − L_s | F^N_s] = 0 and the result follows.
Problem 15. Show that the process
I_t = ∫_0^t f(ω, s) dM_s
is a martingale. Here f(·) is an adapted, bounded process with continuous sample paths and M_t = N_t − λt is the Poisson martingale.
Solution:
First note that the integral is just a finite sum w.p.1 on any finite interval, and since f is bounded, I_t is integrable.
E[I_t | F^N_s] = I_s + E[∑_{s<u≤t} f(u−)∆M_u | F^N_s], and
E[∑_{s<u≤t} f(u−)∆M_u | F^N_s] = ∑_{s<u≤t} E[f(u−) E[∆M_u | F^N_{u−}] | F^N_s].
Since M is a martingale, E[∆M_u | F^N_{u−}] = 0, which finishes the proof.
Problem 16. Referring to Example 2.4.4, define the processes
N^{sr}_n = ∑_{k=1}^n I(η_{k−1} = s, η_k = r) = ∑_{k=1}^n ⟨X_{k−1}, e_s⟩⟨X_k, e_r⟩, (0.1)
and
O^r_n = ∑_{k=1}^n I(η_k = r) = ∑_{k=1}^n ⟨X_k, e_r⟩. (0.2)
Show that (0.1) and (0.2) are increasing processes and give their Doob decompositions.
Solution:
Being sums of indicator functions, N^{sr}_n and O^r_n are clearly increasing.
Recall the representation X_n = ΠX_{n−1} + M_n.
Substituting this form of the Markov chain X in (0.1) and (0.2) gives their Doob decompositions.
Problem 18. Let α be a stopping time with respect to the filtration {X_n, F_n}. Show that
F_α = {A ∈ F_∞ : A ∩ {ω : α(ω) ≤ n} ∈ F_n ∀ n ≥ 0}
is a σ-field and that α is F_α-measurable.
Solution:
The first assertion is easy. To show that α is F_α-measurable note that for all k, n,
{α ≤ k} ∩ {α ≤ n} = {α ≤ min(k, n)} ∈ F_{min(k,n)} ⊂ F_n,
from which the result follows.
Problem 19. Let {X_n} be a stochastic process adapted to the filtration {F_n} and B a Borel set. Show that
α_B = inf{n ≥ 0 : X_n ∈ B}
is a stopping time with respect to {F_n}.
Solution:
Since
[α_B ≤ k] = [X_0 ∈ B] ∪ [X_0 ∉ B, X_1 ∈ B] ∪ · · · ∪ [X_0 ∉ B, X_1 ∉ B, . . . , X_{k−1} ∉ B, X_k ∈ B] ∈ F_k,
α_B is a stopping time with respect to {F_n}.
Problem 20. Show that if α_1, α_2 are two stopping times such that α_1 ≤ α_2 (a.s.), then F_{α_1} ⊂ F_{α_2}.
Solution:
Let A ∈ F_{α_1}. Then
A ∩ [α_2 ≤ n] = (A ∩ [α_1 ≤ n]) ∩ [α_2 ≤ n] ∈ F_n for all n.
Problem 21. Show that if α is a stopping time and a is a positive constant, then α + a is a stopping time.
Solution:
If a > n then [α + a ≤ n] = [α ≤ n − a] = ∅ ∈ F_n.
If a ≤ n then [α + a ≤ n] = [α ≤ n − a] ∈ F_{n−a} ⊂ F_n; therefore α + a is a stopping time. Note that if a < 0, α + a need not be a stopping time.
Problem 22. Show that if {α_n} is a sequence of stopping times and the filtration F_t is right-continuous, then inf α_n, lim inf α_n and lim sup α_n are stopping times.
Solution:
Since
[inf_n α_n < t] = ⋃_n [α_n < t] ∈ F_t,
[lim inf α_n < t] = [sup_{n≥1} inf_{k≥n} α_k < t] = ⋃_{m≥1} ⋂_{n≥1} ⋃_{k≥n} [α_k < t − 1/m] ∈ F_t,
[lim sup α_n < t] = [inf_{n≥1} sup_{k≥n} α_k < t] = ⋃_{m≥1} ⋃_{n≥1} ⋂_{k≥n} [α_k < t − 1/m] ∈ F_t,
it follows, by the right-continuity of F_t, that inf α_n, lim inf α_n and lim sup α_n are stopping times.
Chapter 3

Problem 1. Let X_1, X_2, . . . be a sequence of i.i.d. N(0, 1) random variables and consider the process Z_0 = 0, Z_n = ∑_{k=1}^n X_k. Show that
[Z, Z]_n = ∑_{k=1}^n X²_k,
⟨Z, Z⟩_n = n,
E([Z, Z]_n) = E(∑_{k=1}^n X²_k) = n.
Solution:
Direct substitution in the formula [Z, Z]_n = ∑_{k=1}^n (Z_k − Z_{k−1})² gives the first result. Also
⟨Z, Z⟩_n = ∑_{k=1}^n E[(Z_k − Z_{k−1})² | F_{k−1}] = ∑_{k=1}^n E[X²_k | F_{k−1}] = ∑_{k=1}^n E[X²_k] = n,
using the independence and distribution assumptions on X_1, X_2, . . . .
Problem 2. Show that if X and Y are (square integrable) martingales, then XY − ⟨X, Y⟩ is a martingale.
Solution:
Clearly XY − ⟨X, Y⟩ is adapted and integrable.
E[X_nY_n − ⟨X, Y⟩_n | F_{n−1}] = −⟨X, Y⟩_{n−1} − E[(X_n − X_{n−1})(Y_n − Y_{n−1}) | F_{n−1}] + E[X_nY_n | F_{n−1}].
But
X_nY_n = (X_n − X_{n−1} + X_{n−1})(Y_n − Y_{n−1} + Y_{n−1})
= (X_n − X_{n−1})(Y_n − Y_{n−1}) + X_nY_{n−1} − X_{n−1}Y_{n−1} + Y_nX_{n−1} − X_{n−1}Y_{n−1} + X_{n−1}Y_{n−1}.
Hence
E[X_nY_n | F_{n−1}] = E[(X_n − X_{n−1})(Y_n − Y_{n−1}) | F_{n−1}] + X_{n−1}Y_{n−1},
which yields the result.
Problem 3. Establish the identity
[X, Y]_n = (1/2)([X + Y, X + Y]_n − [X, X]_n − [Y, Y]_n).
Solution:
[X + Y, X + Y]_n = (X_0 + Y_0)² + ∑_{i=1}^n (X_i + Y_i − X_{i−1} − Y_{i−1})²
= (X_0 + Y_0)² + ∑_{i=1}^n (X_i − X_{i−1})² + ∑_{i=1}^n (Y_i − Y_{i−1})² + 2∑_{i=1}^n (X_i − X_{i−1})(Y_i − Y_{i−1})
= [X, X]_n + [Y, Y]_n + 2[X, Y]_n,
from which the result follows.
Problem 6. Show that if {X_n} is a square integrable martingale then X² − ⟨X, X⟩ is a martingale.
Solution:
See the solution to Problem 2.
Problem 7. Find [B + N, B + N]_t and ⟨B + N, B + N⟩_t for a Brownian motion process B_t and a Poisson process N_t.
Solution:
From the identities
[B + N, B + N]_t = [B, B]_t + [N, N]_t + 2[B, N]_t = [B, B]_t + [N, N]_t
and
⟨B + N, B + N⟩_t = ⟨B, B⟩_t + ⟨N, N⟩_t + 2⟨B, N⟩_t = ⟨B, B⟩_t + ⟨N, N⟩_t.
Recall that [X, X]_t = ⟨X^c, X^c⟩_t + ∑_{s≤t}(∆X_s)², so
[N, N]_t = ∑_{0<s≤t}(∆N_s)² = N_t, ⟨N, N⟩_t = λt, and [B, B]_t = ⟨B, B⟩_t = t.
Therefore
[B + N, B + N]_t = t + N_t and ⟨B + N, B + N⟩_t = t + λt.
Problem 9. Let f be a deterministic square integrable function and B_t a Brownian motion. Show that the stochastic integral
∫_0^t f(s) dB_s
is a normally distributed random variable with distribution N(0, ∫_0^t f²(s) ds).
Solution:
Since f is square integrable there exists a sequence of simple functions f_n such that
∫_0^t |f(s) − f_n(s)|² ds → 0 (n → ∞), and
∫_0^t f_n(s) dB_s → ∫_0^t f(s) dB_s in L². (*)
Clearly each of the random variables
∫_0^t f_n(s) dB_s = ∑_k f_n(t_{k−1})(B_{t_k} − B_{t_{k−1}})
has distribution N(0, ∫_0^t f²_n(s) ds).
From (*), ∫_0^t f²_n(s) ds → ∫_0^t f²(s) ds, and the result follows, since an L²-limit of normal random variables is normal.
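The mean and variance of such a Wiener integral can be checked by Monte Carlo; a sketch with a fixed seed and the arbitrary choice f(s) = s on [0, 1] (so the target variance is ∫_0^1 s² ds = 1/3):

```python
import math
import random

# Monte Carlo check that I = ∫_0^1 f(s) dB_s has mean 0 and variance ∫_0^1 f²(s) ds,
# for the illustrative choice f(s) = s; left-endpoint Riemann-Ito sums.
random.seed(0)
n_steps, n_paths = 100, 5000
dt = 1.0 / n_steps
samples = []
for _ in range(n_paths):
    I = 0.0
    for k in range(n_steps):
        dB = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
        I += (k * dt) * dB                      # f(s) dB_s with f(s) = s
    samples.append(I)

mean = sum(samples) / n_paths
var = sum((x - mean) ** 2 for x in samples) / n_paths
```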
Problem 10. Show that if ∫_0^t E[f²(s)] ds < ∞, the Ito process
I_t = ∫_0^t f(s) dB_s
has orthogonal increments; i.e., for 0 ≤ r ≤ s ≤ t ≤ u,
E[(I_u − I_t)(I_s − I_r)] = 0.
Solution:
Using the fact that I is a martingale we obtain
E[(I_u − I_t)(I_s − I_r)] = E[(I_s − I_r) E[I_u − I_t | F_s]] = E[(I_s − I_r)(I_s − I_s)] = 0.
Problem 11. Show that
∫_0^t (B²_s − s) dB_s = B³_t/3 − tB_t.
Solution:
Using the Ito formula,
B³_t = 3∫_0^t B²_s dB_s + 3∫_0^t B_s ds,
and, using integration by parts, ∫_0^t B_s ds = tB_t − ∫_0^t s dB_s. Hence
B³_t/3 = ∫_0^t B²_s dB_s + tB_t − ∫_0^t s dB_s,
and the result follows.
Problem 16. If N is a standard Poisson process show that the stochastic integral
∫_0^t N_s d(N_s − s)
is not a martingale. However, show that
∫_0^t N_{s−} d(N_s − s)
is a martingale. (Note that at any jump time s, N_{s−} = N_s − 1.)
Solution:
∫_0^t N_s d(N_s − s) = ∫_0^t N_s dN_s − ∫_0^t N_s ds.
Since dN_s = ∆N_s is either 0 or 1, we have
∫_0^t N_s dN_s = N_{T_1} + N_{T_2} + · · · = 1 + 2 + · · · + N_t = N_t(N_t + 1)/2, (*)
where T_1, T_2, . . . are the jump times of N.
Let s ≤ t. Then
E[∫_0^t N_u d(N_u − u) − ∫_0^s N_u d(N_u − u) | F_s] = E[∫_s^t N_u d(N_u − u) | F_s] = E[∫_s^t N_u dN_u − ∫_s^t N_u du | F_s].
In view of (*),
E[∫_s^t N_u dN_u | F_s] = E[N_t(N_t + 1)/2 − N_s(N_s + 1)/2 | F_s]
= (1/2)E[(N_t − N_s)² + 2N_s(N_t − N_s) + N_t − N_s | F_s]
= (1/2)[E(N_t − N_s)² + 2N_s E(N_t − N_s) + E(N_t − N_s)]
= (1/2)[t − s + (t − s)² + 2N_s(t − s) + (t − s)]
= ((t − s)/2)[2N_s + t − s + 2].
Also
E[∫_s^t N_u du | F_s] = E[∫_s^t (N_u − N_s + N_s) du | F_s] = ∫_s^t E(N_u − N_s) du + N_s(t − s)
= ∫_s^t (u − s) du + N_s(t − s) = ((t − s)/2)[2N_s + t − s].
Since
E[∫_s^t N_u dN_u | F_s] − E[∫_s^t N_u du | F_s] = t − s ≠ 0,
it follows that ∫_0^t N_u d(N_u − u) is not a martingale.
However,
∫_0^t N_{s−} dN_s = 0 + 1 + · · · + (N_t − 1) = N_t(N_t − 1)/2
(or, using the product rule, ∫_0^t N_{s−} dN_s = (1/2)(N²_t − [N, N]_t)), and a computation as above gives
E[∫_s^t N_{u−} dN_u | F_s] = ((t − s)/2)[2N_s + t − s],
E[∫_s^t N_{u−} du | F_s] = ((t − s)/2)[2N_s + t − s].
Hence E[∫_s^t N_{u−} dN_u | F_s] − E[∫_s^t N_{u−} du | F_s] = 0, and it follows that ∫_0^t N_{u−} d(N_u − u) is a martingale.
Problem 17. Prove that
∫_0^t 2^{N_{s−}} dN_s = 2^{N_t} − 1.
Here N_t is a Poisson process.
Solution:
∫_0^t 2^{N_{s−}} dN_s = 2⁰ + 2¹ + · · · + 2^{N_t − 1} = 2^{N_t} − 1.
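Since all these path integrals against dN reduce to finite sums over the jump times, the identities are purely combinatorial; a quick check for an arbitrary jump count m playing the role of N_t:

```python
# With unit jumps: ∫ N_s dN_s sums the post-jump values 1, 2, ..., N_t;
# ∫ N_{s-} dN_s sums the pre-jump values 0, 1, ..., N_t - 1;
# ∫ 2^{N_{s-}} dN_s sums 2^0 + 2^1 + ... + 2^{N_t - 1}.
for m in range(0, 20):                      # m plays the role of N_t
    assert sum(range(1, m + 1)) == m * (m + 1) // 2
    assert sum(range(0, m)) == m * (m - 1) // 2
    assert sum(2 ** k for k in range(m)) == 2 ** m - 1
ok = True
```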
Problem 19. Show that the unique solution of
x_t = 1 + ∫_0^t x_{s−} dN_s
is given by x_t = 2^{N_t}. Here N_t is a Poisson process.
Solution:
The result is obtained by using the stochastic exponential formula
x_t = e^{N_t − (1/2)⟨N^c, N^c⟩_t} ∏_{s≤t} (1 + ∆N_s) e^{−∆N_s}
= e^{N_t} ∏_{s≤t} e^{−∆N_s} ∏_{s≤t} (1 + ∆N_s)
= e^{N_t − N_t} 2^{N_t} = 2^{N_t}.
Problem 21. Show that the linear stochastic differential equation
dX_t = F(t)X_t dt + G(t) dt + H(t) dB_t, (0.3)
with X_0 = ξ, has the solution
X_t = Φ(t)(ξ + ∫_0^t Φ^{−1}(s)G(s) ds + ∫_0^t Φ^{−1}(s)H(s) dB_s). (0.4)
Here F(t) is an n × n bounded measurable matrix, H(t) is an n × m bounded measurable matrix, B_t is an m-dimensional Brownian motion, and G(t) is an ℝⁿ-valued bounded measurable function. Φ(t) is the fundamental matrix solution of the deterministic equation dX_t = F(t)X_t dt.
Solution:
Differentiating (0.4) shows that X is a solution of (0.3).
Problem 24. Show that the linear stochastic differential equation
dX_t = −αX_t dt + σ dB_t,
with E|X_0|² = E|ξ|² < ∞, has the solution
X_t = e^{−αt}ξ + σ∫_0^t e^{−α(t−s)} dB_s,
and
µ_t = E[X_t] = e^{−αt}E[ξ],
P(t) = Var(X_t) = e^{−2αt}Var(ξ) + σ²(1 − e^{−2αt})/(2α).
Solution:
Direct calculation yields
d(e^{−αt}ξ + σ∫_0^t e^{−α(t−s)} dB_s)
= −αe^{−αt}ξ dt − ασ(∫_0^t e^{−α(t−s)} dB_s) dt + σe^{−αt}e^{αt} dB_t
= −αX_t dt + σ dB_t.
Clearly E[X_t] = e^{−αt}E[ξ], because E[∫_0^t e^{−α(t−s)} dB_s] = 0.
We also have
E[(∫_0^t e^{−α(t−s)} dB_s)²] = ∫_0^t e^{−2α(t−s)} ds = (1 − e^{−2αt})/(2α),
from which the variance of X follows.
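The mean and variance formulas can be checked by simulating the exact Gaussian transition of this Ornstein–Uhlenbeck solution; a sketch with a fixed seed and arbitrary parameters α, σ and a deterministic ξ (so Var(ξ) = 0):

```python
import math
import random

# Simulate X_t via the exact OU transition X_{t+Δ} = e^{-αΔ} X_t + noise,
# then compare the sample mean/variance at t = 1 with µ_t and P(t).
random.seed(1)
alpha, sigma, xi, t = 1.5, 0.8, 2.0, 1.0    # illustrative parameter choices
n_paths, n_steps = 20000, 50
dt = t / n_steps
phi = math.exp(-alpha * dt)
step_sd = sigma * math.sqrt((1.0 - phi ** 2) / (2.0 * alpha))

finals = []
for _ in range(n_paths):
    x = xi                                  # deterministic X_0 = ξ
    for _ in range(n_steps):
        x = phi * x + step_sd * random.gauss(0.0, 1.0)
    finals.append(x)

mean = sum(finals) / n_paths
var = sum((x - mean) ** 2 for x in finals) / n_paths
mu_t = math.exp(-alpha * t) * xi                                   # e^{-αt} E[ξ]
P_t = sigma ** 2 * (1.0 - math.exp(-2.0 * alpha * t)) / (2.0 * alpha)
```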
Problem 26. Suppose for θ ∈ ℝ
X^θ_t = e^{θM_t − (1/2)θ²A_t}
is a martingale, and suppose there is an open neighborhood I of θ = 0 such that for all θ ∈ I and all t (P-a.s.),
(i) |X^θ_t| ≤ a,
(ii) |dX^θ_t/dθ| ≤ b,
(iii) |d²X^θ_t/dθ²| ≤ c.
Here a, b, c are nonrandom constants which depend on I, but not on t. Show that then the processes {M_t} and {M²_t − A_t} are martingales.
Solution:
Take s ≤ t and A ∈ F_s. Then, using (i) and (ii) to differentiate under the integral sign and the martingale property of X^θ,
∫_A E[(dX^θ_t/dθ)|_{θ=0} | F_s] dP = ∫_A (dX^θ_t/dθ)|_{θ=0} dP = (d/dθ)(∫_A X^θ_t dP)|_{θ=0}
= (d/dθ)(∫_A X^θ_s dP)|_{θ=0} = ∫_A (dX^θ_s/dθ)|_{θ=0} dP.
That is,
E[(dX^θ_t/dθ)|_{θ=0} | F_s] = (dX^θ_s/dθ)|_{θ=0}   P-a.s.,
and since (dX^θ_t/dθ)|_{θ=0} = M_t, this gives E[M_t | F_s] = M_s a.s.
The second assertion follows similarly using (iii), because
(d²X^θ_t/dθ²)|_{θ=0} = M²_t − A_t.
Chapter 4
Problem 1. Consider the probability space ([0, 1], B([0, 1]), λ), where λ is Lebesgue measure on the Borel sigma-field B([0, 1]). Let P be another probability measure carried by the singleton {0}, i.e. P({0}) = 1. Let
π_1 = {[0, 1/2], (1/2, 1]},
π_2 = {[0, 1/4], (1/4, 3/4], (3/4, 1]}, . . . ,
π_n = {[0, 1/2ⁿ], . . . , (1 − 1/2ⁿ, 1]}.
Define the random variable
Λ_n(ω) = P([0, 2^{−n}])/λ([0, 2^{−n}]) = 2ⁿ on [0, 2^{−n}], and Λ_n(ω) = 0 elsewhere in [0, 1].
Show that the sequence {Λ_n} is a positive martingale (with respect to the filtration generated by the partitions π_n) such that E_λ[Λ_n] = 1 for all n, but lim Λ_n = 0 λ-almost surely.
Solution:
Clearly Λ_n is adapted, λ-integrable and positive for all n. Moreover,
E[Λ_n] = 2ⁿ λ([0, 2^{−n}]) = 2ⁿ 2^{−n} = 1,
E[Λ_{n+1} | F_n] = 2^{n+1} E[I([0, 2^{−n−1}]) | F_n] = 2^{n+1} (λ([0, 2^{−n−1}] ∩ [0, 2^{−n}])/λ([0, 2^{−n}])) I([0, 2^{−n}]) = 2ⁿ I([0, 2^{−n}]).
Therefore Λ is a martingale.
To prove the λ-a.s. convergence of Λ_n to 0 it suffices to show that ∑_{n=1}^∞ λ(Λ_n ≥ ε) < ∞ for all ε > 0. But λ(Λ_n ≥ ε) ≤ λ([0, 2^{−n}]) = 1/2ⁿ and the result follows.
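The contrast between E_λ[Λ_n] = 1 for every n and Λ_n → 0 λ-a.s. is easy to see numerically, since Λ_n(ω) vanishes as soon as 2^{−n} < ω; a pointwise sketch:

```python
from fractions import Fraction as F

def Lam(n, omega):
    # Λ_n(ω) = 2^n on [0, 2^{-n}], 0 elsewhere in (2^{-n}, 1]
    return F(2 ** n) if omega <= F(1, 2 ** n) else F(0)

# Expectation under Lebesgue measure: 2^n * λ([0, 2^{-n}]) = 1 for every n.
expectations = [F(2 ** n) * F(1, 2 ** n) for n in range(1, 30)]

# Pointwise limit: for the fixed point ω = 3/10 > 0, Λ_n(ω) = 0 once 2^{-n} < ω.
omega = F(3, 10)
tail = [Lam(n, omega) for n in range(2, 30)]
```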
Problem 2. Prove Lemma 4.2.14.
Solution:
Using Bayes' Theorem we have
Ē[Y_{k+1} | G_k] = E[Λ_{k+1}Y_{k+1} | G_k] / E[Λ_{k+1} | G_k]
= Λ_k E[λ_{k+1}Y_{k+1} | G_k] / (Λ_k E[λ_{k+1} | G_k])
= E[λ_{k+1}Y_{k+1} | G_k]
= E[∑_{i=1}^M M c^i_{k+1} ⟨Y_{k+1}, f_i⟩ f_i | G_k]
= ∑_{i=1}^M M c^i_{k+1} f_i E[⟨Y_{k+1}, f_i⟩ | G_k]
= ∑_{i=1}^M M c^i_{k+1} f_i (1/M) = ∑_{i=1}^M c^i_{k+1} f_i = c_{k+1} = CX_k,
which finishes the proof.
Problem 3. Consider the order-2 Markov chain {X_n}, 1 ≤ n ≤ N, discussed in Example 2.4.6. Define a new probability measure P̄ on (Ω, F_N) such that P̄(X_n = e_k | X_{n−2} = e_i, X_{n−1} = e_j) = p̄_{k,ij}.
Solution:
Let Λ_0 = 1. Define
λ_n(ω) = ∑_{k,i,j} (p̄_{k,ij}/p_{k,ij}) I(X_n = e_k, X_{n−1} = e_j, X_{n−2} = e_i)(ω), Λ_N = ∏_{n=1}^N λ_n.
It is easy to show that {Λ_n} is an {F_n, P}-martingale. We have to show that under the measure P̄ defined by Λ_N the order-2 Markov chain X has transition probabilities p̄_{k,ij}.
Using Bayes' Theorem, write
P̄[X_n = e_ℓ | F_{n−1}] = Ē[I(X_n = e_ℓ) | F_{n−1}] = E[I(X_n = e_ℓ)Λ_n | F_{n−1}] / E[Λ_n | F_{n−1}]
= Λ_{n−1}E[I(X_n = e_ℓ)λ_n | F_{n−1}] / (Λ_{n−1}E[λ_n | F_{n−1}])
= E[I(X_n = e_ℓ)λ_n | F_{n−1}],
since E[λ_n | F_{n−1}] = 1. Therefore
P̄[X_n = e_ℓ | F_{n−1}] = ∑_{ij} (p̄_{ℓ,ij}/p_{ℓ,ij}) I(X_{n−1} = e_j, X_{n−2} = e_i) P[X_n = e_ℓ | X_{n−2} = e_i, X_{n−1} = e_j]
= ∑_{ij} (p̄_{ℓ,ij}/p_{ℓ,ij}) I(X_{n−1} = e_j, X_{n−2} = e_i) p_{ℓ,ij}
= p̄_{ℓ, X_{n−2}, X_{n−1}},
which gives the result.
Problem 6. Show that the exponential martingale {Λ_t} given by (4.7.5) is the unique solution of
Λ_t = 1 + ∑_{i≠j} ∫_0^t Λ_{s−}(λ^{ij}_s)^{−1}(λ̄^{ij}_s − λ^{ij}_s)(dJ^{ij}_s − λ^{ij}_s ds).
Solution:
Equation (4.7.5) is
Λ_t = ∏_{i≠j} exp(∫_0^t log(λ̄^{ij}_r/λ^{ij}_r) dJ^{ij}_r − ∫_0^t (λ̄^{ij}_r − λ^{ij}_r) dr).
Let us simplify the jump part of Λ_t:
∏_{i≠j} exp(∫_0^t log(λ̄^{ij}_r/λ^{ij}_r) dJ^{ij}_r) = ∏_{i≠j} exp(∑_{0<s≤t} log(λ̄^{ij}_s/λ^{ij}_s)∆J^{ij}_s) = ∏_{i≠j} ∏_{0<s≤t} (λ̄^{ij}_s/λ^{ij}_s)^{∆J^{ij}_s}.
Write Λ_t = e^{−∑_{i≠j}∫_0^t(λ̄^{ij}_s − λ^{ij}_s) ds} Y_t, where
Y_t = ∏_{i≠j} ∏_{0<s≤t} (λ̄^{ij}_s/λ^{ij}_s)^{∆J^{ij}_s}, and Λ_t = f(t, Y_t).
Using the Ito rule,
f(t, Y_t) = 1 + ∫_0^t (∂f(s, Y_{s−})/∂s) ds + ∫_0^t (∂f(s, Y_{s−})/∂Y) dY_s
+ ∑_{0<s≤t} [f(s, Y_s) − f(s, Y_{s−}) − (∂f(s, Y_{s−})/∂Y)∆Y_s]. (0.5)
The first integral in (0.5) is simply −∑_{i≠j} ∫_0^t Λ_s(λ̄^{ij}_s − λ^{ij}_s) ds.
Because Y_t is a purely discontinuous process of bounded variation, the second integral and the jump-correction sum in (0.5) together reduce to ∑_{0<s≤t} [f(s, Y_s) − f(s, Y_{s−})]. In expression (0.5) we have
f(s, Y_s) − f(s, Y_{s−}) = Λ_s − Λ_{s−}
= e^{−∑_{i≠j}∫_0^s(λ̄^{ij}_r − λ^{ij}_r) dr} [∏_{i≠j} ∏_{r≤s} (λ̄^{ij}_r/λ^{ij}_r)^{∆J^{ij}_r} − ∏_{i≠j} ∏_{r<s} (λ̄^{ij}_r/λ^{ij}_r)^{∆J^{ij}_r}]
= Λ_{s−}[∏_{i≠j} (λ̄^{ij}_s/λ^{ij}_s)^{∆J^{ij}_s} − 1]
= ∑_{i≠j} Λ_{s−}(λ̄^{ij}_s/λ^{ij}_s − 1)∆J^{ij}_s,
since at most one of the J^{ij} jumps at any time s. Noting also that
∑_{i≠j} ∫_0^t Λ_{s−}(λ^{ij}_s)^{−1}(λ̄^{ij}_s − λ^{ij}_s)λ^{ij}_s ds = ∑_{i≠j} ∫_0^t Λ_s(λ̄^{ij}_s − λ^{ij}_s) ds,
putting all these results together gives the stated equation. For the uniqueness see the reference in Example 3.6.11.
Problem 7. Prove Lemma 4.7.3.
8/4/2019 Measure, Problems and Solutions
http://slidepdf.com/reader/full/measure-problems-and-solutions 23/31
23
Solution:
By the characterization theorem of Poisson processes (see Theorem 3.6.14) we must show that $M^{ij}_t = J^{ij}_t - \hat\lambda^{ij} t$ and $(M^{ij}_t)^2 - \hat\lambda^{ij} t$ are $(\bar P, \mathcal{F}_t)$-martingales.
By Bayes' Theorem, for $t \ge s \ge 0$,
$$
\bar E[M^{ij}_t \mid \mathcal{F}_s] = \frac{E[\Lambda_t M^{ij}_t \mid \mathcal{F}_s]}{E[\Lambda_t \mid \mathcal{F}_s]}
= \frac{E[\Lambda_t M^{ij}_t \mid \mathcal{F}_s]}{\Lambda_s}.
$$
Therefore, $M^{ij}_t$ is a $(\bar P, \mathcal{F}_t)$-martingale if and only if $\Lambda_t M^{ij}_t$ is a $(P, \mathcal{F}_t)$-martingale. Now
$$
\Lambda_t M^{ij}_t = \int_0^t \Lambda_{s-}\,dM^{ij}_s + \int_0^t M^{ij}_{s-}\,d\Lambda_s + [\Lambda, M^{ij}]_t.
$$
Recall
$$
\begin{aligned}
[\Lambda, M^{ij}]_t &= \sum_{0<s\le t}\Delta\Lambda_s\,\Delta M^{ij}_s
= \sum_{0<s\le t}\Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})\,\Delta J^{ij}_s\,\Delta J^{ij}_s\\
&= \int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})\,d[J^{ij}, J^{ij}]_s
= \int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})\,dJ^{ij}_s.
\end{aligned}
$$
Therefore
$$
\Lambda_t M^{ij}_t = \int_0^t \Lambda_{s-}(dJ^{ij}_s - \hat\lambda^{ij}\,ds)
+ \int_0^t M^{ij}_{s-}\,d\Lambda_s
+ \int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})\,dJ^{ij}_s. \tag{0.6}
$$
The second integral on the right of (0.6) is a $(P, \mathcal{F}_t)$-martingale. (Recall that $J^{ij}_t - \lambda^{ij} t$ is a $(P, \mathcal{F}_t)$-martingale.) The other two integrals are written as
$$
\begin{aligned}
\int_0^t \Lambda_{s-}(dJ^{ij}_s - \hat\lambda^{ij}\,ds)
&= \int_0^t \Lambda_{s-}(dJ^{ij}_s - \lambda^{ij}\,ds + \lambda^{ij}\,ds - \hat\lambda^{ij}\,ds)\\
&= \int_0^t \Lambda_{s-}(dJ^{ij}_s - \lambda^{ij}\,ds) + \int_0^t \Lambda_{s-}\lambda^{ij}\,ds - \int_0^t \Lambda_{s-}\hat\lambda^{ij}\,ds,
\end{aligned}\tag{0.7}
$$
and
$$
\begin{aligned}
\int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})\,dJ^{ij}_s
&= \int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})(dJ^{ij}_s - \lambda^{ij}\,ds + \lambda^{ij}\,ds)\\
&= \int_0^t \Lambda_{s-}(\lambda^{ij})^{-1}(\hat\lambda^{ij} - \lambda^{ij})(dJ^{ij}_s - \lambda^{ij}\,ds)
+ \int_0^t \Lambda_{s-}(\hat\lambda^{ij} - \lambda^{ij})\,ds.
\end{aligned}\tag{0.8}
$$
Substituting (0.7) and (0.8) in (0.6) yields the desired result: the $ds$ terms cancel and what remains is a sum of $(P, \mathcal{F}_t)$-martingales. It remains to show $(M^{ij}_t)^2 - \hat\lambda^{ij} t$ is also a $(\bar P, \mathcal{F}_t)$-martingale. Now
$$
(M^{ij})^2_t = 2\int_0^t M^{ij}_{s-}\,dM^{ij}_s + [M^{ij}, M^{ij}]_t
= 2\int_0^t M^{ij}_{s-}\,dM^{ij}_s + J^{ij}_t. \tag{0.9}
$$
Subtracting $\hat\lambda^{ij} t$ from both sides of (0.9) makes the last term on the right of (0.9) a $(\bar P, \mathcal{F}_t)$-martingale, and since the $dM^{ij}$ integral is a $(\bar P, \mathcal{F}_t)$-martingale the result follows.
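The martingale property $\bar E[M_T] = 0$ can also be illustrated by Monte Carlo for a single pair $(i, j)$ with constant rates: under $P$ the terminal count $N_T$ is Poisson with rate $\lambda$, and weighting by $\Lambda_T$ should recover mean one for the weights and $\bar E[N_T - \hat\lambda T] \approx 0$. A rough sketch with hypothetical rates; the tolerances are loose statistical bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, lam_hat, T, n_mc = 2.0, 3.0, 1.0, 200_000   # hypothetical rates

N = rng.poisson(lam * T, size=n_mc)                      # terminal counts under P
w = (lam_hat / lam) ** N * np.exp(-(lam_hat - lam) * T)  # Lambda_T on each path

mean_w = w.mean()                        # Lambda is mean one under P
mean_M = (w * (N - lam_hat * T)).mean()  # Monte Carlo estimate of E-bar[M_T]

assert abs(mean_w - 1.0) < 0.05
assert abs(mean_M) < 0.05
```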
Chapter 5
Problem 1. Assume that the state and observation processes of a system are given by the vector dynamics (5.4.1) and (5.4.2). For $m, k \in \mathbb{N}$, $m < k$, write the unnormalized conditional density such that
$$
\bar E[\Lambda_k I(X_m \in dx) \mid \mathcal{Y}_k] = \gamma_{m,k}(x)\,dx.
$$
Using the change of measure techniques described in Section 5.3, show that
$$
\gamma_{m,k}(x) = \alpha_m(x)\beta_{m,k}(x),
$$
where $\alpha_m(x)$ is given recursively by (5.4.6). Show that
$$
\begin{aligned}
\beta_{m,k}(x) &= \bar E[\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k]\\
&= \frac{1}{\phi(Y_{m+1})}\int_{\mathbb{R}^m}\phi_{m+1}(Y_{m+1} - C_{m+1}z)\,\psi_{m+1}(z - A_{m+1}x)\,\beta_{m+1,k}(z)\,dz.
\end{aligned}\tag{0.10}
$$
Solution:
For an arbitrary integrable function $f: \mathbb{R}^m \to \mathbb{R}$ write
$$
\bar E[\Lambda_k f(X_m) \mid \mathcal{Y}_k] = \int_{\mathbb{R}^m} f(x)\gamma_{m,k}(x)\,dx.
$$
However,
$$
\bar E[\Lambda_k f(X_m) \mid \mathcal{Y}_k]
= \bar E\big[\Lambda_{1,m} f(X_m)\,\bar E[\Lambda_{m+1,k} \mid X_0, \dots, X_m, \mathcal{Y}_k] \mid \mathcal{Y}_k\big].
$$
Let $\bar E[\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k] = \beta_{m,k}(x)$.
Consequently,
$$
\bar E[\Lambda_k f(X_m) \mid \mathcal{Y}_k] = \bar E[\Lambda_{1,m} f(X_m)\beta_{m,k}(X_m) \mid \mathcal{Y}_k].
$$
Therefore
$$
\int_{\mathbb{R}^m} f(x)\gamma_{m,k}(x)\,dx = \int_{\mathbb{R}^m} f(x)\alpha_m(x)\beta_{m,k}(x)\,dx,
$$
from which the result follows. Now
$$
\begin{aligned}
\beta_{m,k}(x) &= \bar E[\Lambda_{m+1,k} \mid X_m = x, \mathcal{Y}_k]
= \bar E[\lambda_{m+1}\Lambda_{m+2,k} \mid X_m = x, \mathcal{Y}_k]\\
&= \bar E\big[\lambda_{m+1}\,\bar E[\Lambda_{m+2,k} \mid X_m = x, X_{m+1}, \mathcal{Y}_k] \mid X_m = x, \mathcal{Y}_k\big]\\
&= \bar E\Big[\frac{\phi(D^{-1}_{m+1}(Y_{m+1} - C_{m+1}X_{m+1}))}{|D_{m+1}|\,\phi(Y_{m+1})}\,
\frac{\psi(B^{-1}_{m+1}(X_{m+1} - A_{m+1}x))}{|B_{m+1}|\,\psi(X_{m+1})}\,
\beta_{m+1,k}(X_{m+1}) \,\Big|\, X_m = x, \mathcal{Y}_k\Big].
\end{aligned}
$$
Now recall that under $\bar P$, $X_{m+1}$ is normally distributed with density $\psi$ and independent of the $Y$ process. Therefore
$$
\beta_{m,k}(x) = \int_{\mathbb{R}^m}\frac{\phi(D^{-1}_{m+1}(Y_{m+1} - C_{m+1}z))}{|D_{m+1}|\,\phi(Y_{m+1})}\,
\frac{\psi(B^{-1}_{m+1}(z - A_{m+1}x))}{|B_{m+1}|\,\psi(z)}\,\beta_{m+1,k}(z)\,\psi(z)\,dz,
$$
which is the result (the factor $\psi(z)$ cancels).
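The factorization $\gamma_{m,k} = \alpha_m\beta_{m,k}$ is the continuous-state analogue of the forward–backward decomposition for a finite-state hidden Markov chain. A minimal discrete sketch (the two-state transition, emission, and initial distributions below are hypothetical) checks the product of forward and backward quantities against brute-force enumeration of all state paths:

```python
import numpy as np
from itertools import product

# hypothetical 2-state HMM: T[i, j] = P(X_{m+1}=j | X_m=i), Em[i, y] = P(Y=y | X=i)
T = np.array([[0.9, 0.1], [0.2, 0.8]])
Em = np.array([[0.8, 0.2], [0.3, 0.7]])
pi = np.array([0.5, 0.5])
ys = [0, 1, 1, 0]
k = len(ys)

# forward pass: alpha[m][i] = P(Y_0..Y_m, X_m=i), analogue of alpha_m(x)
alpha = [pi * Em[:, ys[0]]]
for y in ys[1:]:
    alpha.append((alpha[-1] @ T) * Em[:, y])

# backward pass: beta[m][i] = P(Y_{m+1}..Y_{k-1} | X_m=i), analogue of beta_{m,k}(x)
beta = [np.ones(2)]
for y in reversed(ys[1:]):
    beta.insert(0, T @ (Em[:, y] * beta[0]))

# gamma_m = alpha_m * beta_m is the unnormalized smoother; compare with brute force
m = 1
gamma = alpha[m] * beta[m]
brute = np.zeros(2)
for path in product(range(2), repeat=k):
    p = pi[path[0]] * Em[path[0], ys[0]]
    for a, b, y in zip(path, path[1:], ys[1:]):
        p *= T[a, b] * Em[b, y]
    brute[path[m]] += p
assert np.allclose(gamma, brute)
```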
Problem 3. Assume that the state and observation processes are given by the vector dynamics
$$
X_{k+1} = A_{k+1}X_k + V_{k+1} + W_{k+1} \in \mathbb{R}^m,\qquad
Y_k = C_k X_k + W_k \in \mathbb{R}^m.
$$
$A_k$ and $C_k$ are matrices of appropriate dimensions; $V_k$ and $W_k$ are normally distributed with means $0$ and respective covariance matrices $Q_k$ and $R_k$, assumed nonsingular. Using measure change techniques, derive recursions for the conditional mean and covariance matrix of the state $X$ given the observations $Y$.
Solution:
Notice that the noise $W$ appears in both the signal and the observation processes. To circumvent this, write
$$
X_{k+1} = (A_{k+1} - C_{k+1})X_k + V_{k+1} + Y_{k+1}, \tag{0.11}
$$
$$
Y_k = C_k X_k + W_k. \tag{0.12}
$$
Then we start with a reference measure $\bar P$ under which $\{X_k\}$ and $\{Y_k\}$ are two independent sequences of random variables, normally distributed with means $0$ and covariance matrices the identity $I_{m\times m}$. With $\psi$ and $\phi$ denoting standard normal densities, define the positive, mean-one martingale sequence
$$
\Lambda_k = \prod_{m=0}^{k}\frac{\phi(R^{-1}_{m+1}(Y_{m+1} - C_{m+1}X_{m+1}))}{|R_{m+1}|\,\phi(Y_{m+1})}\,
\frac{\psi(Q^{-1}_{m+1}(X_{m+1} - A_{m+1}X_m - Y_{m+1}))}{|Q_{m+1}|\,\psi(X_{m+1})}.
$$
Now define the 'real world' measure $P$ by setting the restriction of $\frac{dP}{d\bar P}$ to $\mathcal{G}_k = \sigma\{X_0, \dots, X_k, Y_0, \dots, Y_k\}$ equal to $\Lambda_k$.
It can be shown that under the measure P the dynamics (0.11)and (0.12) hold.
The remaining steps are standard and are left to the reader.

Problem 4. Let $m = n = 1$ in (5.8.1) and (5.8.2). The notation in Section 5.8 and Section 5.9 is used here.
Let $\Gamma_t$ be the process defined as
$$
\Gamma_t = \int_0^t x_s^p\,ds,\qquad p = 1, 2, \dots.
$$
Write
$$
\bar E[\Lambda_t \Gamma_t I(x_t \in dx) \mid \mathcal{Y}_t] = \mu_t(x)\,dx.
$$
Show that at time $t$ the density $\mu_t(x)$ is completely described by the $p + 3$ statistics $s_t(0), s_t(1), \dots, s_t(p)$, $\Sigma_t$, and $m_t$ as follows:
$$
\mu_t(x) = \Big(\sum_{i=0}^{p} s_t(i)\,x^i\Big)\,q(x, t), \tag{0.13}
$$
where $s_0(i) = 0$, $i = 0, 1, \dots, p$, and
$$
\begin{aligned}
\frac{ds_t(p)}{dt} &= -p(A_t + \Sigma_t^{-1}B_t^2)\,s_t(p) + 1,\\
\frac{ds_t(p-1)}{dt} &= -(p-1)(A_t + \Sigma_t^{-1}B_t^2)\,s_t(p-1) + p\,s_t(p)\,\Sigma_t^{-1}B_t^2\,m_t,\\
\frac{ds_t(i)}{dt} &= -i(A_t + \Sigma_t^{-1}B_t^2)\,s_t(i) + \tfrac{1}{2}(i+1)(i+2)\,B_t^2\,s_t(i+2)\\
&\qquad + (i+1)\,s_t(i+1)\,\Sigma_t^{-1}B_t^2\,m_t,\qquad i = 1, \dots, p-2,\\
\frac{ds_t(0)}{dt} &= B_t^2\,s_t(2) + \Sigma_t^{-1}B_t^2\,s_t(1)\,m_t.
\end{aligned}\tag{0.14}
$$
Solution:
Applying the Itô rule to $\Lambda_t\Gamma_t g(x_t)$,
$$
\begin{aligned}
\Lambda_t\Gamma_t g(x_t) &= \int_0^t \Lambda_s\Gamma_s g'(x_s)A_s x_s\,ds + \int_0^t \Lambda_s\Gamma_s g'(x_s)B_s\,dw_s\\
&\quad + \frac{1}{2}\int_0^t \Lambda_s\Gamma_s g''(x_s)B_s^2\,ds + \int_0^t \Lambda_s\Gamma_s g(x_s)x_s C_s D_s^{-2}\,dy_s\\
&\quad + \int_0^t \Lambda_s g(x_s)x_s^p\,ds.
\end{aligned}
$$
Conditioning on $\mathcal{Y}_t$,
$$
\begin{aligned}
\int_{\mathbb{R}} \mu_t(x)g(x)\,dx &= \int_0^t\int_{\mathbb{R}} \mu_s(x)g'(x)A_s x\,dx\,ds
+ \frac{1}{2}\int_0^t\int_{\mathbb{R}} \mu_s(x)g''(x)B_s^2\,dx\,ds\\
&\quad + \int_0^t\int_{\mathbb{R}} \mu_s(x)g(x)x C_s D_s^{-2}\,dx\,dy_s
+ \int_0^t\int_{\mathbb{R}} x^p g(x)\alpha_s(x)\,dx\,ds.
\end{aligned}
$$
Integrating by parts in $x$, we see that $\mu_t(\cdot)$ satisfies the stochastic partial differential equation
$$
\begin{aligned}
\mu_t(x) &= -\int_0^t \frac{\partial}{\partial x}\big(\mu_s(x)A_s x\big)\,ds
+ \frac{1}{2}\int_0^t \frac{\partial^2}{\partial x^2}\big(\mu_s(x)\big)B_s^2\,ds\\
&\quad + \int_0^t \mu_s(x)x C_s D_s^{-2}\,dy_s + \int_0^t x^p\alpha_s(x)\,ds.
\end{aligned}
$$
It can be verified that (0.13) is a solution of the above equation if the time-varying coefficients $s_t(0), \dots, s_t(p)$ satisfy the ordinary differential equations (0.14).
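As a small sanity check on the first equation in (0.14): with the coefficients frozen at constants it is a scalar linear ODE with the closed-form solution $s_t(p) = (1 - e^{-p\kappa t})/(p\kappa)$, $\kappa = A + \Sigma^{-1}B^2$, which a simple Euler integration should reproduce. The constant coefficient values below are hypothetical.

```python
import numpy as np

A, B2, Sigma, p = 0.5, 0.3, 1.0, 3   # hypothetical frozen coefficients
kappa = A + B2 / Sigma               # kappa = A + Sigma^{-1} B^2

# Euler integration of ds(p)/dt = -p * kappa * s(p) + 1, with s_0(p) = 0
dt, n_steps = 1e-4, 20_000
s = 0.0
for _ in range(n_steps):
    s += dt * (-p * kappa * s + 1.0)
t = dt * n_steps

closed = (1.0 - np.exp(-p * kappa * t)) / (p * kappa)  # exact solution of the linear ODE
assert abs(s - closed) < 1e-3
```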
Problem 5. Give a detailed proof of Lemma 5.7.1.
Solution:
Let us recall
$$
\lambda_{k+1} = \frac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,
\frac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))}{|B_{k+1}|\,\psi(X_{k+1})}.
$$
Suppose $f, g: \mathbb{R}^m \to \mathbb{R}$ are "test" functions (i.e., measurable functions with compact support). Then, with $E$ (resp. $\bar E$) denoting expectation under $P$ (resp. $\bar P$) and using Bayes' Theorem,
$$
E[f(V_{k+1})g(W_{k+1}) \mid \mathcal{G}_k]
= \frac{\bar E[\Lambda_{k+1}f(V_{k+1})g(W_{k+1}) \mid \mathcal{G}_k]}{\bar E[\Lambda_{k+1} \mid \mathcal{G}_k]}
= \bar E[\lambda_{k+1}f(V_{k+1})g(W_{k+1}) \mid \mathcal{G}_k].
$$
Consequently,
$$
\begin{aligned}
E[f(V_{k+1})&g(W_{k+1}) \mid \mathcal{G}_k] = \bar E[\lambda_{k+1}f(V_{k+1})g(W_{k+1}) \mid \mathcal{G}_k]\\
&= \bar E\Big[\frac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,
\frac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))}{|B_{k+1}|\,\psi(X_{k+1})}\\
&\qquad\times f(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))\,g(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1})) \,\Big|\, \mathcal{G}_k\Big]\\
&= \bar E\Big[\frac{\psi(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))}{|B_{k+1}|\,\psi(X_{k+1})}\,
f(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))\\
&\qquad\times \bar E\Big[\frac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,
g(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1})) \,\Big|\, \mathcal{G}_k, X_{k+1}\Big] \,\Big|\, \mathcal{G}_k\Big].
\end{aligned}
$$
Now
$$
\begin{aligned}
\bar E\Big[\frac{\phi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1}))}{|D_{k+1}|\,\phi(Y_{k+1})}\,
&g(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1})) \,\Big|\, \mathcal{G}_k, X_{k+1}\Big]\\
&= \int_{\mathbb{R}^m}\frac{\phi(D^{-1}_{k+1}(y - C_{k+1}X_{k+1}))}{|D_{k+1}|\,\phi(y)}\,
g(D^{-1}_{k+1}(y - C_{k+1}X_{k+1}))\,\phi(y)\,dy\\
&= \int_{\mathbb{R}^m}\phi(u)g(u)\,du,
\end{aligned}
$$
after the change of variable $u = D^{-1}_{k+1}(y - C_{k+1}X_{k+1})$. A similar calculation for the outer expectation shows that
$$
E[f(V_{k+1})g(W_{k+1}) \mid \mathcal{G}_k]
= \int_{\mathbb{R}^m}\psi(u)f(u)\,du\,\int_{\mathbb{R}^m}\phi(u)g(u)\,du,
$$
which finishes the proof.
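The conclusion of the proof can be checked by Monte Carlo in the scalar case: sampling $(X_{k+1}, Y_{k+1})$ as independent standard normals under $\bar P$ and weighting by $\lambda_{k+1}$, the pair $(V_{k+1}, W_{k+1})$ should behave like independent standard normals. The coefficients below are hypothetical (chosen with $|B|, |D| < 1$ so the weights have finite variance), and the tolerances are loose statistical bounds.

```python
import numpy as np

def npdf(u):
    # standard normal density
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(3)
A, B, C, D = 0.5, 0.8, 1.0, 0.9   # hypothetical scalar coefficients
x_k = 1.0                          # a fixed conditioning value for X_k
n = 500_000

# under P-bar, X_{k+1} and Y_{k+1} are independent standard normals
X1 = rng.standard_normal(n)
Y1 = rng.standard_normal(n)

lam = (npdf((Y1 - C * X1) / D) / (D * npdf(Y1))) * \
      (npdf((X1 - A * x_k) / B) / (B * npdf(X1)))

V = (X1 - A * x_k) / B   # candidate state noise V_{k+1}
W = (Y1 - C * X1) / D    # candidate observation noise W_{k+1}

assert abs(lam.mean() - 1.0) < 0.05            # lambda has mean one
assert abs((lam * V).mean()) < 0.05            # E[V] = 0 under P
assert abs((lam * V * V).mean() - 1.0) < 0.1   # E[V^2] = 1 under P
assert abs((lam * V * W).mean()) < 0.05        # V and W uncorrelated under P
```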
Problem 6. Prove (5.7.5), (5.7.6), (5.7.7) and (5.7.3).
Solution:
We have to show that
$$
\alpha_{k+1}(x) = \Psi_{k+1}(x, Y_{k+1})\int_{\mathbb{R}^m}\Phi(x, z)\alpha_k(z)\,dz. \tag{0.15}
$$
Here
$$
\Psi_{k+1}(x, Y_{k+1}) = \frac{\psi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}x))}{|B_{k+1}|\,|D_{k+1}|\,\psi(Y_{k+1})} \tag{0.16}
$$
and
$$
\Phi(x, z) = \phi(B^{-1}_{k+1}(x - A_{k+1}z)). \tag{0.17}
$$
For any "test" function $g$,
$$
\begin{aligned}
\int_{\mathbb{R}^m} g(x)\alpha_{k+1}(x)\,dx &= \bar E[\Lambda_{k+1}g(X_{k+1}) \mid \mathcal{Y}_{k+1}]
= \bar E[\Lambda_k\lambda_{k+1}g(X_{k+1}) \mid \mathcal{Y}_{k+1}]\\
&= \bar E\Big[\Lambda_k\,\frac{\phi(B^{-1}_{k+1}(X_{k+1} - A_{k+1}X_k))\,\psi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}X_{k+1}))}{|B_{k+1}|\,|D_{k+1}|\,\phi(X_{k+1})\,\psi(Y_{k+1})}\,g(X_{k+1}) \,\Big|\, \mathcal{Y}_{k+1}\Big]\\
&= \frac{1}{|B_{k+1}|\,|D_{k+1}|\,\psi(Y_{k+1})}\\
&\quad\times \bar E\Big[\Lambda_k\int_{\mathbb{R}^m}\frac{\phi(B^{-1}_{k+1}(x - A_{k+1}X_k))\,\psi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}x))}{\phi(x)}\,\phi(x)g(x)\,dx \,\Big|\, \mathcal{Y}_{k+1}\Big].
\end{aligned}
$$
The last equality follows from the fact that $X_{k+1}$ has distribution $\phi$ and is independent of everything else under $\bar P$. Also, note that given $Y_{k+1}$ we need condition only on $\mathcal{Y}_k$, to get
$$
\int_{\mathbb{R}^m} g(x)\alpha_{k+1}(x)\,dx = \frac{1}{|B_{k+1}|\,|D_{k+1}|\,\psi(Y_{k+1})}
\int_{\mathbb{R}^m}\int_{\mathbb{R}^m}\phi(B^{-1}_{k+1}(x - A_{k+1}z))\,\psi(D^{-1}_{k+1}(Y_{k+1} - C_{k+1}x))\,g(x)\alpha_k(z)\,dx\,dz.
$$
This holds for all "test" functions $g$, so we can conclude that (0.15) holds.
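For a scalar linear-Gaussian model the recursion (0.15) can be implemented on a grid and compared against the Kalman filter, to which it must reduce once $\alpha$ is normalized (the constants in $\Psi_{k+1}$ then drop out). A sketch with hypothetical scalar coefficients:

```python
import numpy as np

def npdf(u):
    # standard normal density
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(4)
A, B, C, D = 0.9, 0.5, 1.0, 0.4   # hypothetical scalar model coefficients

# simulate X_{k+1} = A X_k + B v_{k+1}, Y_{k+1} = C X_{k+1} + D w_{k+1}
K, x, Y = 10, 0.0, []
for _ in range(K):
    x = A * x + B * rng.standard_normal()
    Y.append(C * x + D * rng.standard_normal())

z = np.linspace(-6.0, 6.0, 1201)
dz = z[1] - z[0]
kern = npdf((z[:, None] - A * z[None, :]) / B) / B   # transition density in x, Phi/|B|
alpha = npdf(z)          # alpha_0 = N(0, 1), matching the Kalman initialization below
m, P = 0.0, 1.0
for y in Y:
    pred = kern @ alpha * dz                  # int Phi(x, z) alpha_k(z) dz
    alpha = npdf((y - C * z) / D) * pred      # Psi factor; constants cancel on normalizing
    alpha /= alpha.sum() * dz
    # reference Kalman recursion for the same model
    mp, Pp = A * m, A * A * P + B * B
    G = Pp * C / (C * C * Pp + D * D)
    m, P = mp + G * (y - C * mp), (1.0 - G * C) * Pp

grid_m = (z * alpha).sum() * dz
grid_P = (z * z * alpha).sum() * dz - grid_m ** 2
assert abs(grid_m - m) < 0.02
assert abs(grid_P - P) < 0.02
```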
Let
$$
S_k = \sum_{m=1}^{k} X_{m-1}\otimes X_{m-1},\qquad
S^{ij}_k = \sum_{m=1}^{k}\langle X_{m-1}, e_i\rangle\langle X_{m-1}, e_j\rangle,
$$
and write $\beta^{ij}_k(x)$ for the density such that
$$
\bar E[\Lambda_k S^{ij}_k I(X_k \in dx) \mid \mathcal{Y}_k] = \beta^{ij}_k(x)\,dx,\qquad
\bar E[\Lambda_k S^{ij}_k g(X_k) \mid \mathcal{Y}_k] = \int_{\mathbb{R}^m}\beta^{ij}_k(x)g(x)\,dx.
$$
We have to show that
$$
\beta^{ij}_{k+1}(x) = \Psi_{k+1}(x, Y_{k+1})\Big[\int_{\mathbb{R}^m}\Phi(x, z)\beta^{ij}_k(z)\,dz
+ \int_{\mathbb{R}^m}\langle z, e_i\rangle\langle z, e_j\rangle\,\alpha_k(z)\Phi(x, z)\,dz\Big].
$$
Note that $S^{ij}_{k+1} = S^{ij}_k + \langle X_k, e_i\rangle\langle X_k, e_j\rangle$; therefore
$$
\begin{aligned}
\int_{\mathbb{R}^m}\beta^{ij}_{k+1}(x)g(x)\,dx
&= \bar E[\Lambda_{k+1}S^{ij}_k g(X_{k+1}) \mid \mathcal{Y}_{k+1}]
+ \bar E[\Lambda_{k+1}\langle X_k, e_i\rangle\langle X_k, e_j\rangle g(X_{k+1}) \mid \mathcal{Y}_{k+1}]\\
&= \bar E[\lambda_{k+1}\Lambda_k S^{ij}_k g(X_{k+1}) \mid \mathcal{Y}_{k+1}]
+ \bar E[\lambda_{k+1}\Lambda_k\langle X_k, e_i\rangle\langle X_k, e_j\rangle g(X_{k+1}) \mid \mathcal{Y}_{k+1}].
\end{aligned}
$$
The remaining steps are the by now familiar calculations, so they are skipped.
Problem 11. Give the proof of Theorem 5.11.4.
Solution:
Here the dynamics are given by
$$
dx_t = g(x_t)\,dt + s(x_t)\,dB_t,\qquad
dy_t = h(x_t)\,dt + \alpha(y_t)\,dW_t.
$$
We have to show that if $\phi \in C^2(\mathbb{R}^d)$ is a real-valued function with compact support, then
$$
\begin{aligned}
\sigma(\phi)_t &= \sigma(\phi)_0 + \int_0^t \sigma(A\phi)_s\,ds
+ \int_0^t \sigma\big[(\nabla\phi(x_s))'\,s(x_s)\,\rho\,\alpha(y_s)^{-2}h(x_s)\big]\,ds\\
&\quad + \int_0^t \big(\alpha^{-1}(y_s)\,\sigma(\phi h)_s\big)\,\alpha^{-1}(y_s)\,dy_s,
\end{aligned}
$$
where
$$
A\phi(x) = \frac{1}{2}\sum_{i,j=1}^{d}(ss')_{ij}(x)\frac{\partial^2\phi(x)}{\partial x_i\partial x_j}
+ \sum_{i=1}^{d} g_i(x)\frac{\partial\phi(x)}{\partial x_i}.
$$
Using the vector Itô rule we establish
$$
\phi(x_t) = \phi(x_0) + \int_0^t A\phi(x_s)\,ds + \int_0^t (\nabla\phi(x_s))'\,s(x_s)\,dB_s. \tag{0.18}
$$
Recall that
$$
\Lambda_t = \exp\Big(\int_0^t (\alpha(y_s)^{-1}h(x_s))'\alpha(y_s)^{-1}\,dy_s
- \frac{1}{2}\int_0^t |\alpha(y_s)^{-1}h(x_s)|^2\,ds\Big)
$$
and
$$
\Lambda_t = 1 + \int_0^t \Lambda_s(\alpha(y_s)^{-1}h(x_s))'\alpha(y_s)^{-1}\,dy_s.
$$
Using the Itô product rule,
$$
\begin{aligned}
\Lambda_t\phi(x_t) &= \phi(x_0) + \int_0^t \Lambda_s A\phi(x_s)\,ds
+ \int_0^t \Lambda_s(\nabla\phi(x_s))'\,s(x_s)\,dB_s\\
&\quad + \int_0^t \Lambda_s\phi(x_s)(\alpha(y_s)^{-1}h(x_s))'\alpha(y_s)^{-1}\,dy_s + [\Lambda, \phi]_t.
\end{aligned}\tag{0.19}
$$
But
$$
[\Lambda, \phi]_t = \Big[\int_0^t \Lambda_s(\alpha(y_s)^{-1}h(x_s))'\alpha(y_s)^{-1}\,dy_s,\
\int_0^t (\nabla\phi(x_s))'\,s(x_s)\,dB_s\Big]
= \int_0^t \Lambda_s(\nabla\phi(x_s))'\,s(x_s)\,\rho\,\alpha(y_s)^{-2}h(x_s)\,ds.
$$
Conditioning both sides of (0.19) on $\mathcal{Y}_t$ gives the result.