
«Random Processes»

Lecture #2: Random Vectors

Andreas Polydoros


Introduction

Contents:
- Definitions: Correlation and covariance matrix
- Linear transformations: Spectral shaping and factorization
- The whitening concept
- The Karhunen-Loeve expansion


Definition - Correlation and covariance matrix:

Random vectors come about either by "sampling" one random process $X(u,t)$ at the instants $t_1, t_2, \ldots, t_N$, giving the components $X(u,t_1), X(u,t_2), \ldots, X(u,t_N)$, or by "observing" a number of processes at the same time. The two constructions are mathematically equivalent.

The mean vector of $X(u)$ is

$$ m_X = \mathbb{E}\{X(u)\} = \mathbb{E}\left\{ \begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N) \end{bmatrix} \right\} = \begin{bmatrix} m_X(1) \\ \vdots \\ m_X(N) \end{bmatrix} $$


The autocorrelation matrix is:

$$ R_X = \mathbb{E}\{X(u)\,X^{*T}(u)\} = \mathbb{E}\left\{ \begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N) \end{bmatrix} \begin{bmatrix} X^*(u,1) & \cdots & X^*(u,N) \end{bmatrix} \right\} = \begin{bmatrix} R_X(1,1) & R_X(1,2) & \cdots & R_X(1,N) \\ R_X(2,1) & R_X(2,2) & \cdots & R_X(2,N) \\ \vdots & & & \vdots \\ R_X(N,1) & R_X(N,2) & \cdots & R_X(N,N) \end{bmatrix} $$


The covariance matrix is:

$$ K_X = \mathbb{E}\{(X(u) - m_X)(X(u) - m_X)^{*T}\} = \mathbb{E}\{X(u)X^{*T}(u)\} - \mathbb{E}\{X(u)\}\,m_X^{*T} - m_X\,\mathbb{E}\{X^{*T}(u)\} + m_X m_X^{*T} $$

$$ \Rightarrow\ K_X = R_X - m_X m_X^{*T} $$
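As an illustrative sketch (not part of the original slides), the following NumPy snippet estimates the mean vector, correlation matrix, and covariance matrix from an ensemble of realizations and checks the identity $K_X = R_X - m_X m_X^{*T}$; the ensemble model, sample size, and variable names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 200_000

# Toy ensemble of complex random vectors X(u) with a nonzero mean (arbitrary model).
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
X = A @ (rng.normal(size=(N, trials)) + 1j * rng.normal(size=(N, trials))) + 1.0

m_X = X.mean(axis=1, keepdims=True)                # sample mean vector m_X
R_X = (X @ X.conj().T) / trials                    # sample correlation matrix R_X = E{X X^H}
K_X = ((X - m_X) @ (X - m_X).conj().T) / trials    # sample covariance matrix K_X

# Identity from the slides: K_X = R_X - m_X m_X^H (holds exactly for sample moments).
print(np.allclose(K_X, R_X - m_X @ m_X.conj().T))
```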


Linear transformations

Suppose we are given a random vector $X(u)$ and we construct another random vector $Y(u)$ through the linear transformation

$$ Y(u) = H\,X(u) \qquad \left( y_m = \sum_{n=1}^{N} h_{mn} x_n,\quad m = 1,2,\ldots,M \right) $$

Question: What is the second-moment description of $Y(u)$?


$$ m_Y = \mathbb{E}\{Y(u)\} = \begin{bmatrix} \mathbb{E}\{Y(u,1)\} \\ \vdots \\ \mathbb{E}\{Y(u,M)\} \end{bmatrix} = \begin{bmatrix} h_{11} & \cdots & h_{1N} \\ \vdots & & \vdots \\ h_{M1} & \cdots & h_{MN} \end{bmatrix} \cdot \begin{bmatrix} \mathbb{E}\{X(u,1)\} \\ \vdots \\ \mathbb{E}\{X(u,N)\} \end{bmatrix} \ \Rightarrow\ m_Y = H\,m_X $$

In the above derivation we claimed that:

$$ \mathbb{E}\left\{ \sum_{n=1}^{N} h_{mn} X(u,n) \right\} \stackrel{?}{=} \sum_{n=1}^{N} h_{mn}\,\mathbb{E}\{X(u,n)\} $$

In other words, we assumed that expectation and summation can be interchanged; this holds only if $R_X(t_1,t_2)$ is finite.


For the autocorrelation matrix of $Y(u)$:

$$ R_Y = \mathbb{E}\{Y(u)\,Y^{*T}(u)\} = \mathbb{E}\{(H X(u))(H X(u))^{*T}\} = \mathbb{E}\{H\,X(u)X^{*T}(u)\,H^{*T}\} = H\,\mathbb{E}\{X(u)X^{*T}(u)\}\,H^{*T} $$

$$ \Rightarrow\ R_Y = H\,R_X\,H^{*T} $$
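The second-moment propagation rules $m_Y = H m_X$ and $R_Y = H R_X H^{*T}$ can be checked numerically. Below is a minimal sketch (real-valued case) with a hypothetical transformation and toy ensemble; it is not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, trials = 3, 2, 500_000

H = rng.normal(size=(M, N))                      # hypothetical M x N transformation
m_X = np.array([1.0, -2.0, 0.5])                 # hypothetical mean of X(u)
X = m_X[:, None] + rng.normal(size=(N, trials))  # toy ensemble with mean m_X
Y = H @ X                                        # Y(u) = H X(u)

R_X = (X @ X.T) / trials                         # sample correlation of X
R_Y = (Y @ Y.T) / trials                         # sample correlation of Y

print(np.allclose(Y.mean(axis=1), H @ m_X, atol=1e-2))  # m_Y = H m_X (up to sampling error)
print(np.allclose(R_Y, H @ R_X @ H.T))                   # R_Y = H R_X H^T (exact for sample moments)
```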


“White” vectors

A useful concept is that of a "white" vector $W(u)$: a random vector with mean $m_W = 0$ and covariance matrix

$$ R_W = K_W = \sigma^2 I = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{bmatrix} $$

where $\sigma^2$ is a constant and $I$ is the identity matrix. This means that all components $w_i$ of $W(u)$ are uncorrelated with each other, each with zero mean and variance $\sigma^2$.


Spectral Shaping

Problem: Given the white vector $W(u)$, can we find a linear transformation $X(u) = H\,W(u)$ such that the resultant vector has a given mean $m_X$ and a given covariance matrix $K_X$?


Since $m_W = 0$, it follows that $m_X = H\,m_W = 0$.

The covariance matrix of $X(u)$ is:

$$ R_X = H\,R_W\,H^{*T} = \sigma^2 H H^{*T} $$

Therefore, spectral shaping is equivalent to the following:

Given a correlation matrix $R_X$, find an $H$ such that $R_X = H H^{*T}$.

Note: $\sigma^2$ can be absorbed into the given $R_X$ by creating a "new" given $R_X'$. Other names for this problem are "matrix factorization" and "square root of a matrix".


Definition: A complex (real) matrix $A$ is called Hermitian symmetric iff:

$$ A = A^{*T} $$

Definition: A complex (real) matrix $A$ is called unitary (orthogonal) iff:

$$ A A^{*T} = I $$


Theorem: If $K$ is Hermitian symmetric, then there exists a unitary matrix $E$ such that

$$ K = E\,\Lambda\,E^{*T}, \qquad \Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N \end{bmatrix} $$

with $\lambda_n,\ n = 1,2,\ldots,N$, the eigenvalues of $K$ (not necessarily distinct). In other words: Hermitian symmetric matrices are always diagonalizable.

Theorem: A necessary and sufficient condition for such a $K$ to be nonnegative definite is that $\lambda_n \geq 0,\ n = 1,2,\ldots,N$.


Theorem: Let $K$ be Hermitian symmetric. Then to each distinct (simple) eigenvalue there corresponds an eigenvector which is orthogonal (orthonormal) to all others. To each eigenvalue of multiplicity $k$ there correspond $k$ linearly independent eigenvectors, which are orthogonal to all eigenvectors of the remaining eigenvalues. These $k$ eigenvectors can be made orthogonal by application of the Gram-Schmidt procedure.

In summary, every Hermitian $(N \times N)$ matrix $K$ has $N$ orthonormal eigenvectors $\{e_n\}_{n=1}^{N}$, associated with its $N$ eigenvalues $\{\lambda_n\}_{n=1}^{N}$. In fact, the matrix $E$ consists of these $e_n$'s as its columns, i.e.,

$$ E = [\,e_1 \mid e_2 \mid \cdots \mid e_N\,] $$


Returning to the factorization problem, we want to find an $H$ such that $R_X = H H^{*T}$. Writing $R_X = E\,\Lambda\,E^{*T}$ (since $R_X$ is Hermitian) we have

$$ R_X = E\,\Lambda\,E^{*T} = E\,\Lambda^{1/2}\Lambda^{1/2}E^{*T} = E\,\Lambda^{1/2}\left(\Lambda^{1/2}\right)^{*T}E^{*T} = \left(E\,\Lambda^{1/2}\right)\left(E\,\Lambda^{1/2}\right)^{*T} = H H^{*T} $$

where

$$ \Lambda^{1/2} = \begin{bmatrix} \sqrt{\lambda_1} & & & \\ & \sqrt{\lambda_2} & & \\ & & \ddots & \\ & & & \sqrt{\lambda_N} \end{bmatrix} $$


We have arrived at a solution where

$$ H = E\,\Lambda^{1/2} $$

However, this solution is not unique. To see this, take any unitary matrix $U$ and observe that:

$$ R_X = H H^{*T} = \left(E\,\Lambda^{1/2}\right)\left(E\,\Lambda^{1/2}\right)^{*T} = E\,\Lambda^{1/2}\,U U^{*T}\left(E\,\Lambda^{1/2}\right)^{*T} = \underbrace{\left(E\,\Lambda^{1/2}U\right)}_{\text{another } H}\left(E\,\Lambda^{1/2}U\right)^{*T} $$


Sometimes we take $U = E^{*T}$ and the resulting $H$ is given as:

$$ H = E\,\Lambda^{1/2}U = E\,\Lambda^{1/2}E^{*T} $$

This matrix is often called the "square root" of $R_X$. From an applications viewpoint this factorization is useful in simulation, i.e., creating a random vector with desired correlation properties, starting from a "random number generator".

Note: if $m_X \neq 0$, then the appropriate linear transformation is $X = H W + m_X$, where the factorization is done on $K_X$, not on $R_X$.
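A minimal simulation sketch of spectral shaping, using a hypothetical target covariance and mean (not from the slides): factorize the covariance via its eigendecomposition, form $H = E\Lambda^{1/2}$, and drive it with a white vector.

```python
import numpy as np

rng = np.random.default_rng(2)
K_target = np.array([[2.0, 0.8, 0.3],
                     [0.8, 1.5, 0.4],
                     [0.3, 0.4, 1.0]])      # hypothetical desired covariance
m_X = np.array([1.0, 0.0, -1.0])            # hypothetical desired mean

lam, E = np.linalg.eigh(K_target)           # K = E diag(lam) E^T
H = E @ np.diag(np.sqrt(lam))               # one valid factor: H H^T = K

W = rng.normal(size=(3, 400_000))           # white vector (zero mean, unit variance)
X = H @ W + m_X[:, None]                    # X = H W + m_X (factorization done on K_X)

print(np.allclose(H @ H.T, K_target))       # exact factorization check
print(np.round(np.cov(X), 2))               # sample covariance, close to K_target
```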


Example: The required covariance matrix is:

$$ K_X = \begin{bmatrix} 1 & -1/2 & -1/2 \\ -1/2 & 1 & -1/2 \\ -1/2 & -1/2 & 1 \end{bmatrix} $$

The eigenvalues are found by solving the characteristic equation:

$$ \det\{K_X - \lambda_n I\} = 0,\quad n = 1,2,3 \ \Rightarrow\ \lambda_1 = 0,\ \lambda_2 = \lambda_3 = 3/2 $$


Solving for the corresponding eigenvectors we get:

$$ \lambda_1 = 0 \ \Rightarrow\ e_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad \lambda_2 = \frac{3}{2} \ \Rightarrow\ e_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \qquad \lambda_3 = \frac{3}{2} \ \Rightarrow\ e_3 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix} $$


Therefore, we could choose the linear transformation:

$$ H = E\,\Lambda^{1/2} = [\,e_1 \mid e_2 \mid e_3\,]\begin{bmatrix} 0 & 0 & 0 \\ 0 & \sqrt{3/2} & 0 \\ 0 & 0 & \sqrt{3/2} \end{bmatrix} = \begin{bmatrix} 0 & \sqrt{3}/2 & 1/2 \\ 0 & -\sqrt{3}/2 & 1/2 \\ 0 & 0 & -1 \end{bmatrix} $$

$$ X = H W = \begin{bmatrix} 0 & \sqrt{3}/2 & 1/2 \\ 0 & -\sqrt{3}/2 & 1/2 \\ 0 & 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} W(u,1) \\ W(u,2) \\ W(u,3) \end{bmatrix} $$

Notice that $X$ does not depend on $W(u,1)$.
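For the worked example, the eigendecomposition and the factorization check can be reproduced with a few lines of NumPy, as sketched below (illustrative, not part of the slides). Because $\lambda = 3/2$ is a repeated eigenvalue, the eigenvectors returned by the library may differ from those chosen above, but any resulting $H = E\Lambda^{1/2}$ still satisfies $H H^T = K_X$, and the column associated with $\lambda_1 = 0$ is zero, so $X$ indeed ignores $W(u,1)$.

```python
import numpy as np

K_X = np.array([[ 1.0, -0.5, -0.5],
                [-0.5,  1.0, -0.5],
                [-0.5, -0.5,  1.0]])

lam, E = np.linalg.eigh(K_X)                         # eigenvalues in ascending order
print(np.round(lam, 6))                              # approximately [0, 1.5, 1.5]

H = E @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))    # H = E Lambda^{1/2} (clip tiny negative round-off)
print(np.allclose(H @ H.T, K_X))                     # factorization check: H H^T = K_X
print(np.allclose(H[:, 0], 0.0, atol=1e-7))          # column for lambda_1 = 0 is zero -> X ignores W(u,1)
```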


In the above, we solved the problem of spectral shaping, which is equivalent to a covariance matrix factorization. The solution was unconstrained, i.e., we imposed no restrictions on the nature of the linear transformation $H$.

Now assume that we impose the constraint that the linear transformation be causal.


Definition: A causal linear transformation is equivalent to $H$ being lower triangular, i.e., the wanted linear transformation is

$$ \begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N) \end{bmatrix} = \begin{bmatrix} h_{11} & 0 & \cdots & 0 \\ h_{21} & h_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ h_{N1} & h_{N2} & \cdots & h_{NN} \end{bmatrix} \cdot \begin{bmatrix} W(u,1) \\ \vdots \\ W(u,N) \end{bmatrix} + m_X $$

$$ \left( X(u,n) = \sum_{l=1}^{n} h_{nl}\,W(u,l),\quad n = 1,2,\ldots,N \right) $$

The problem can now be restated as:

Find a lower-triangular matrix $H$ such that $K_X = H H^{*T}$.

Note: This factorization is called the "Cholesky factorization" of positive definite matrices.


Example (real-valued covariance matrix):

$$ \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1N} \\ k_{21} & k_{22} & \cdots & k_{2N} \\ \vdots & & & \vdots \\ k_{N1} & k_{N2} & \cdots & k_{NN} \end{bmatrix} = \begin{bmatrix} h_{11} & 0 & \cdots & 0 \\ h_{21} & h_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ h_{N1} & h_{N2} & \cdots & h_{NN} \end{bmatrix} \cdot \begin{bmatrix} h_{11} & h_{21} & \cdots & h_{N1} \\ 0 & h_{22} & \cdots & h_{N2} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & h_{NN} \end{bmatrix} $$

$$ k_{11} = h_{11}^2 \ \Rightarrow\ h_{11} = \pm\sqrt{k_{11}}, \qquad k_{12} = h_{11}h_{21} \ \Rightarrow\ h_{21} = k_{12}/h_{11} $$

In the same manner we can find the rest of the $h_{ij}$.
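In practice the causal factor is obtained with a library Cholesky routine. A small sketch follows, using a hypothetical positive-definite covariance (the example $K_X$ of the earlier slides is singular, so Cholesky does not apply to it directly); the first two hand-computed relations are also checked.

```python
import numpy as np

K = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])        # hypothetical positive-definite covariance

H = np.linalg.cholesky(K)              # lower-triangular H with H H^T = K
print(np.allclose(H @ H.T, K))         # True
print(np.allclose(H, np.tril(H)))      # causal: H is lower triangular

# First steps of the recursion sketched above:
print(np.isclose(H[0, 0], np.sqrt(K[0, 0])))    # h11 = +sqrt(k11)
print(np.isclose(H[1, 0], K[0, 1] / H[0, 0]))   # h21 = k12 / h11
```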


Properties - Spectral Resolution

Assume a real covariance matrix $K_X$. We can rewrite the factorization $K_X = E\,\Lambda\,E^{T}$ as

$$ K_X = [\,\lambda_1 e_1 \mid \lambda_2 e_2 \mid \cdots \mid \lambda_N e_N\,]\begin{bmatrix} e_1^T \\ e_2^T \\ \vdots \\ e_N^T \end{bmatrix} \qquad \text{or} \qquad K_X = \sum_{n=1}^{N} \lambda_n\, e_n e_n^T $$

This shows that $K_X$ can be decomposed (resolved) into a sum of $N$ matrices, each of the form $e_n e_n^T$ with weight $\lambda_n$.

The set of $N$ eigenvectors $\{e_n\}_{n=1}^{N}$ constitutes a basis for the $N$-dimensional vector space.
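A short sketch verifying the spectral resolution $K_X = \sum_n \lambda_n e_n e_n^T$ on the example covariance used earlier (illustrative, not from the slides):

```python
import numpy as np

K_X = np.array([[ 1.0, -0.5, -0.5],
                [-0.5,  1.0, -0.5],
                [-0.5, -0.5,  1.0]])

lam, E = np.linalg.eigh(K_X)

# Spectral resolution: sum of N rank-one matrices lambda_n * e_n e_n^T.
K_resolved = sum(lam[n] * np.outer(E[:, n], E[:, n]) for n in range(len(lam)))
print(np.allclose(K_resolved, K_X))    # True
```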


Every deterministic vector $A$ can be expanded into a series

$$ A = \sum_{n=1}^{N} a_n e_n, \qquad a_n = \langle A, e_n\rangle = A^T e_n $$

where $a_n$ is the projection of $A$ on the basis vector $e_n$.

Thus, the vector $A$ can be described in terms of its "projections" $\{a_n\}$ along the coordinates $\{e_n\}_{n=1}^{N}$.


It is clear that we can create random vectors by choosing these projections to be random variables $\{A_n(u)\}$, i.e.,

$$ A(u) = \sum_{n=1}^{N} A_n(u)\, e_n $$

Note: If the eigenvectors have the form

$$ e_n = [\,0, 0, \ldots, 0, 1, 0, \ldots, 0\,]^T $$

with 1 in the $n$-th position, then

$$ A(u) = \begin{bmatrix} A_1(u) \\ \vdots \\ A_N(u) \end{bmatrix} $$


Properties - Directional Preference

Suppose we are given the covariance matrix $K_X$ of some vector $X(u)$ and would like to project this vector on some unit-length vector $b$ ($\sum_{n=1}^{N} b_n^2 = 1$). The projection is the inner product:

$$ Y(u) = \langle X(u), b\rangle = X^T(u)\, b $$

Assuming that $m_X = 0$, the variance of $Y(u)$ equals

$$ \sigma_Y^2 = \mathrm{var}\{Y(u)\} = \mathbb{E}\{Y^2(u)\} = \mathbb{E}\{b^T X(u)\,X^T(u)\, b\} = b^T K_X\, b $$

i.e., the variance of $Y(u)$ is a quadratic functional of the $b_n$'s.


"Directional preference" translates to finding those directions $b$ where the variance is highest (or lowest). This is an optimization problem: we want to maximize the quadratic form $\sigma_Y^2 = b^T K_X b$ subject to the unit-norm constraint. To solve this, we expand $b$ on the orthonormal basis $\{e_n\}_{n=1}^{N}$, i.e.,

$$ b = \sum_{n=1}^{N} b_n e_n $$

so that $\sum_{n=1}^{N} b_n^2 = 1$. The quadratic form can now be written as:

$$ \sigma_Y^2 = b^T K_X b = \left(\sum_{n=1}^{N} b_n e_n\right)^T K_X \left(\sum_{m=1}^{N} b_m e_m\right) = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m\, e_n^T K_X e_m $$


Recalling that $K_X e_m = \lambda_m e_m$, $\sigma_Y^2$ can be written as

$$ \sigma_Y^2 = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m \lambda_m\, e_n^T e_m = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m \lambda_m\, \delta_{nm} \ \Rightarrow\ \sigma_Y^2 = \sum_{n=1}^{N} \lambda_n b_n^2 $$

Now the original problem can be equivalently stated as follows:

Let $u_n = b_n^2$. We want to maximize $U = \sum_{n=1}^{N} \lambda_n u_n$ subject to the constraint $\sum_{n=1}^{N} u_n = 1$, with $u_i \geq 0$ and $\lambda_i \geq 0$.


Example ($N = 2$): For $\lambda_2 > \lambda_1$ the optimal solution is $u_1 = 0,\ u_2 = 1$. The general solution is to choose $u_m = 1$, where $\lambda_m = \max_n\{\lambda_n\}$, and $u_n = 0$ for $n \neq m$. Since $u_i = b_i^2,\ i = 1,2,\ldots,N$, it follows that:

$$ b_m = \pm 1, \qquad b_n = 0,\ n \neq m $$


The resulting variance is the maximum eigenvalue:

$$ \sigma_Y^2 = \lambda_m = \max_n\{\lambda_n\} $$

Recalling that $b = \sum_{n=1}^{N} b_n e_n$, it follows that $b = e_{\max}$, where $e_{\max}$ is the eigenvector of $K_X$ corresponding to the largest eigenvalue.

Question: What is the direction that minimizes the variance?
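A small numerical sketch of directional preference with a hypothetical covariance (not from the slides): the variance $b^T K_X b$ over random unit-length directions never exceeds $\lambda_{\max}$, and it is attained along $e_{\max}$.

```python
import numpy as np

rng = np.random.default_rng(3)
K_X = np.array([[3.0, 1.0, 0.5],
                [1.0, 2.0, 0.3],
                [0.5, 0.3, 1.0]])      # hypothetical covariance

lam, E = np.linalg.eigh(K_X)           # ascending eigenvalues
e_max = E[:, -1]                       # eigenvector of the largest eigenvalue

# Variance along 1000 random unit-length directions b: sigma_Y^2 = b^T K_X b.
b = rng.normal(size=(3, 1000))
b /= np.linalg.norm(b, axis=0)
variances = np.einsum('ij,ik,kj->j', b, K_X, b)

print(variances.max() <= lam[-1] + 1e-12)        # never exceeds lambda_max
print(np.isclose(e_max @ K_X @ e_max, lam[-1]))  # attained along e_max
```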


The whitening concept

Converse to the factorization or spectral shaping problem

Problem statement: Given a random vector $X(u)$ with some mean $m_X$ and covariance $K_X$, find a linear transformation $G$ such that the output $W(u)$ is a white vector.

(The slide's block diagram shows $X(u)$ passing through $G$ to produce $W(u)$, distinguishing the cases $m_X = 0$ and $m_X \neq 0$.)


From previous theory we know that the covariance matrix of the "output vector" $W(u)$ is

$$ K_W = G\,K_X\,G^{T} $$

For $W(u)$ to be white we require $K_W = I$.

We also know that $K_X$ can be factorized (assuming real matrices) as $K_X = H H^{T}$.

Thus, we require the following equality to hold:

$$ G H H^T G^T = I \ \Rightarrow\ (GH)(GH)^T = I $$


The simplest form of $G$ that satisfies this equality is $G = H^{-1}$.

However, since $H = E\,\Lambda^{1/2}U$, we can express $G$ in terms of $E$ and $\Lambda$ as

$$ G = H^{-1} = \left(E\,\Lambda^{1/2}U\right)^{-1} = U^{-1}\Lambda^{-1/2}E^{-1} $$

Recalling that $U$ is by definition an arbitrary unitary matrix, and that $E$ is also unitary since its columns are the orthonormal eigenvectors of $K_X$, we end up with

$$ G = U^{T}\,\Lambda^{-1/2}\,E^{T} $$
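A minimal whitening sketch, taking $U = I$ so that $G = \Lambda^{-1/2}E^T$, on a hypothetical nonsingular covariance (illustrative only; names and values are not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
K_X = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.4],
                [0.3, 0.4, 1.0]])      # hypothetical (nonsingular) covariance

lam, E = np.linalg.eigh(K_X)
G = np.diag(1.0 / np.sqrt(lam)) @ E.T  # whitening matrix with U = I: G = Lambda^{-1/2} E^T

print(np.allclose(G @ K_X @ G.T, np.eye(3)))   # K_W = G K_X G^T = I

# Apply G to zero-mean colored samples: the result is (approximately) white.
X = E @ np.diag(np.sqrt(lam)) @ rng.normal(size=(3, 300_000))
W = G @ X
print(np.round(np.cov(W), 2))                  # close to the identity matrix
```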


The Karhunen-Loeve expansion

Starting from the coloring-problem equation $X(u) = E\,\Lambda^{1/2}U\,W(u)$, we define the following random vectors:

$$ Y(u) = U\,W(u), \qquad Z(u) = \Lambda^{1/2}\,Y(u) $$

so that $X(u) = E\,Z(u)$.

Claim: The vector $Y(u)$ is also white, and the vector $Z(u)$ has uncorrelated components, each with a different variance.


Proof: Using the standard formulas we obtain

$$ \left.\begin{aligned} m_Y &= U\,m_W = 0 \\ K_Y &= U\,K_W\,U^T = U U^T = I \end{aligned}\right\} \ \Rightarrow\ Y(u) \text{ is white} $$

$$ \left.\begin{aligned} m_Z &= \Lambda^{1/2} m_Y = 0 \\ K_Z &= \Lambda^{1/2} K_Y \Lambda^{1/2} = \Lambda^{1/2}\Lambda^{1/2} = \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{bmatrix} \end{aligned}\right\} \ \Rightarrow\ \mathrm{var}\{Z(u,n)\} = \lambda_n,\quad n = 1,2,\ldots,N $$


Rewriting the coloring-problem equation as $X(u) = E\,Z(u)$ we have:

$$ X(u) = [\,e_1 \mid e_2 \mid \cdots \mid e_N\,]\begin{bmatrix} Z_1(u) \\ \vdots \\ Z_N(u) \end{bmatrix} \ \Rightarrow\ X(u) = \sum_{n=1}^{N} Z_n(u)\, e_n \ \Rightarrow\ X(u) = \sum_{n=1}^{N} \sqrt{\lambda_n}\, W_n(u)\, e_n $$

This is the Karhunen-Loeve expansion of $X(u)$. It states that every random vector can be written as a sum of orthonormal eigenvectors $\{e_n\}$, each weighted by a random variable $W_n(u)$ and further scaled by $\sqrt{\lambda_n}$.
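A short sketch of the vector Karhunen-Loeve expansion (illustrative assumptions: zero mean and a hypothetical covariance, not from the slides): the projections $Z_n(u) = \langle X(u), e_n\rangle$ are computed from samples, their covariance is approximately $\mathrm{diag}(\lambda_1,\ldots,\lambda_N)$, and $X(u)$ is reconstructed exactly as $\sum_n Z_n(u)\,e_n$.

```python
import numpy as np

rng = np.random.default_rng(5)
K_X = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.4],
                [0.3, 0.4, 1.0]])      # hypothetical covariance of a zero-mean X(u)

lam, E = np.linalg.eigh(K_X)
X = E @ np.diag(np.sqrt(lam)) @ rng.normal(size=(3, 300_000))   # colored samples

Z = E.T @ X                            # KL coefficients: Z_n(u) = <X(u), e_n>
print(np.round(np.cov(Z), 2))          # approximately diag(lambda_1, ..., lambda_N)
print(np.allclose(E @ Z, X))           # reconstruction: X(u) = sum_n Z_n(u) e_n
```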


Note that the Karhunen-Loeve expansion of a random vector $X(u)$ is simply an expansion on a certain basis $\{e_n\}$ of the $N$-dimensional vector space. The basis is special, however, since (as we just showed) the projections $\langle X(u), e_n\rangle$ are uncorrelated random variables with variance $\lambda_n$. (Projecting on an arbitrary basis would not have the same effect.)

One could say that a random vector has preferences as to how it is going to be distributed in space!