
Page 1: Signal Space

An Overview of Vector Spaces

EEL718 Statistical Signal Processing

Prof. Shankar Prakriya

Indian Institute of Technology Delhi

January 15, 2014


Page 2: Signal Space

Contents

General Motivation

Introduction to Metric Spaces

Introduction to Vector Spaces, Inner Products and Norms - Hilbert and Banach Spaces

Projections

Geometric Notions for Random Variables


Page 3: Signal Space

General Motivation I

Geometric Notions

In 3D geometry, we are used to notions of unit-length basis vectors, dimensionality, the angle between vectors, the dot product, etc. The need for orthogonal and unit-length basis vectors is readily apparent (extension to length-N vectors).

From DSP, we can see that these notions extend to signals - we use orthogonal basis signals of unit energy to represent signals.

In communications, an optimal receiver projects the received signal onto the space spanned by the basis of the modulated signals - the same geometric notions as with vectors are used.

These notions extend to random variables too, and will play a pivotal role in statistical signal processing.


Page 4: Signal Space

Familiar Notions From Geometry I

3D Geometry

"Basis vectors" i1, i2 and i3 "span" any point in 3D, that is, a = a1 i1 + a2 i2 + a3 i3

It is sufficient if the basis vectors are linearly independent, that is, ∑_{k=1}^{3} a_k i_k = 0 iff a1 = a2 = a3 = 0.

What happens when i1, i2 and i3 are not independent?

Why do we prefer i1, i2 and i3 to be orthogonal? When they are orthogonal, the "Gramian matrix" (the matrix of pairwise inner products) is diagonal. When they are orthogonal and of unit length, the Gramian matrix is an identity matrix (see the sketch after this list).

These notions generalize to vectors of larger dimension.

Why do we write one vector as the sum of other vectors?
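As a small illustration (not from the slides; NumPy assumed, example bases made up), the sketch below forms the Gram matrix G with entries G[k, l] = 〈i_k, i_l〉 for an orthonormal, an orthogonal and an oblique basis of R³.

```python
import numpy as np

# Rows of each array are the candidate basis vectors (illustrative values only).
orthonormal = np.eye(3)                    # unit length and mutually orthogonal
orthogonal = np.diag([2.0, 3.0, 0.5])      # mutually orthogonal, not unit length
oblique = np.array([[1.0, 0.0, 0.0],
                    [1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0]])      # linearly independent, not orthogonal

for name, B in [("orthonormal", orthonormal), ("orthogonal", orthogonal), ("oblique", oblique)]:
    G = B @ B.T                            # Gram matrix: G[k, l] = <i_k, i_l>
    print(name)
    print(G)
```

The orthonormal basis gives the identity, the orthogonal one a diagonal matrix, and the oblique one a full Gram matrix, which is what makes computing expansion coefficients harder.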


Page 5: Signal Space

Familiar Notions From Geometry Contd. I

3D Geometry

The "dot product" of a = a1 i1 + a2 i2 + a3 i3 and b = b1 i1 + b2 i2 + b3 i3 is denoted by a.b or 〈a, b〉 = ∑_{k=1}^{3} a_k b_k

〈a, b〉 = ||a|| ||b|| cos(θ), where θ is the angle between the vectors.

This implies that |〈a, b〉| ≤ ||a|| ||b||, with equality holding only when a and b are collinear.

What is the physical meaning of ||a|| cos(θ)? It is the component of a along b.

What is the physical meaning of a − ||a|| cos(θ) b/||b||? (See the sketch below.)
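A quick numerical check of the two questions above (a minimal NumPy sketch with made-up vectors): it computes the component ‖a‖cos(θ) of a along b and verifies that the leftover part is orthogonal to b.

```python
import numpy as np

a = np.array([3.0, 2.0, 1.0])                 # illustrative vectors
b = np.array([1.0, 1.0, 0.0])

comp = np.dot(a, b) / np.linalg.norm(b)       # ||a|| cos(theta): component of a along b
proj = comp * b / np.linalg.norm(b)           # that component as a vector in the direction of b
residual = a - proj                           # a - ||a|| cos(theta) b/||b||

print(comp)                                   # about 3.536
print(np.dot(residual, b))                    # about 0: the residual is orthogonal to b
```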


Page 6: Signal Space

Familiar Notions From Geometry Contd. I

3D Geometry

With a = a1 i1 + a2 i2 + a3 i3, what is the vector in the space spanned by i1 and i2 that is closest to a?

The above is the same as the "projection" of a onto the "subspace" spanned by i1 and i2.

Having an orthogonal basis for the space onto which the vector is to be projected is useful.

||a||² = 〈a, a〉 = ∑_{k=1}^{3} |a_k|² is the squared norm of the vector (squared distance from the origin).

Is the basis for the space unique? Does the angle between a and b depend on the choice of basis? Do the lengths of the vectors change?


Page 7: Signal Space

Metric Spaces I

The notion of metric spaces will be very useful for understanding norms, inner products and other important concepts.

Metric Spaces - Definition

A "metric" d : X × X → R is a function that measures the distance between elements of a set X.

Properties of a Metric

1 d(x, y) = d(y, x)

2 d(x, y) ≥ 0

3 d(x, y) = 0 iff x = y

4 d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality)

A metric space (X, d) is a set X together with a metric d(x, y).


Page 8: Signal Space

Metric Spaces II

Commonly Used Metrics

For M × 1 vectors x and y

d1(x, y) = ∑_{i=1}^{M} |x_i − y_i|

d2(x, y) = (∑_{i=1}^{M} |x_i − y_i|²)^{1/2}

d_p(x, y) = (∑_{i=1}^{M} |x_i − y_i|^p)^{1/p}

d_∞(x, y) = max_{i=1,…,M} |x_i − y_i|

d_p is referred to as the "Minkowski" distance. The metric to use depends on the application and on ease of use - d2(·) is used most often because of its desirable properties. A very large number of metrics are in use - a list of statistical and other distance measures can be found on Wikipedia, for example.
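A minimal NumPy sketch (example vectors are made up) that evaluates the four distances defined above; each agrees with np.linalg.norm(x − y, ord=·).

```python
import numpy as np

x = np.array([1.0, 4.0, -2.0, 0.5])
y = np.array([0.0, 2.0, 1.0, 0.5])
diff = np.abs(x - y)

d1 = diff.sum()                          # sum of absolute differences
d2 = np.sqrt((diff ** 2).sum())          # Euclidean distance
p = 3
dp = (diff ** p).sum() ** (1.0 / p)      # Minkowski distance of order p
d_inf = diff.max()                       # largest single-coordinate difference

print(d1, d2, dp, d_inf)                 # 6.0, 3.742..., 3.301..., 3.0
```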


Page 9: Signal Space

Metric Spaces III

Examples

A quantizer that maps a vector x to its quantized version x̂ uses d1(·) or d2(·)

To measure the distance between binary codewords, one based on the Hamming distance d_H can be used:

d_H(x, y) = ∑_{i=0}^{M−1} (x_i ⊕ y_i)   (modulo-2 sum)

Does it satisfy all the requirements of a metric?
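A tiny sketch (made-up codewords, NumPy assumed) of d_H via element-wise XOR; checking the metric axioms is left as in the question above.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=int)   # example binary codewords
y = np.array([1, 1, 1, 0, 0, 1, 1], dtype=int)

d_hamming = int(np.sum(x ^ y))                   # XOR flags the positions where the bits differ
print(d_hamming)                                 # 3
```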


Page 10: Signal Space

Metric Spaces IV

Some Other Distance Measures

d(x, y) = max_i |x_i − y_i|   (Chebyshev)

d(x, y) = ∑_i x_i y_i / (||x||2 ||y||2)   (Cosine correlation)

d(x, y) = ∑_i |x_i − y_i| / |x_i + y_i|   (Canberra)

Are all of them metrics?
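A short sketch with made-up vectors that evaluates the three measures and probes the question above: the cosine measure of a vector with itself is 1 rather than 0, so as written it behaves as a similarity, not a metric.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 3.0])

chebyshev = np.max(np.abs(x - y))
cosine = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
canberra = np.sum(np.abs(x - y) / np.abs(x + y))
print(chebyshev, cosine, canberra)

# d(x, x) should be 0 for a metric, but the cosine measure of x with itself is 1.
print(np.dot(x, x) / np.linalg.norm(x) ** 2)     # 1.0
```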


Page 11: Signal Space

Metric Spaces V

Signal Representation

In signal representation, d2(·) is often used (root mean-squared error). The Fourier series representations of

x(t) = sin(2πkt/T)

y(t) = sin(2πkt/T) for t ≠ T/3, and y(t) = 3 at t = T/3

will be exactly the same! Convergence is in the mean-square sense. Convergence issues also arise in the description of the Gibbs phenomenon.


Page 12: Signal Space

Vector Spaces I

Linear Vector Space

A linear vector space S over a set of scalars R is a collection of vectors, together with an additive operation "+" and a scalar multiplication "." such that:

1 If x and y ∈ S then x + y ∈ S

2 There is an additive identity 0 such that x + 0 = 0 + x = x

3 For every x ∈ S there is an additive inverse y such that x + y = 0

4 (x + y) + z = x + (y + z)

5 For a and b ∈ R, a.x ∈ S, a.(b.x) = (a.b).x, (a + b).x = a.x + b.x, a.(x + y) = a.x + a.y

6 There is a multiplicative identity "1" such that 1.x = x, and a scalar 0 ∈ R with 0.x = 0


Page 13: Signal Space

Vector Spaces II

Some Definitions

Let S be a vector space. If V ⊂ S is a subset such that V itself is a vector space, then V is a subspace of S.

This notion of subspaces will be useful when we deal with Hilbert spaces and projections.


Page 14: Signal Space

Vector Spaces III

Signals as vectors

Under some simple assumptions, we can treat signals as vectors

A signal x(t) can be considered as an infinite-sized vector

Similarly, a sequence x[n] can be considered to be an infinitely long vector

Some issues arise with basis signals (convergence etc)

No infinite set of basis signals can span every possible signal x(t) - hence the need for the Dirichlet conditions in Fourier representations


Page 15: Signal Space

Normed Vector Spaces I

Norms

For vector spaces, a notion of length is natural. For any x ∈ S, a real-valued ‖x‖ is a "norm" if:

1 ‖x‖ is real and ≥ 0

2 ‖x‖ = 0 iff x = 0

3 ‖cx‖ = |c| ‖x‖

4 ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality)


Page 16: Signal Space

Normed Vector Spaces II

Common Norms

l1 norm: ‖x‖1 = ∑_{i=1}^{M} |x_i|  &  ‖x(t)‖1 = ∫_a^b |x(t)| dt

lp norm: ‖x‖_p = (∑_{i=1}^{M} |x_i|^p)^{1/p}  &  ‖x(t)‖_p = (∫_a^b |x(t)|^p dt)^{1/p}

l∞ norm: ‖x‖_∞ = max_i |x_i|  &  ‖x(t)‖_∞ = sup_{[a,b]} |x(t)|

All of these satisfy all conditions of a norm. The norm used depends on the application.
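A rough numerical sketch (NumPy; made-up signal on [a, b] = [0, 1]; the integrals are approximated by Riemann sums) of the signal norms above.

```python
import numpy as np

t, dt = np.linspace(0.0, 1.0, 10001, retstep=True)
x = np.sin(2 * np.pi * t) + 0.5               # example signal on [0, 1]

l1 = np.sum(np.abs(x)) * dt                   # approximates the integral of |x(t)|
l2 = np.sqrt(np.sum(np.abs(x) ** 2) * dt)     # square root of the (approximate) energy
linf = np.max(np.abs(x))                      # sup of |x(t)| over the sample grid

print(l1, l2, linf)
```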


Page 17: Signal Space

Normed Vector Spaces III

Norms for Matrices

The p-th (induced) norm of an m × n matrix A:

‖A‖_p = max_{x≠0} ‖Ax‖_p / ‖x‖_p = max_{x≠0} ‖A (x/‖x‖_p)‖_p

When p = 2, the norm is referred to as the "spectral norm":

‖A‖2 = √(λ_max(A^H A))

The Frobenius norm of an m × n matrix:

‖A‖_F = (∑_{i=1}^{m} ∑_{j=1}^{n} |a_ij|²)^{1/2} = (tr(A^H A))^{1/2}

It can be shown that ‖A‖2 ≤ ‖A‖_F.
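A small NumPy check (the matrix is an arbitrary example) of the two matrix norms, confirming the inequality ‖A‖2 ≤ ‖A‖F and agreement with np.linalg.norm.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, -1.0]])

gram = A.conj().T @ A
spectral = np.sqrt(np.max(np.linalg.eigvalsh(gram)))   # sqrt of the largest eigenvalue of A^H A
frobenius = np.sqrt(np.trace(gram).real)               # sqrt of tr(A^H A)

print(spectral, np.linalg.norm(A, 2))                  # same value
print(frobenius, np.linalg.norm(A, 'fro'))             # same value
print(spectral <= frobenius)                           # True
```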


Page 18: Signal Space

Normed Vector Spaces IV

Normed Linear Space

A normed linear space is a pair (S, ‖·‖) - a vector space with a norm defined on it.

A space is said to be complete if every sequence of points that get arbitrarily close to one another (a Cauchy sequence) converges to a limit within the space. Example: the set of rational numbers is not complete, since √2 (the limit of a sequence of rationals) is not in the set.

A complete normed linear space is referred to as a "Banach space".


Page 19: Signal Space

Inner Product I

Definition and Properties

For a vector space S over the scalars R, the inner product 〈·, ·〉 : S × S → R satisfies:

1 〈x, y〉 = (〈y, x〉)*

2 〈cx, y〉 = c〈x, y〉

3 〈x + y, z〉 = 〈x, z〉 + 〈y, z〉

4 〈x, x〉 ≥ 0, with 〈x, x〉 = 0 iff x = 0


Page 20: Signal Space

Inner Product II

Hilbert Space

A complete normed linear space with an inner product (with the norm being the induced norm) is referred to as a Hilbert space.

Orthogonal Subspaces

Let S be a vector space, and let V and W be subspaces of S. V and W are orthogonal if every vector in V is orthogonal to every vector in W.


Page 21: Signal Space

Inner Product III

Examples

For signals, 〈x(t), y(t)〉 = ∫_a^b x(t) y*(t) dt

The induced norm is therefore ‖x(t)‖² = 〈x(t), x(t)〉 = ∫_a^b |x(t)|² dt = E_x

A matched filter operation can be written as an inner product: y(T) = ∫_0^T x(τ) h(T − τ) dτ can be thought of as the inner product 〈x(t), h(T − t)〉

An FIR filter y[n] = ∑_{l=0}^{M−1} h[l] x[n − l] can be viewed as an inner product y[n] = h^H x[n] = 〈x[n], h〉, where h = [h[0], …, h[M − 1]]^T and x[n] = [x[n], x[n − 1], …, x[n − M + 1]]^T (a sketch follows below)

For matrices A and B, 〈A, B〉 = tr(B^H A)
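A minimal sketch of the FIR case (made-up real-valued filter and input; NumPy assumed): one output sample is computed both as the inner product h^H x[n] and from ordinary convolution.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                # h[0], ..., h[M-1] with M = 3
x = np.array([1.0, -1.0, 2.0, 0.5, 3.0])     # input sequence x[0], ..., x[4]

n = 4
x_vec = x[n::-1][:len(h)]                    # [x[n], x[n-1], ..., x[n-M+1]]
y_inner = np.dot(h.conj(), x_vec)            # h^H x[n], i.e. <x[n], h> for this real example

y_conv = np.convolve(x, h)[n]                # the same sample from the convolution sum
print(y_inner, y_conv)                       # both 2.05
```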


Page 22: Signal Space

Inner Product IV

Projection Theorem

Let S be a Hilbert space, and V a subspace of S. Then for every vector x ∈ S, there exists a unique vector v_x ∈ V that is closest to x; ||x − v_x|| is minimized precisely when x − v_x is orthogonal to V. This theorem plays a fundamental role in communications, statistical signal processing and many other areas.
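A least-squares sketch of the theorem (made-up numbers; NumPy assumed): x is projected onto the subspace V spanned by the columns of an example matrix B, and the residual x − v_x is checked to be orthogonal to V.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                        # columns span a 2D subspace V of R^3
x = np.array([1.0, 2.0, 3.0])

coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)    # coordinates of the closest point in V
v_x = B @ coeffs                                  # the projection of x onto V

print(v_x)
print(B.T @ (x - v_x))                            # approximately [0, 0]: residual orthogonal to V
```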


Page 23: Signal Space

Inner Product V

Weighted Inner Product

〈x, y〉_W = y^H W x

For the induced norm to be proper, W should be positive definite.

For signals, 〈x(t), y(t)〉_{w(t)} = ∫_a^b x(t) w(t) y*(t) dt (what are the constraints on w(t)?)

In an M-ary communication system with s_m being the coordinates of the m-th message signal s_m(t), the sufficient statistics are r = s_m + n. If the noise is colored Gaussian, the likelihood function will be:

f(r | s = s_m) ∝ exp( −(1/2) (r − s_m)^H R^{−1} (r − s_m) )

where R is the covariance matrix of the noise n.
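An illustrative sketch (made-up covariance and vectors; NumPy assumed) of the weighted inner product: with W = R⁻¹, the weighted squared norm of the error r − s_m is exactly the quadratic form inside the Gaussian exponent above.

```python
import numpy as np

R = np.array([[2.0, 0.5],
              [0.5, 1.0]])             # example noise covariance (positive definite)
W = np.linalg.inv(R)                   # weight matrix W = R^{-1}

r = np.array([1.2, -0.3])              # received coordinates (illustrative)
s_m = np.array([1.0, 0.0])             # coordinates of one candidate message
e = r - s_m

weighted_norm_sq = e.conj() @ W @ e    # <e, e>_W = (r - s_m)^H R^{-1} (r - s_m)
print(weighted_norm_sq)                # the quantity multiplied by -1/2 in the exponent
```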


Page 24: Signal Space

Inner Product VI

Cauchy-Schwarz Inequality

|〈x, y〉| ≤ ‖x‖ ‖y‖

with equality iff y = αx; it is defined similarly for signals (proof omitted here). Using this, the angle between real x and y can be understood:

cos(θ) = 〈x, y〉 / (‖x‖2 ‖y‖2)

For complex vectors x and y, we use:

cos(θ) = ℜ{〈x, y〉} / (‖x‖2 ‖y‖2)


Page 25: Signal Space

Inner Product VII

Signals as Points in Space

1 x(t) and y(t) are "orthogonal" if 〈x(t), y(t)〉 = 0

2 Commonly used signal representations use an "orthogonal basis"

3 The Fourier series uses e^{j2πkt/T}, which are orthogonal over t ∈ [0, T] but of energy T

4 The Fourier transform uses e^{jωt}, t ∈ (−∞, ∞)

5 sin(ωt + φ) can be written as a linear combination of sin(ωt) and cos(ωt). What is the angle between sin(ωt + φ) and cos(ωt)? (A numerical answer is sketched after this list.)

6 We can consider cos(ωt) and −sin(ωt) as a basis for signals of the form sin(ωt + φ), which are points in this space
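A numerical answer to the question in item 5 (a sketch with an assumed period T = 1 and φ = 30°; the integrals are approximated by Riemann sums): the inner-product angle between sin(ωt + φ) and cos(ωt) over one period comes out to 90° − φ.

```python
import numpy as np

T = 1.0
w = 2 * np.pi / T
phi = np.deg2rad(30.0)

t, dt = np.linspace(0.0, T, 100001, retstep=True)
x = np.sin(w * t + phi)
y = np.cos(w * t)

inner = np.sum(x * y) * dt                                         # <x(t), y(t)> over one period
cos_theta = inner / np.sqrt(np.sum(x**2) * dt * np.sum(y**2) * dt)
print(np.degrees(np.arccos(cos_theta)))                            # about 60 degrees = 90 - phi
```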


Page 26: Signal Space

Inner Product VIII

Signals as Points in Space

1 Similarly, (almost all) periodic signals x(t) can be considered as points in the space spanned by e^{j2πkt/T} for k = −∞, …, ∞

2 Similarly, (almost all) aperiodic signals x(t) can be considered as points in the space spanned by e^{jωt} for all ω

3 Parseval's theorems should be understood in this context:

∫_0^T |x(t)|² dt = (1/T) ∑_{k=−∞}^{∞} |a_k|²   (FS)

∫_{−∞}^{∞} |x(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |X(jω)|² dω   (FT)


Page 27: Signal Space

Inner Product IX

Expectation - an Inner Product

The inner product of real random variables X and Y with joint pdf f_XY(x, y) is defined as:

〈X, Y〉 = E{XY} = ∫∫ x y f_XY(x, y) dx dy

which is a weighted inner product!

All geometric notions discussed so far extend to random variables with a minor change in terminology (cos(θ) will be replaced by ρ, the correlation coefficient, for example).
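A Monte Carlo sketch (made-up joint distribution; NumPy assumed) of E{XY} as an inner product, with ρ playing the role of cos(θ) for these zero-mean random variables.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
X = z + 0.3 * rng.standard_normal(100_000)        # two correlated, zero-mean random variables
Y = 0.8 * z + 0.6 * rng.standard_normal(100_000)

inner = np.mean(X * Y)                            # sample estimate of <X, Y> = E{XY}
rho = inner / np.sqrt(np.mean(X ** 2) * np.mean(Y ** 2))
print(inner, rho)                                 # rho is the "cos(theta)" between X and Y
```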
