
Stochastic Processes

Definition

A random variable is a number $X(\zeta)$ assigned to every outcome $\zeta$ of an experiment.

A stochastic process is the assignment of a function of $t$, $X(t,\zeta)$, to each outcome of an experiment.

The set of functions $\{X(t,\zeta_1), X(t,\zeta_2), \ldots, X(t,\zeta_N)\}$ corresponding to the $N$ outcomes of an experiment is called an ensemble, and each member $X(t,\zeta_i)$ is called a sample function of the stochastic process.

A common convention in the notation describing stochastic processes is to write the sample functions as functions of $t$ only, and to indicate the stochastic process by $X(t)$ instead of $X(t,\zeta)$ and any particular sample function by $X_i(t)$ instead of $X(t,\zeta_i)$.

Definition

[Figure: an ensemble of sample functions, with one member labeled as a sample function]

The values of $X(t)$ at a particular time $t_1$ define a random variable denoted $X(t_1)$ or just $X_1$.

Example of a Stochastic Process

Suppose we place a temperature sensor at every airport control tower in the world and record the temperature at noon every day for a year. Then we have a discrete-time, continuous-value (DTCV) stochastic process.

Example of a Stochastic Process

Suppose there is a large number of people, each flipping a fair coin every minute. If we assign the value 1 to a head and the value 0 to a tail, we have a discrete-time, discrete-value (DTDV) stochastic process.
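A minimal simulation sketch of such an ensemble, assuming NumPy and illustrative values for the number of people and the number of minutes (neither is specified in the example):

```python
import numpy as np

rng = np.random.default_rng(0)

n_people = 1000    # assumed ensemble size (number of coin flippers)
n_minutes = 60     # assumed record length, one flip per minute

# Each row is one sample function of the DTDV process:
# 1 for a head, 0 for a tail, flipped independently every minute.
ensemble = rng.integers(0, 2, size=(n_people, n_minutes))

# The values at one fixed time (one column) form a random variable over the ensemble.
X_at_minute_10 = ensemble[:, 10]
print("estimated P(head) at minute 10:", X_at_minute_10.mean())
```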

Continuous-Value vs. Discrete-Value

A continuous-value (CV) random process has a pdf with no impulses. A discrete-value (DV) random process has a pdf consisting only of impulses. A mixed random process has a pdf with impulses, but not just impulses.

Deterministic vs. Non-Deterministic

A random process is deterministic if a sample function can be described by a mathematical function such that its future values can be computed. The randomness is in the ensemble, not in the time functions. For example, let the sample functions be of the form

$X(t) = A\cos(2\pi f_0 t + \theta)$

and let the parameter $\theta$ be random over the ensemble but constant for any particular sample function.

All other random processes are non-deterministic.
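A sketch of drawing a few sample functions of such a deterministic random process, assuming an amplitude, a frequency, a time grid, and a phase $\theta$ uniform over $[0, 2\pi)$; all of these specific choices are illustrative, not part of the original example:

```python
import numpy as np

rng = np.random.default_rng(1)

A, f0 = 1.0, 5.0                  # assumed amplitude and frequency in Hz
t = np.arange(0.0, 1.0, 1e-3)     # assumed time grid for the sample functions
n_draws = 4                       # assumed number of sample functions to draw

# theta is random over the ensemble but constant within each sample function,
# so each X_i(t) = A*cos(2*pi*f0*t + theta_i) is fully predictable once theta_i is known.
theta = rng.uniform(0.0, 2 * np.pi, size=n_draws)
sample_functions = A * np.cos(2 * np.pi * f0 * t[None, :] + theta[:, None])

print(sample_functions.shape)     # (n_draws, number of time points)
```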

Stationarity

If all the multivariate statistical descriptors of a random process are not functions of time, the random process is said to be strict-sense stationary (SSS).

A random process is wide-sense stationary (WSS) if

$E(X(t_1))$ is independent of the choice of $t_1$

and

$E(X(t_1)X(t_2))$ depends only on the difference between $t_1$ and $t_2$.

Ergodicity

If all of the sample functions of a random process have the same statistical properties, the random process is said to be ergodic. The most important consequence of ergodicity is that ensemble moments can be replaced by time moments.

$E(X^n) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} X^n(t)\,dt$

Every ergodic random process is also stationary.

Measurement of Process Parameters

The mean value of an ergodic random process can be estimated by

$\hat{X} = \frac{1}{T}\int_{0}^{T} X(t)\,dt$

where $X(t)$ is a sample function of that random process. In practical situations, this function is usually not known. Instead, samples from it are known. Then the estimate of the mean value would be

$\hat{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$

where $X_i$ is a sample from $X(t)$.

Measurement of Process Parameters

To make a good estimate, the samples from the random process should be independent.
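A sketch of the sample-based mean estimate $\hat{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$, using an assumed test process (independent Gaussian samples with a known mean standing in for widely spaced samples of an ergodic process):

```python
import numpy as np

rng = np.random.default_rng(2)

true_mean = 3.0      # assumed mean of the test process
N = 500              # assumed number of (independent) samples

# Assumed stand-in for widely spaced, approximately independent samples X_i.
samples = true_mean + rng.normal(0.0, 1.0, size=N)

mean_estimate = samples.mean()   # (1/N) * sum of the X_i
print(f"estimated mean: {mean_estimate:.3f}   true mean: {true_mean}")
```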

Correlation Functions

The Correlation Function

If $X(t)$ is a sample function of one stochastic CT process and $Y(t)$ is a sample function from another stochastic CT process, and

$X_1 = X(t_1)$ and $Y_2 = Y(t_2)$,

then

$R_{XY}(t_1,t_2) = E(X_1 Y_2^*) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 y_2^*\, f_{XY}(x_1, y_2; t_1, t_2)\,dx_1\,dy_2$

is the correlation function relating X and Y. For stationary stochastic continuous-time processes this can be simplified to

$R_{XY}(\tau) = E(X(t)\,Y^*(t+\tau))$

If the stochastic process is also ergodic, then the time correlation function is

$\mathcal{R}_{XY}(\tau) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} X(t)\,Y^*(t+\tau)\,dt = \langle X(t)\,Y^*(t+\tau)\rangle = R_{XY}(\tau)$

Autocorrelation

If X and Y represent the same stochastic CT process, then the correlation function becomes the special case called autocorrelation.

$R_X(\tau) = E(X(t)\,X^*(t+\tau))$

For an ergodic stochastic process,

$\mathcal{R}_X(\tau) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} X(t)\,X^*(t+\tau)\,dt = \langle X(t)\,X^*(t+\tau)\rangle = R_X(\tau)$

Autocorrelation

$R_X(t,t) = E(X^2(t))$ is the mean-squared value of X.

For WSS stochastic continuous-time processes,

$R_X(0) = E(X^2(t))$, the mean-squared value of X,

and

$\mathcal{R}_X(0) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} X^2(t)\,dt = \langle X^2(t)\rangle$, the average signal power of X.

The Correlation Function

If $X[n]$ is a sample function of one stochastic DT process and $Y[n]$ is a sample function from another stochastic DT process, and

$X_1 = X[n_1]$ and $Y_2 = Y[n_2]$,

then

$R_{XY}[n_1,n_2] = E(X_1 Y_2^*) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 y_2^*\, f_{XY}(x_1, y_2; n_1, n_2)\,dx_1\,dy_2$

is the correlation function relating X and Y. For stationary stochastic DT processes this can be simplified to

$R_{XY}[m] = E(X[n]\,Y^*[n+m])$

If the stochastic DT process is also ergodic, then the time correlation function is

$\mathcal{R}_{XY}[m] = \lim_{N\to\infty} \frac{1}{2N}\sum_{n=-N}^{N-1} X[n]\,Y^*[n+m] = \langle X[n]\,Y^*[n+m]\rangle = R_{XY}[m]$

Autocorrelation

If X and Y represent the same stochastic DT process, then the correlation function becomes the special case called autocorrelation.

$R_X[m] = E(X[n]\,X^*[n+m])$

For an ergodic stochastic DT process,

$\mathcal{R}_X[m] = \lim_{N\to\infty} \frac{1}{2N}\sum_{n=-N}^{N-1} X[n]\,X^*[n+m] = \langle X[n]\,X^*[n+m]\rangle = R_X[m]$

Autocorrelation

$R_X[n,n] = E(X^2[n])$ is the mean-squared value of X.

For WSS stochastic discrete-time processes,

$R_X[0] = E(X^2[n])$, the mean-squared value of X,

and

$\mathcal{R}_X[0] = \lim_{N\to\infty} \frac{1}{2N}\sum_{n=-N}^{N-1} X^2[n] = \langle X^2[n]\rangle$, the average signal power of X.
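A finite-record sketch of the time autocorrelation for a DT process, checking that the lag-zero value matches the average signal power; the zero-mean white test process and the record length are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100_000                          # assumed record length (finite stand-in for N -> infinity)
x = rng.normal(0.0, 2.0, size=N)     # assumed zero-mean white DT process, variance 4

def time_autocorrelation(x, m):
    """Finite-record approximation of R_X[m] = <X[n] X[n+m]> for m >= 0."""
    return np.mean(x[: len(x) - m] * x[m:]) if m else np.mean(x * x)

print("R_X[0] (average signal power):", time_autocorrelation(x, 0))   # close to 4
print("R_X[1]:", time_autocorrelation(x, 1))                          # close to 0 for white noise
```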

Autocorrelation Example

Let $X(t)$ be ergodic and let each sample function be a sequence of pulses of width $T$ whose amplitudes are $A$ and zero with equal probability, and with uniformly distributed pulse positions. Let the pulse amplitudes be independent of each other. Find the autocorrelation.

Autocorrelation Example

For delays whose magnitude is greater than the pulse width, the pulse amplitudes are uncorrelated and therefore

$R_X(\tau) = E(X(t)\,X^*(t+\tau)) = E(X(t)\,X(t+\tau))$

$R_X(\tau) = E(X(t))\,E(X(t+\tau)) = (A/2)^2 = A^2/4, \quad |\tau| > T$

Autocorrelation Example

For delays whose magnitude is less than the pulse width, it is possible that the two times, separated by $\tau$, are in the same pulse. When $\tau$ is zero that probability is one, and as $|\tau|$ approaches the pulse width the probability approaches zero, linearly.

$P(t \text{ and } t+\tau \text{ are in the same pulse interval}) = \frac{T - |\tau|}{T}, \quad |\tau| < T$

When the two times are in the same pulse, there are two possible cases of $X(t)\,X(t+\tau)$, namely $A \cdot A$ and $0 \cdot 0$, both with probability 1/2.

$E(X(t)\,X(t+\tau)) = A^2 P(A) + 0^2 P(0) = A^2/2$

When the two times are not in the same pulse, there are four possible cases of $X(t)\,X(t+\tau)$, namely $A \cdot A$, $A \cdot 0$, $0 \cdot A$, $0 \cdot 0$, each with probability 1/4.

$E(X(t)\,X(t+\tau)) = A^2 P(A)P(A) + 0^2\, P(A)P(0) + 0^2\, P(0)P(A) + 0^2\, P(0)P(0) = A^2/4$

Autocorrelation Example

For delays of magnitude less than the pulse width, combining these two cases, the overall autocorrelation function is

$R_X(\tau) = P(\text{times are in same pulse})\, A^2/2 + P(\text{times are not in same pulse})\, A^2/4$

$= \frac{A^2}{2}\,\frac{T - |\tau|}{T} + \frac{A^2}{4}\left(1 - \frac{T - |\tau|}{T}\right) = \frac{A^2}{4}\,\frac{2T - |\tau|}{T}$

Together with the result for $|\tau| > T$,

$R_X(\tau) = \frac{A^2}{4}\begin{cases} \dfrac{2T - |\tau|}{T}, & |\tau| \le T \\ 1, & |\tau| > T \end{cases} \;=\; \frac{A^2}{4}\left[\operatorname{tri}\!\left(\frac{\tau}{T}\right) + 1\right]$
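A simulation sketch of this result. The pulse width, amplitude, time resolution, and record length below are assumed values; the time-average autocorrelation estimated from one long sample function should land close to $\frac{A^2}{4}\left[\operatorname{tri}(\tau/T) + 1\right]$.

```python
import numpy as np

rng = np.random.default_rng(4)

A, T = 2.0, 1.0        # assumed pulse amplitude and pulse width
dt = 0.02              # assumed time resolution of the simulated sample function
n_pulses = 50_000      # assumed record length, in pulses

# Pulse amplitudes are A or 0 with equal probability, independent from pulse to pulse.
# (The random pulse-position offset matters for the ensemble, not for this single long time average.)
samples_per_pulse = int(round(T / dt))
amps = A * rng.integers(0, 2, size=n_pulses)
x = np.repeat(amps, samples_per_pulse)

# Estimate the time-average autocorrelation at lags 0 .. 2T.
lags = np.arange(0, 2 * samples_per_pulse + 1)
R_est = np.array([np.mean(x[: len(x) - m] * x[m:]) if m else np.mean(x * x) for m in lags])

tau = lags * dt
tri = np.clip(1.0 - np.abs(tau) / T, 0.0, None)
R_theory = (A**2 / 4) * (tri + 1)
print("max |R_est - R_theory|:", np.max(np.abs(R_est - R_theory)))   # should be small
```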

Properties of Autocorrelation

Autocorrelation is an even function:

$R_X(\tau) = R_X(-\tau)$ or $R_X[m] = R_X[-m]$

The magnitude of the autocorrelation is never greater than its value at zero delay:

$|R_X(\tau)| \le R_X(0)$ or $|R_X[m]| \le R_X[0]$

If X has a non-zero expected value, then $R_X(\tau)$ or $R_X[m]$ will also have a non-zero constant component, and it will be the square of the expected value of X.

If X has a periodic component, then $R_X(\tau)$ or $R_X[m]$ will also, with the same period.

Properties of Autocorrelation

If is ergodic with zero mean and no periodic components

then

limRX

( ) = 0 or limm

RX

m = 0

X t( ){ }

Only autocorrelation functions for which

F R

X ( ){ } 0 for all f or F RX

m{ } 0 for all

are possible

A time shift of a function does not affect its autocorrelation

Autocovariance

$C_X(\tau) = R_X(\tau) - E^2(X)$ or $C_X[m] = R_X[m] - E^2(X)$

Autocovariance is similar to autocorrelation. Autocovariance is the autocorrelation of the time-varying part of a signal.

Measurement of Autocorrelation

The autocorrelation of a real-valued WSS stochastic CT process can be estimated by

$\hat{R}_X(\tau) = \frac{1}{T}\int_{0}^{T} X(t)\,X(t+\tau)\,dt, \quad 0 \le \tau \ll T$

But because the function $X(t)$ is usually unknown, it is much more likely to be estimated from samples:

$\hat{R}_X(nT_s) = \frac{1}{N-n}\sum_{k=0}^{N-n-1} x[k]\,x[k+n], \quad n = 0, 1, 2, 3, \ldots, M \ll N$

where the $x$'s are samples taken from the stochastic process X and the time between samples is $T_s$.
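A direct sketch of this sample-based estimator; the white Gaussian test samples and the parameter values below are assumptions used only to exercise it:

```python
import numpy as np

def autocorr_estimate(x, M):
    """R_hat(n*Ts) = (1/(N-n)) * sum_{k=0}^{N-n-1} x[k]*x[k+n], for n = 0..M."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    return np.array([np.dot(x[: N - n], x[n:]) / (N - n) for n in range(M + 1)])

# Assumed test data: zero-mean white samples with unit variance,
# for which R_X(0) = 1 and R_X(n*Ts) = 0 for n != 0.
rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=10_000)
print(autocorr_estimate(x, 5))
```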

Measurement of Autocorrelation

The expected value of the autocorrelation estimate is

$E\left(\hat{R}_X(nT_s)\right) = E\left(\frac{1}{N-n}\sum_{k=0}^{N-n-1} X[k]\,X[k+n]\right) = \frac{1}{N-n}\sum_{k=0}^{N-n-1} E(X[k]\,X[k+n])$

$E\left(\hat{R}_X(nT_s)\right) = \frac{1}{N-n}\sum_{k=0}^{N-n-1} R_X(nT_s) = R_X(nT_s)$

This is an unbiased estimator.

Crosscorrelation Properties

$R_{XY}(\tau) = R_{YX}(-\tau)$ or $R_{XY}[m] = R_{YX}[-m]$

$|R_{XY}(\tau)| \le \sqrt{R_X(0)\,R_Y(0)}$ or $|R_{XY}[m]| \le \sqrt{R_X[0]\,R_Y[0]}$

If two stochastic processes X and Y are statistically independent,

$R_{XY}(\tau) = E(X)\,E(Y^*) = R_{YX}(\tau)$ or $R_{XY}[m] = E(X)\,E(Y^*) = R_{YX}[m]$

If X is stationary CT and $\dot{X}$ is its time derivative,

$R_{X\dot{X}}(\tau) = \frac{d}{d\tau}\,R_X(\tau)$ and $R_{\dot{X}}(\tau) = -\frac{d^2}{d\tau^2}\,R_X(\tau)$

If $Z(t) = X(t) \pm Y(t)$ and X and Y are independent and at least one of them has a zero mean, then

$R_Z(\tau) = R_X(\tau) + R_Y(\tau)$
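A quick numerical check of this last property, assuming two independent, zero-mean white DT processes as stand-ins for X and Y and using finite time averages for the autocorrelations:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200_000                         # assumed record length

# Assumed independent, zero-mean test processes.
x = rng.normal(0.0, 1.0, size=N)    # R_X[0] = 1
y = rng.normal(0.0, 2.0, size=N)    # R_Y[0] = 4
z = x + y                           # Z = X + Y

def R(v, m):
    """Finite-record time-average autocorrelation at lag m >= 0."""
    return np.mean(v[: len(v) - m] * v[m:]) if m else np.mean(v * v)

for m in (0, 1, 2):
    print(m, round(R(z, m), 3), round(R(x, m) + R(y, m), 3))   # the two columns should nearly match
```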