Page 1: ELEC6111: Detection and Estimation Theory
Minimax Hypothesis Testing

In deriving the Bayes decision rule, we assumed that we know both the a priori probabilities $\pi_0$ and $\pi_1$ as well as the likelihoods $p(y|H_0)$ and $p(y|H_1)$. This means that we have both the knowledge of the mechanism generating the state of nature and the mechanism affecting our observations (our measurements about the state of nature). It is, however, possible that we may not have access to all this information. For example, we may not know the a priori probabilities. In such a case, the Bayes decision rule is not a good rule, since it can only be derived for a given a priori probability.

An alternative to Bayes hypothesis testing, in this case, is minimax hypothesis testing. The minimax decision rule minimizes the maximum possible risk, i.e., it minimizes

$\max\{R_0(\delta), R_1(\delta)\}$

over all decision rules $\delta$.

Let's look at $r(\pi_0, \delta)$, i.e., the overall risk for a decision rule $\delta$ when the a priori probability is $\pi_0 \in [0, 1]$. It can be written as

$r(\pi_0, \delta) = \pi_0\, R_0(\delta) + (1 - \pi_0)\, R_1(\delta).$
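This decomposition is easy to check numerically. A minimal Python sketch, assuming uniform costs (so $R_0$ is the false-alarm probability and $R_1$ the miss probability) and an illustrative Gaussian threshold rule; the values $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$, $\tau = 0.3$ are made up for illustration:

```python
from statistics import NormalDist

nd = NormalDist()                   # standard normal N(0, 1)
Q = lambda x: 1.0 - nd.cdf(x)       # Gaussian tail function Q(x)

mu0, mu1, sigma, tau = 0.0, 1.0, 1.0, 0.3   # fixed rule: decide H1 iff y >= tau
R0 = Q((tau - mu0) / sigma)         # R0(delta): false-alarm probability (uniform costs)
R1 = 1.0 - Q((tau - mu1) / sigma)   # R1(delta): miss probability

def r(pi0):
    """Overall risk r(pi0, delta) = pi0 * R0(delta) + (1 - pi0) * R1(delta)."""
    return pi0 * R0 + (1.0 - pi0) * R1
```

Since $r(\pi_0, \delta)$ is linear in $\pi_0$, its values at the endpoints are $r(0, \delta) = R_1(\delta)$ and $r(1, \delta) = R_0(\delta)$.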

Page 2: Minimax Hypothesis Testing

Note that, for a given decision rule $\delta$, as $\pi_0$ varies from 0 to 1, $r(\pi_0, \delta)$ goes linearly from $r(0, \delta) = R_1(\delta)$ to $r(1, \delta) = R_0(\delta)$. Therefore, for a given decision rule $\delta$, the maximum value of $r(\pi_0, \delta)$ as $\pi_0$ varies over the interval $[0, 1]$ occurs either at $\pi_0 = 0$ or at $\pi_0 = 1$, and it is $\max\{R_0(\delta), R_1(\delta)\}$. So minimizing $\max\{R_0(\delta), R_1(\delta)\}$ is equivalent to minimizing $\max_{0 \le \pi_0 \le 1} r(\pi_0, \delta)$.

Thus, the minimax decision rule solves

$\min_\delta \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta).$

Let $\delta_{\pi_0}$ denote the optimum (Bayes) decision rule for the a priori probability $\pi_0$. Denote the corresponding minimum Bayes risk by $V(\pi_0)$, i.e., $V(\pi_0) = r(\pi_0, \delta_{\pi_0})$. It is easy to show that $V(\pi_0)$ is a continuous concave function of $\pi_0$ for $\pi_0 \in [0, 1]$ and has the end points $V(0) = C_{11}$ and $V(1) = C_{00}$.
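The concavity and the end points can be checked numerically. A minimal sketch, assuming the Gaussian measurement example with uniform costs (so $C_{00} = C_{11} = 0$) and illustrative values $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$; searching over threshold rules is sufficient for this example:

```python
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1.0 - nd.cdf(x)       # Gaussian tail function
mu0, mu1, sigma = 0.0, 1.0, 1.0     # illustrative, made-up values

def V(pi0, taus=[t / 100.0 for t in range(-500, 600)]):
    """Minimum Bayes risk at prior pi0, minimized over threshold rules."""
    return min(pi0 * Q((t - mu0) / sigma)
               + (1 - pi0) * (1.0 - Q((t - mu1) / sigma)) for t in taus)

# End points: with uniform costs V(0) = V(1) = 0 (up to grid resolution).
# Concavity (midpoint test): V((a+b)/2) >= (V(a) + V(b)) / 2.
```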

Page 3: Minimax Hypothesis Testing

The following figure shows a typical graph of $V(\pi_0)$ and $r(\pi_0, \delta)$. Let's draw a tangent to $V(\pi_0)$ parallel to $r(\pi_0, \delta)$, and denote this line by $r(\pi_0, \delta_{\pi_0'})$.

Page 4: Minimax Hypothesis Testing

Since $r(\pi_0, \delta_{\pi_0'})$ lies entirely below the line $r(\pi_0, \delta)$, it has a lower maximum compared to $r(\pi_0, \delta)$. Also note that since it touches $V(\pi_0)$ at $\pi_0 = \pi_0'$, then $\delta_{\pi_0'}$ is the minimum-risk (Bayes) rule for a priori probability $\pi_0'$. Since for any $\pi_0 \in [0, 1]$ we can draw a tangent to $V(\pi_0)$ and find the minimum-risk rule as a Bayes rule, it is clear that the minimax decision rule is the Bayes rule for the value of $\pi_0$ that maximizes $V(\pi_0)$. Denoting this point by $\pi_L$, we note that at this point,

$\max\{R_0(\delta_{\pi_L}), R_1(\delta_{\pi_L})\} = R_0(\delta_{\pi_L}) = R_1(\delta_{\pi_L}).$

Page 5: Minimax Hypothesis Testing

Proposition: The Minimax Test

Let $\pi_L$ be an a priori probability that maximizes $V(\pi_0)$ and is such that either $\pi_L = 0$, or $\pi_L = 1$, or $R_0(\delta_{\pi_L}) = R_1(\delta_{\pi_L})$. Then $\delta_{\pi_L}$ is a minimax rule.

Page 6: Minimax Hypothesis Testing

Proof

Let $R_0(\delta_{\pi_L}) = R_1(\delta_{\pi_L})$. Then $r(\pi_0, \delta_{\pi_L})$ does not depend on $\pi_0$, so for any $\pi_0$ we have

$r(\pi_0, \delta_{\pi_L}) = r(\pi_L, \delta_{\pi_L}) = \max_{0 \le \pi_0 \le 1} \min_\delta r(\pi_0, \delta).$

So, we have

$\min_\delta \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta) \le \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta_{\pi_L}) = \max_{0 \le \pi_0 \le 1} \min_\delta r(\pi_0, \delta).$

Also, for each $\delta$ we have

$\max_{0 \le \pi_0 \le 1} r(\pi_0, \delta) \ge \max_{0 \le \pi_0 \le 1} \min_{\delta'} r(\pi_0, \delta').$

This implies that

$\min_\delta \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta) \ge \max_{0 \le \pi_0 \le 1} \min_\delta r(\pi_0, \delta).$

Combining the two inequalities, we get

$\min_\delta \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta) = \max_{0 \le \pi_0 \le 1} \min_\delta r(\pi_0, \delta).$

Therefore,

$r(\pi_L, \delta_{\pi_L}) = \min_\delta \max_{0 \le \pi_0 \le 1} r(\pi_0, \delta).$

That is, $\delta_{\pi_L}$ is the minimax rule.
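As a sanity check of this result, a brief numerical sketch for the Gaussian example with uniform costs (illustrative values $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$, not from the slides): among threshold rules, the one minimizing the maximum conditional risk is the Bayes rule for $\pi_0 = 1/2$, whose threshold is the midpoint $(\mu_0 + \mu_1)/2$:

```python
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1.0 - nd.cdf(x)       # Gaussian tail function
mu0, mu1, sigma = 0.0, 1.0, 1.0     # illustrative, made-up values

def max_risk(tau):
    """max{R0, R1} for the threshold rule: decide H1 iff y >= tau."""
    R0 = Q((tau - mu0) / sigma)         # false-alarm probability
    R1 = 1.0 - Q((tau - mu1) / sigma)   # miss probability
    return max(R0, R1)

taus = [t / 100.0 for t in range(-200, 301)]
best = min(taus, key=max_risk)      # minimizer of the maximum conditional risk
```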

Page 7: Minimax Hypothesis Testing

Discussion

By definition, $V(\pi_0') = r(\pi_0', \delta_{\pi_0'})$. So, for every $\pi_0 \in [0, 1]$, we have

$r(\pi_0, \delta_{\pi_0'}) \ge V(\pi_0)$ and $r(\pi_0', \delta_{\pi_0'}) = V(\pi_0').$

Since $r(\pi_0, \delta_{\pi_0'})$, as a function of $\pi_0$, is a straight line, it has to be tangent to $V(\pi_0)$ at $\pi_0 = \pi_0'$. If $V(\pi_0)$ is differentiable at $\pi_0'$, we have

$V'(\pi_0') = \partial r(\pi_0, \delta_{\pi_0'})/\partial \pi_0 = R_0(\delta_{\pi_0'}) - R_1(\delta_{\pi_0'}).$

Now consider the case where $V(\pi_0)$ has an interior maximum at $\pi_L$ but is not differentiable at that point. In this case we define two decision rules,

$\delta_{\pi_L^-} = \lim_{\pi_0 \uparrow \pi_L} \delta_{\pi_0}$ and $\delta_{\pi_L^+} = \lim_{\pi_0 \downarrow \pi_L} \delta_{\pi_0}.$

The critical regions for these two decision rules are

$\Gamma_1^- = \{\, y : (1 - \pi_L)(C_{01} - C_{11})\, p(y|H_1) > \pi_L (C_{10} - C_{00})\, p(y|H_0) \,\}$ and

$\Gamma_1^+ = \{\, y : (1 - \pi_L)(C_{01} - C_{11})\, p(y|H_1) \ge \pi_L (C_{10} - C_{00})\, p(y|H_0) \,\}.$

Take a number $q \in [0, 1]$ and devise a decision rule $\tilde\delta_{\pi_L}$ that uses the decision rule $\delta_{\pi_L^-}$ with probability $q$ and uses $\delta_{\pi_L^+}$ with probability $1 - q$. This means that it decides $H_1$ if $y \in \Gamma_1^-$, decides $H_0$ if $y \in (\Gamma_1^+)^c$, and decides $H_1$ with probability $1 - q$ if $y$ is on the boundary $\Gamma_1^+ \setminus \Gamma_1^-$.

Page 8: Minimax Hypothesis Testing

Discussion

Note that the Bayes risk is not a function of $q$, so $r(\pi_L, \tilde\delta_{\pi_L}) = V(\pi_L)$, but the conditional risks do depend on $q$:

$\tilde R_j(\tilde\delta_{\pi_L}) = q\, R_j(\delta_{\pi_L^-}) + (1 - q)\, R_j(\delta_{\pi_L^+}), \qquad j = 0, 1.$

To achieve $\tilde R_0(\tilde\delta_{\pi_L}) = \tilde R_1(\tilde\delta_{\pi_L})$, we need to choose

$q = \dfrac{R_1(\delta_{\pi_L^+}) - R_0(\delta_{\pi_L^+})}{R_1(\delta_{\pi_L^+}) - R_0(\delta_{\pi_L^+}) + R_0(\delta_{\pi_L^-}) - R_1(\delta_{\pi_L^-})}.$

Note that $V'(\pi_L^{\pm}) = R_0(\delta_{\pi_L^{\pm}}) - R_1(\delta_{\pi_L^{\pm}})$, so we have

$q = \dfrac{V'(\pi_L^+)}{V'(\pi_L^+) - V'(\pi_L^-)}.$

This is called a randomized decision rule.
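A quick numerical check of the equalizing choice of $q$; the conditional risks of the two limiting rules below are hypothetical numbers, not derived from any particular model:

```python
# Hypothetical conditional risks of the limiting Bayes rules at a kink of V:
R0m, R1m = 0.40, 0.10   # R0(delta_{piL-}), R1(delta_{piL-})  (made up)
R0p, R1p = 0.15, 0.35   # R0(delta_{piL+}), R1(delta_{piL+})  (made up)

# Equalizing weight: solve q*R0m + (1-q)*R0p = q*R1m + (1-q)*R1p for q.
q = (R1p - R0p) / ((R1p - R0p) + (R0m - R1m))

# Randomized conditional risks R~_j = q*R_j(-) + (1-q)*R_j(+):
Rt0 = q * R0m + (1 - q) * R0p
Rt1 = q * R1m + (1 - q) * R1p
```

Here $V'(\pi_L^-) = 0.40 - 0.10 = 0.3$ and $V'(\pi_L^+) = 0.15 - 0.35 = -0.2$, so the derivative form gives $q = -0.2/(-0.2 - 0.3) = 0.4$, and both conditional risks of the randomized rule come out equal.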

Page 9: Example: Measurement with Gaussian Error

Consider the measurement with Gaussian error with uniform costs, i.e., $Y \sim \mathcal{N}(\mu_j, \sigma^2)$ under $H_j$, $j = 0, 1$, with $\mu_1 > \mu_0$.

The function $V(\pi_0)$ can be written as

$V(\pi_0) = \pi_0\, Q\!\left(\frac{\tau_{\pi_0} - \mu_0}{\sigma}\right) + (1 - \pi_0)\left[1 - Q\!\left(\frac{\tau_{\pi_0} - \mu_1}{\sigma}\right)\right],$

with

$\tau_{\pi_0} = \frac{\mu_0 + \mu_1}{2} + \frac{\sigma^2}{\mu_1 - \mu_0}\,\log\!\left(\frac{\pi_0}{1 - \pi_0}\right).$

We can find the rule making the conditional risks $R_0(\delta)$ and $R_1(\delta)$ equal by letting

$Q\!\left(\frac{\tau - \mu_0}{\sigma}\right) = 1 - Q\!\left(\frac{\tau - \mu_1}{\sigma}\right)$

and solving for $\tau$.

Page 10: Example: Measurement with Gaussian Error

We can solve this by inspection and get

$\tau = \frac{\mu_0 + \mu_1}{2}, \qquad \text{i.e.,} \quad \pi_L = \frac{1}{2}.$

So, the minimax decision rule is

$\delta_L(y) = \begin{cases} 1 & \text{if } y \ge (\mu_0 + \mu_1)/2 \\ 0 & \text{if } y < (\mu_0 + \mu_1)/2. \end{cases}$

[Figure: conditional risks for measurement with Gaussian error.]
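A minimal numeric check that the midpoint threshold equalizes the two conditional risks; the values $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$ are illustrative:

```python
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1.0 - nd.cdf(x)        # Gaussian tail function
mu0, mu1, sigma = 0.0, 1.0, 1.0      # illustrative, made-up values

tau = (mu0 + mu1) / 2.0              # minimax threshold (pi_L = 1/2)
R0 = Q((tau - mu0) / sigma)          # false-alarm probability
R1 = 1.0 - Q((tau - mu1) / sigma)    # miss probability
# By the symmetry Q(-x) = 1 - Q(x), the two conditional risks coincide.
```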

Page 11: Neyman-Pearson Hypothesis Testing

In Bayes hypothesis testing as well as minimax, we are concerned with the average risk, i.e., the conditional risk averaged over the two hypotheses. The Neyman-Pearson test, on the other hand, recognizes the asymmetry between the two hypotheses. It tries to minimize one of the two conditional risks with the other conditional risk fixed (or bounded). In testing the two hypotheses $H_0$ and $H_1$, the following situations may arise:

- $H_0$ is true but $H_1$ is decided. This is called a type I error or a false alarm. The terminology comes from the radar application, where $H_0$ represents "no target" and $H_1$ is the case of "target present". The probability of this event is called the false-alarm probability (or false-alarm rate) and is denoted $P_F(\delta)$.

- $H_1$ is true but $H_0$ is decided. This is called a type II error or a miss. The probability of this event is called the miss probability and is denoted $P_M(\delta)$.

- $H_0$ is true and $H_0$ is decided. The probability of this event is $1 - P_F(\delta)$.

- $H_1$ is true and $H_1$ is decided. This case represents a detection. The detection probability is $P_D(\delta) = 1 - P_M(\delta)$.

In testing $H_0$ versus $H_1$, one has to trade off between the probabilities of the two types of errors. The Neyman-Pearson criterion makes this tradeoff by bounding the probability of false alarm and maximizing the detection probability (equivalently, minimizing the miss probability) subject to this constraint, i.e., the Neyman-Pearson test is

$\max_\delta P_D(\delta) \quad \text{subject to} \quad P_F(\delta) \le \alpha,$

where $\alpha$ is the bound on the false-alarm rate. It is called the level of the test.

Page 12: Neyman-Pearson Hypothesis Testing

For obtaining a general solution to the Neyman-Pearson test, we need to define a randomized decision rule. We define the randomized test

$\tilde\delta(y) = \begin{cases} 1 & \text{if } L(y) > \eta \\ q & \text{if } L(y) = \eta \\ 0 & \text{if } L(y) < \eta, \end{cases}$

where $L(y) = p(y|H_1)/p(y|H_0)$ is the likelihood ratio and $\eta$ is the threshold.

While in a non-randomized rule $\delta(y)$ gives the decision, in a randomized rule $\tilde\delta(y)$ gives the probability of deciding $H_1$. Then we have

$P_F(\tilde\delta) = E_0\{\tilde\delta(Y)\} = \int \tilde\delta(y)\, p(y|H_0)\, dy,$

where $E_0\{\cdot\}$ is expectation under hypothesis $H_0$. Also,

$P_D(\tilde\delta) = E_1\{\tilde\delta(Y)\} = \int \tilde\delta(y)\, p(y|H_1)\, dy.$

Page 13: Neyman-Pearson Lemma

Consider a hypothesis pair $H_0$ and $H_1$:

$H_0: Y \sim P_0$ versus $H_1: Y \sim P_1,$

where $P_j$ has density $p_j(y) = p(y|H_j)$ for $j = 0, 1$. For $\alpha > 0$, the following statements are true:

1. Optimality: Let $\tilde\delta$ be any decision rule satisfying $P_F(\tilde\delta) \le \alpha$, and let $\tilde\delta'$ be any decision rule of the form

$\tilde\delta'(y) = \begin{cases} 1 & \text{if } p(y|H_1) > \eta\, p(y|H_0) \\ \gamma(y) & \text{if } p(y|H_1) = \eta\, p(y|H_0) \\ 0 & \text{if } p(y|H_1) < \eta\, p(y|H_0), \end{cases} \qquad \text{(A)}$

where $\eta \ge 0$ and $0 \le \gamma(y) \le 1$ are such that $P_F(\tilde\delta') = \alpha$. Then $P_D(\tilde\delta') \ge P_D(\tilde\delta)$. This means that any size-$\alpha$ decision rule of form (A) is a Neyman-Pearson rule.

2. Existence: For any $\alpha \in (0, 1)$ there is a decision rule, $\tilde\delta_{NP}$, of form (A) with $\gamma(y) \equiv \gamma_0$ (a constant) for which $P_F(\tilde\delta_{NP}) = \alpha$.

3. Uniqueness: Suppose that $\tilde\delta''$ is any Neyman-Pearson rule of size $\alpha$ for $H_0$ versus $H_1$. Then $\tilde\delta''$ must be of the form (A).

Page 14: Neyman-Pearson Lemma (Proof)

1. Note that, by definition, we always have

$[\tilde\delta'(y) - \tilde\delta(y)]\,[p(y|H_1) - \eta\, p(y|H_0)] \ge 0$

(why? where $p(y|H_1) - \eta\, p(y|H_0) > 0$, form (A) gives $\tilde\delta'(y) = 1 \ge \tilde\delta(y)$; where it is negative, $\tilde\delta'(y) = 0 \le \tilde\delta(y)$).

So, we have

$\int [\tilde\delta'(y) - \tilde\delta(y)]\,[p(y|H_1) - \eta\, p(y|H_0)]\, dy \ge 0.$

Expanding the above expression, we get

$\int \tilde\delta'(y)\, p(y|H_1)\, dy - \int \tilde\delta(y)\, p(y|H_1)\, dy \ge \eta \int \tilde\delta'(y)\, p(y|H_0)\, dy - \eta \int \tilde\delta(y)\, p(y|H_0)\, dy.$

Applying the expressions for the detection probability and the false-alarm rate, we have

$P_D(\tilde\delta') - P_D(\tilde\delta) \ge \eta\,[P_F(\tilde\delta') - P_F(\tilde\delta)] = \eta\,[\alpha - P_F(\tilde\delta)] \ge 0.$

2. Let $\eta_0$ be the smallest number such that (look at the figure on the next slide)

$P_0[\,p(Y|H_1) > \eta_0\, p(Y|H_0)\,] \le \alpha.$

Then, if $P_0[\,p(Y|H_1) > \eta_0\, p(Y|H_0)\,] < \alpha$, choose

$\gamma_0 = \frac{\alpha - P_0[\,p(Y|H_1) > \eta_0\, p(Y|H_0)\,]}{P_0[\,p(Y|H_1) = \eta_0\, p(Y|H_0)\,]}.$

Otherwise, choose $\gamma_0$ arbitrarily. Consider a Neyman-Pearson decision rule, $\tilde\delta_{NP}$, with $\eta = \eta_0$ and $\gamma(y) \equiv \gamma_0$. For this decision rule, the false-alarm rate is

$P_F(\tilde\delta_{NP}) = E_0\{\tilde\delta_{NP}(Y)\} = P_0[\,p(Y|H_1) > \eta_0\, p(Y|H_0)\,] + \gamma_0\, P_0[\,p(Y|H_1) = \eta_0\, p(Y|H_0)\,] = \alpha.$

Page 15: Neyman-Pearson Lemma (Proof)

[Figure: $P_0[\,p(Y|H_1) > \eta\, p(Y|H_0)\,]$ as a function of $\eta$, used to choose $\eta_0$.]

3. See the text.

Page 16: Neyman-Pearson Lemma (Example): Measurement with Gaussian Error

For this problem, we have

$P_0[\,p(Y|H_1) > \eta\, p(Y|H_0)\,] = P_0[L(Y) > \eta] = P_0(Y > \tau') = 1 - \Phi\!\left(\frac{\tau' - \mu_0}{\sigma}\right) = Q\!\left(\frac{\tau' - \mu_0}{\sigma}\right),$

where

$\tau' = \frac{\sigma^2}{\mu_1 - \mu_0}\,\log\eta + \frac{\mu_0 + \mu_1}{2}.$

Any value of $\alpha \in (0, 1)$ can be achieved by choosing $\eta_0$ such that $Q\big((\tau' - \mu_0)/\sigma\big) = \alpha$, i.e.,

$\tau' = \mu_0 + \sigma\, Q^{-1}(\alpha).$

Since $P_0(Y = \tau') = 0$, the choice of $\gamma_0$ is arbitrary and we can choose $\gamma_0 = 1$. So we have

$\tilde\delta_{NP}(y) = \begin{cases} 1 & \text{if } y \ge \tau' \\ 0 & \text{if } y < \tau'. \end{cases}$
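A minimal sketch of the resulting threshold computation, using the standard-normal inverse CDF from the Python standard library; the values $\mu_0 = 0$, $\sigma = 1$, $\alpha = 0.1$ are illustrative:

```python
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1.0 - nd.cdf(x)          # Gaussian tail function
Qinv = lambda a: nd.inv_cdf(1.0 - a)   # inverse tail: Q(Qinv(a)) = a

mu0, sigma, alpha = 0.0, 1.0, 0.1      # illustrative, made-up values
tau = mu0 + sigma * Qinv(alpha)        # decide H1 iff y >= tau
# The false-alarm rate of this rule is Q((tau - mu0)/sigma) = alpha.
```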

Page 17: Neyman-Pearson Lemma (Example): Measurement with Gaussian Error

The detection probability for $\tilde\delta_{NP}$ is

$P_D(\tilde\delta_{NP}) = E_1\{\tilde\delta_{NP}(Y)\} = P_1(Y \ge \tau') = Q\!\left(\frac{\tau' - \mu_1}{\sigma}\right) = Q\!\left(Q^{-1}(\alpha) - \frac{\mu_1 - \mu_0}{\sigma}\right) = Q\big(Q^{-1}(\alpha) - d\big),$

where $d = (\mu_1 - \mu_0)/\sigma$.