
Statistics & Probability Letters 54 (2001) 283–290

Wherefore similar tests?

Arthur Cohen ∗, Harold B. Sackrowitz
Department of Statistics, Rutgers State University of NJ, 110 Frelinghuysen Rd., Hill Center Busch Campus,

Piscataway, NJ 08854-8019, USA

Received November 2000; received in revised form February 2001

Research supported by NSF Grant DMS-9618716.
∗ Corresponding author. Fax: +1-732-445-3428.

Abstract

Similarity of a test is often a necessary condition for a test to be unbiased (in particular for a test to be uniformly most powerful unbiased when such a test exists). Lehmann (Testing Statistical Hypotheses, 2nd Edition, Wiley, New York, 1986) describes the connection between similar tests and uniformly most powerful unbiased tests. The methods to achieve these properties as outlined in Lehmann are used extensively. In any case, an admissible similar test is frequently one that can be recommended for practical use. In some constrained parameter spaces however, we show that admissible similar tests sometimes completely ignore the constraints. In some of these cases we call such tests constraint insensitive. The tests seem not to be intuitive and perhaps should not be used. On the other hand, there are models with constrained parameter spaces where similar tests do take into account the constraints. In these cases the admissible test is called constraint sensitive. We offer a systematic approach that enables one to determine whether an admissible similar test is constraint insensitive or not. The approach is applied to three classes of models involving order restricted parameters. The models include testing for homogeneity of parameters, testing subsets of parameters, and testing goodness of fit of a family of discrete distributions. © 2001 Elsevier Science B.V. All rights reserved.

MSC: 62F03; 62F30

Keywords: Order restricted inference; Uniformly most powerful tests; Constraint insensitive; Complete sufficient statistics; Interference in genetic maps

1. Introduction

Consider a hypothesis testing problem concerning a k × 1 vector of parameters, θ. Assume the parameter space is Θ. Let Ω be a subset of Θ and let the null hypothesis be H0 : θ ∈ Ω0 where Ω0 is a subset of Ω. The alternative is H1 : θ ∈ Ω \ Ω0. Define ω to be the common boundary of H0 and H1. For an observed random vector X, let φ(x) be the test function which represents the conditional probability of rejecting H0 for the observed x. Let βφ(θ) represent the probability of rejecting H0 for the test φ(x). A similar test is one for which βφ(θ) is constant for θ ∈ ω. In some cases, one begins by looking for a statistic that is both


sufficient and complete for the family of distributions belonging to the boundary. The next step is to study the conditional problem given the sufficient complete statistic found in the first step. This follows as every similar test must have Neyman structure (i.e., must have conditional size α). Typically the parameter space of this conditional distribution will be of lower dimension than the original parameter space. The methodology offered in Lehmann (1986) on this topic is widely used and studied extensively.

Recently Perlman and Wu (1999) gave an example in which they felt that a given admissible similar test

(in fact it was a uniformly most powerful unbiased test) was undesirable. They argued that the admissible similar test did not make practical sense. Furthermore they contrasted that test with the likelihood ratio test (LRT). They concluded, based on intuitive grounds, that the LRT should be preferred even though the latter test was not similar. Their example dealt with a situation where the parameter space was order restricted. The uniformly most powerful unbiased test in this situation ignored the order restrictions on the parameter space.

In this paper, we study the issue of similar tests when the parameter space is constrained. Constrained

parameter spaces include those that are commonly called order restricted parameter spaces. We seek to delineate cases where admissible similar tests are constraint insensitive (CI) and cases where they are constraint sensitive (CS). In cases where the test is CI one may not wish to use such a test. One main contribution of the paper is the mere recognition that an admissible similar test, possibly even a UMPU test, in some generality, can be CI and perhaps should not be used. In other words, the standard approach may very well be inappropriate. We offer a systematic method that can be used to determine whether a test is CI or not. The method will be applied to three classes of models. All three classes of models entail exponential family distributions. The last of the three models is particularly interesting since it is related to interference, an issue in genetic maps. We now indicate the three models and the results connected with each.

In the first model, we assume we have k independent variables, each with the same exponential family

density except that the parameters may have different values. We test homogeneity of the parameters vs. the alternative that the parameter lies in a polyhedral cone. Conditions on the cone will be specified later. Such a class of models is studied in detail in several chapters of Robertson et al. (1988). For all the examples within this section we show that any admissible similar test is CS.

The second class of models is motivated by the example given in Perlman and Wu (1999). Here, again we

assume we have k independent variables, each with a one-parameter exponential family density that differs only in the parameter. This time however, the null hypothesis is that the first k − r parameters equal specified values while the totality of all parameters satisfy some linear inequalities. The alternative consists of parameter points that satisfy the linear inequalities, but excludes null points. The precise model is specified later in Section 4. If the linear inequalities (order restrictions) are such that the k parameters lie in a rectangular set, then any admissible similar test is CS. On the other hand, if the order restrictions are such that the parameters do not lie in a rectangular set, then there are admissible similar tests which are CI. In Perlman and Wu’s example the parameters are restricted to lie in a non-rectangular set, and so in their example the admissible similar test is CI.

The final class of models deals with testing goodness of fit of a one-parameter discrete distribution with an

exponential family mass function; for example, binomial, Poisson, truncated Poisson, etc. Testing goodness of fit of Poisson or truncated Poisson when cell probabilities satisfy linear inequalities (order restrictions) describes the problem of testing for interference in genetic maps. See for example, Ott (1996). In such instances, under appropriate conditions which are not very restrictive, there are admissible similar tests for goodness of fit that are found to be CI.

In Section 2, we give definitions that enable us to formalize the problem. We introduce the notion of a

test which is CI and provide the systematic method that can be used to determine whether a test is CI or CS. In Section 3, we study order restricted models where the null hypothesis is homogeneity of parameters. In Section 4, we study the problem of testing the null hypothesis in which a proper subset of parameters is specified, while all the parameters lie in a restricted space. Finally in Section 5, we study goodness of fit when the parameters of an underlying multinomial distribution satisfy linear constraints.


2. Definitions

Let X be a k × 1 vector with exponential family density fX(x; θ), where θ is a k × 1 vector representing the natural parameter. Let the mean vector EX = μ. Let M be the space of all μ. Let Ω be a convex subset of M and let the null hypothesis be H0 : μ ∈ Ω0 where Ω0 is a subset of Ω. The alternative is H1 : μ ∈ Ω \ Ω0. The common boundary of H0 and H1 is denoted by ω.

Assume that Y(2), an r × 1 random vector, exists for some 1 ≤ r < k, such that Y(2) is a complete sufficient statistic for the family of distributions when μ ∈ ω. Let μ(2) = EY(2). Assume further that the exponential family density fX(x; θ(μ)) can be written as

fX|Y(2)(x | y(2); ν(1)) fY(2)(y(2); ν(1), ν(2)),     (2.1)

where fX|Y(2) is the conditional density of X given Y(2) and this conditional density is exponential family with natural parameter ν(1); fY(2) is the marginal density of Y(2), which depends on ν(1), ν(2), and this marginal distribution is also exponential family. The vector ν = (ν(1)′, ν(2)′)′ is a 1–1 transformation of θ and a 1–1 transformation of μ. Furthermore define Λ, Λ0 as follows:

Λ = {ν : ν = ν(μ), μ ∈ Ω}  and  Λ0 = {ν : ν = ν(μ), μ ∈ Ω0}.

The first definition is motivated by the following: In some parameter constrained problems, if one actually knew ν(2), then this would provide important information concerning ν(1). Now Y(2) definitely contains information about ν(2). Thus, we are led to a connection between Y(2) and ν(1).

Definition 2.1. The random vector Y(2) contains no constraint information concerning ν(1) if every section of ν(1) given ν(2), i.e., ν(1)(ν(2)), in Λ is the same. Otherwise Y(2) does contain constraint information concerning ν(1).

Definition 2.1 means that when Y(2) contains no constraint information concerning ν(1), the set Λ is a Cartesian product of a set in the space of ν(1) and a set in the space of ν(2).

At this point we note that the original hypothesis testing problem is equivalent to H0 : ν ∈ Λ0 vs. H1 : ν ∈ Λ \ Λ0.

Definition 2.2. The original hypothesis testing problem is said to be made “larger” if the set Ω is replaced by a larger convex set Ω̃. That is, Ω is a proper subset of Ω̃. (Note that the specification of parameters determining Ω0 within Ω may entail a larger set Ω̃0 as a subset of Ω̃ in the larger problem.) The larger problem then is H0 : μ ∈ Ω̃0 vs. H1 : μ ∈ Ω̃ \ Ω̃0.

Definition 2.3. If φ(x) is an admissible similar test for a problem when Y(2) contains constraint information about ν(1) and φ(x) is admissible similar for a larger problem in which Y(2) does not contain constraint information about ν(1), then φ(x) is said to be CI. Further, if φ(x) is not CI, φ(x) is CS.

Thus the approach to determining whether an admissible similar test is CI or CS is to follow these steps.
1. For the given model and testing problem determine Y(2).
2. Determine μ(2) = EY(2).
3. Find the conditional distribution of X|Y(2) and determine if Y(2) contains constraint information concerning ν(1). If Y(2) does contain information, go to the next step. If Y(2) does not contain information, then a given admissible similar test is CS.


4. Find a larger problem for which the given admissible similar test is still admissible similar but for which Y(2) contains no constraint information concerning ν(1). If this can be done, then the given admissible similar test is CI. Otherwise it is CS.
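As a rough numerical illustration of step 3 (added here, not part of the original paper), the following Python sketch discretizes the constraint set of Example 4.1 in Section 4, Ω = {μ1 ≥ 0, μ2 ≥ aμ1}, and compares the μ1-sections at several values of μ2 with the sections obtained after enlarging Ω; the value a = 1 and the grid are arbitrary choices made only for illustration.

```python
import numpy as np

# Grid-based sketch of step 3: do the mu1-sections of the constraint set
# change with mu2?  Constraint set (Example 4.1): mu1 >= 0, mu2 >= a*mu1.
a = 1.0                                  # assumed slope, illustration only
mu1_grid = np.linspace(0.0, 5.0, 501)

def mu1_section(mu2, enlarged=False):
    """Grid points mu1 compatible with the constraints at this mu2."""
    if enlarged:                         # enlarged set: only mu1 >= 0
        return mu1_grid[mu1_grid >= 0.0]
    return mu1_grid[(mu1_grid >= 0.0) & (mu2 >= a * mu1_grid)]

for mu2 in [0.0, 1.0, 2.0]:
    print(mu2, mu1_section(mu2).max(), mu1_section(mu2, enlarged=True).max())
# The sections vary with mu2 in the original set (so Y(2) carries constraint
# information) but are identical in the enlarged set.
```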

3. Constraint sensitive tests in some order restricted models

In this section, we consider a set of models studied in Robertson et al. (1988). Let Xij, i = 1, ..., k, j = 1, 2, ..., n, be independently distributed according to a one-parameter exponential family with density (with respect to Lebesgue measure or counting measure)

fXi(x; μi) = exp[π(μi)x + S(x) + q(μi)]     (3.1)

for x ∈ A, μi ∈ (μ̲, μ̄), −∞ ≤ μ̲ < μ̄ ≤ ∞. Assume π(μi) is differentiable, π′(μi) > 0 and q′(μi) = −μi π′(μi). It follows that EXij = μi. Also assume

π(μ̄) − π(μ̲) = ∞.     (3.2)

Notice that this assumption includes the normal distribution with unknown mean, the Poisson distribution, the binomial distribution, and the gamma distribution.
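As a quick worked check (an added illustration, not part of the original text), the N(μi, 1) density already has the form (3.1) and satisfies (3.2):

```latex
% N(mu_i, 1) written in the form (3.1):
\[
  f_{X_i}(x;\mu_i)=\exp\!\Big[\mu_i x-\tfrac{x^2}{2}-\log\sqrt{2\pi}-\tfrac{\mu_i^2}{2}\Big],
  \qquad \pi(\mu_i)=\mu_i,\quad S(x)=-\tfrac{x^2}{2}-\log\sqrt{2\pi},\quad q(\mu_i)=-\tfrac{\mu_i^2}{2},
\]
% so q'(mu_i) = -mu_i = -mu_i * pi'(mu_i), consistent with E X_{ij} = mu_i, and
\[
  \pi(\bar\mu)-\pi(\underline{\mu})=\infty-(-\infty)=\infty .
\]
```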

At this point we may, without loss of generality, take n = 1 and replace Xij with Xi. We let X = (X1, ..., Xk)′ and μ = (μ1, ..., μk)′. Also let B be a (k − 1) × k matrix whose rows are contrasts and consider the polyhedral cone

C = {μ : Bμ ≥ 0}.     (3.3)
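For instance (an added illustration), with k = 3 the simple order μ1 ≤ μ2 ≤ μ3, mentioned in the Remark below, corresponds to the contrast matrix

```latex
\[
  B=\begin{pmatrix}-1 & 1 & 0\\ 0 & -1 & 1\end{pmatrix},
  \qquad
  C=\{\mu : B\mu\ge 0\}=\{\mu : \mu_1\le\mu_2\le\mu_3\}.
\]
% Each row of B sums to zero (a contrast) and the two rows are linearly independent.
```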

Assume that the rows of B are linearly independent and non-redundant in the sense that no subset of them could describe C. The testing problem is H0 : μ ∈ Ω0, where Ω0 = {μ : Bμ = 0 (i.e., μ1 = μ2 = ··· = μk)} vs. H1 : μ ∈ Ω \ Ω0 where Ω = {μ : Bμ ≥ 0 (i.e., μ ∈ C)}.

Now recognize that Y(2) = ∑_{i=1}^k Xi is a complete sufficient statistic under H0 and (X|Y(2)) has an exponential family distribution whose natural parameter is labeled ν(1). Also note that μ(2) = ∑_{i=1}^k μi and so kμ̲ ≤ μ(2) < kμ̄.

We now give a theorem that specifies a sufficient condition that implies that Y(2) contains no constraint information concerning ν(1). This in turn implies that any admissible similar test is CS for this model.

Theorem 3.1. Suppose that the cone C in (3.3) is such that a minimal (maximal) element of μ exists. That is, there is an index i∗ such that μi∗ = min(μ1, ..., μk) for every μ ∈ C. Then Y(2) contains no constraint information concerning ν(1).

Remark. Cones that contain a minimal (maximal) element include the simple order cone, the tree order cone, the umbrella cone, and the star-shaped cone. See Robertson et al. (1988) for definitions and discussion of these cones.

Proof of Theorem 3.1. First, without loss of generality, we take μk as a minimal element of μ in C. Now use (3.1) to write the joint density of X1, ..., Xk and then express the joint density as a product of the conditional density of X given Y(2), times the marginal density of Y(2). Using Lemma 8 in Chapter 2 of Lehmann (1986), the conditional density may be expressed as

Cy(2)(ν(1)) exp[∑_{i=1}^{k−1} xi(π(μi) − π(μk))] dξy(2)(x(1)),     (3.4)

where x(1) = (x1, ..., xk−1)′ and ν(1) = (ν11, ν21, ..., ν(k−1)1)′, νi1 = π(μi) − π(μk), and ξy(2) is a measure depending on y(2). Now recall that EY(2) = μ(2) is such that kμ̲ ≤ μ(2) < kμ̄, that π is a strictly increasing function


of μi, that μk is a minimal element, and assumption (3.2). It follows that for every μ ∈ C, regardless of ν(2), 0 ≤ νi1 < ∞, i = 1, 2, ..., k − 1. This completes the proof of Theorem 3.1.

4. Constraint insensitive tests for testing a proper subset of parameters

We start this section with an example, which is only a slight modification of the setup in Perlman and Wu (1999).

Example 4.1. Let X1, X2 be independent normal with means μ1, μ2 respectively and with common variance one. Let X = (X1, X2)′, μ = (μ1, μ2)′. Let

Ω = {μ : μ1 ≥ 0, μ2 ≥ aμ1},  a ≠ 0.     (4.1)

Test H0 : μ ∈ Ω0, where Ω0 = {μ : μ1 = 0, μ ∈ Ω} vs. H1 : μ ∈ Ω \ Ω0. Note that Y2 = X2 is a sufficient complete statistic on ω, the boundary of H0 and H1. Note also that the test which rejects H0 when X1 > C (C a constant) is an admissible similar test. (The test is uniformly most powerful unbiased.) Now observe that if a > 0 and μ2 = 0, then the only possible value for μ1 is zero, i.e., the section in μ1 at μ2 = 0 is the point (0, 0). The section in μ1 at μ2 = 1, say, is the interval (0, 1/a). If a < 0, then the section in μ1 at μ2 = a is (1, ∞) while the section at μ2 = 0 is (0, ∞). This analysis demonstrates that X2 contains constraint information concerning μ1.

Next, consider the larger problem in which H0 : μ ∈ Ω̃0 = {μ : μ1 = 0, μ ∈ Ω̃} vs. H1 : μ ∈ Ω̃ \ Ω̃0, where Ω̃ = {μ : μ1 ≥ 0, −∞ < μ2 < ∞}. For this larger problem X2 contains no constraint information concerning μ1. Furthermore the test which rejects when X1 > C is an admissible similar test for the larger problem as well as the smaller problem. Thus this uniformly most powerful unbiased test is CI.
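To make the similarity claim concrete, here is a minimal Monte Carlo sketch (not from the original paper; the level, sample sizes and values of μ2 are arbitrary choices): on the boundary μ1 = 0 the rejection rule X1 > C never involves X2, so its rejection probability stays at the nominal level whatever μ2 is, which is exactly why it cannot react to the constraint μ2 ≥ aμ1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05
C = norm.ppf(1 - alpha)                       # reject H0 when X1 > C

# On the boundary (mu1 = 0) the rejection probability equals alpha for
# every mu2, so the test is similar; it simply never looks at X2.
for mu2 in [0.0, 0.5, 2.0]:
    x1 = rng.normal(0.0, 1.0, size=200_000)   # X1 with mu1 = 0
    x2 = rng.normal(mu2, 1.0, size=200_000)   # X2 is generated but unused
    print(f"mu2 = {mu2:3.1f}  rejection rate ~ {np.mean(x1 > C):.3f}")
```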

To generalize Example 4.1, we consider a k × 1 random vector X consisting of independent normal variables with mean vector μ and covariance matrix I. Partition X = (X(1)′, X(2)′)′, μ = (μ(1)′, μ(2)′)′ with X(2) being of order r × 1. Test H0 : μ ∈ Ω0, where Ω0 = {μ : μ(1) = 0, μ ∈ Ω} vs. H1 : μ ∈ Ω \ Ω0. Here Ω is a polyhedral cone whose generators are the rows of the matrix A where

A = ( A11  A12
      A21  A22 ),     (4.2)

A has rank k; A11 is an r × (k − r) matrix of zeros, A12 is the r × r identity matrix, A21 is the (k − r) × (k − r) matrix all of whose elements are non-negative (with at least one positive element in each row), and A22 is of order (k − r) × r such that each row has a non-zero element. Under these conditions we prove

Theorem 4.1. There are admissible similar tests which are constraint insensitive.

Proof. A sufficient complete statistic under ω is X(2). We show first that X(2) contains constraint information concerning μ(1). To do this, assume all the elements of A22 are non-negative. Then the section at μ(2) = 0 reduces to μ(1) = 0. On the other hand, take μ(2) to be the first row of A22. The μ(1) section at this μ(2) contains the first row of A21, which is not a zero vector. Thus the μ(1) sections at μ(2) = 0 and at μ(2) equal to the first row of A22 are different.

Next suppose a row of A22 has a negative element (without loss of generality, let it be the first row). Then the μ(1) section at μ(2) = 0 contains the point μ(1) = 0. On the other hand, the μ(1) section at μ(2) equal to the first row of A22 cannot contain the point μ(1) = 0. For suppose it did. Denoting the rows of A by a(i), i = 1, 2, ..., k, we would have ∑_{i=1}^k ξi a(i) is a vector whose first (k − r) components are zero, with ξi ≥ 0. The last r components would be the first row of A22. To achieve this, ξr+1, ..., ξk must be zero. But


then the last r components are precisely (ξ1, ..., ξr)′, all of which are greater than or equal to zero, whereas the first row of A22 has a negative element. This contradiction shows that the μ(1) sections vary with μ(2) and confirms that X(2) contains constraint information concerning μ(1).

The larger problem is when Ω̃ = {μ : μ(1) ≥ 0}. Under these conditions it is clear that there are admissible similar tests which are CI.

The above result for a normal model can be generalized to a binomial, Poisson, or gamma model where X is the random vector of independent components and μ is the vector of means. Test H0 : μ ∈ Ω0 where Ω0 = {μ : μ(1) = μ(1)0, μ ∈ Ω} vs. H1 : μ ∈ Ω \ Ω0, where Ω is defined similarly to the model in Theorem 4.1 and μ(1)0 is a specified point in the interior of the space of means. In other words, Ω is the intersection of a shifted cone and the original parameter space. The role of μ(1)0 is the same as the role of μ(1)0 = 0 in the normal case.

5. Constraint insensitive tests of goodness of fit

The model in this section entails letting U, U1, ..., Un be independent discrete random variables, each taking on values 0, 1, 2, ..., with probabilities p = (p0, p1, p2, ...)′, respectively. Let Xi, i = 0, 1, 2, ..., be the number of Uj, j = 1, ..., n, such that Uj = i. Thus

P{X0 = x0, X1 = x1, ...} = (n! / ∏_{i=0}^∞ xi!) ∏_{i=0}^∞ pi^{xi},   ∑_{i=0}^∞ xi = n,  ∑_{i=0}^∞ pi = 1.     (5.1)

Test H0: The distribution of U has a mass function

fU(u) = C(μ) h(u) e^{uπ(μ)},     (5.2)

where μ = EU and π(μ), the natural parameter, is an increasing function of μ; μ is unknown. Further, the p determined from (5.2) is restricted to lie in Ω, where Ω is a proper subset of the set of all possible p. The set Ω is determined by a set of linear constraints of the form

p = Aq,     (5.3)

where A is an (M + 1) × (N + 1) matrix of rank min(M + 1, N + 1), M ≥ 1, N ≥ 1, all of whose elements are non-negative. Also q = (q0, ..., qN)′ is such that ∑_{i=0}^N qi = 1, qi ≥ 0, i = 1, 2, ..., N. (When testing for a

discrete distribution whose support is 0, 1, 2, ..., we will pool cells corresponding to pM, pM+1, ... and use XM + XM+1 + ··· to be the observed frequency in the (M + 1)th cell.) The alternative hypothesis is H1 : p ∈ Ω.

We recognize at this point that, in the case where U has a Poisson distribution with unknown parameter λ, and A is an N × N matrix whose (i, j)th element is

aij = (j choose i)(1/2)^j,  i ≤ j,
aij = 0,  i > j,

the problem is that of testing for interference in genetic maps. See Ott (1996).
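As a small numerical sketch (added for illustration; the truncation level N below is an arbitrary choice), this matrix can be tabulated directly, and each of its columns sums to one, so p = Aq is again a probability vector whenever q is:

```python
from math import comb

import numpy as np

# a_ij = C(j, i) * (1/2)**j for i <= j, and 0 otherwise (indices start at 0).
N = 4                                    # assumed truncation level, illustration only
A = np.array([[comb(j, i) * 0.5 ** j if i <= j else 0.0
               for j in range(N + 1)]
              for i in range(N + 1)])

print(A)
print(A.sum(axis=0))                     # every column sums to 1
q = np.array([0.1, 0.2, 0.3, 0.2, 0.2])  # any probability vector q
print(A @ q, (A @ q).sum())              # p = Aq is again a probability vector
```

We now have the following result.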

Theorem 5.1. Suppose the (M + 1) × (N + 1) matrix A is such that the set Ω satisfies the following:

The set of p that satisfy the null hypothesis, i.e., p ∈ Ω and pi = C(μ)h(i)e^{iπ(μ)},
i = 0, 1, 2, ..., M, make up the common boundary of H0 and H1;     (5.4)

p∗ = Aq∗ for q∗ = (0, 0, ..., 1)′     (5.5)

is such that p∗ is an interior point of the simplex SM = {p : pi ≥ 0, i = 0, 1, 2, ..., M, ∑_{i=0}^M pi = 1};

{p : ∑_{i=1}^M i pi = ∑_{i=1}^M i pi∗} ∩ Ω = {p∗};     (5.6)

Ω contains an open set in SM.     (5.7)

Then there are admissible similar tests which are CI.

Proof. A complete sufficient statistic under H0 is Y2 = ∑_{i=1}^M i Xi. Now, as in Cohen and Sackrowitz (1987) (CS), let Y1′ = (X2, X3, ..., XM), let pj = e^{γj}, j = 0, 1, 2, ..., M,

νj = (j − 1)γ0 − jγ1 + γj,  j = 2, 3, ..., M.     (5.8)
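A short derivation of (5.8), added here as a reading aid (it is the standard elimination argument, not reproduced from Cohen and Sackrowitz, 1987): substituting x1 = y2 − ∑_{j≥2} j xj and x0 = n − y2 + ∑_{j≥2} (j − 1) xj into the multinomial exponent ∑_j xj γj gives

```latex
\[
  \sum_{j=0}^{M} x_j\gamma_j
  = n\gamma_0 + y_2(\gamma_1-\gamma_0)
    + \sum_{j=2}^{M} x_j\bigl[(j-1)\gamma_0 - j\gamma_1 + \gamma_j\bigr],
\]
% so, given Y_2 = y_2, the coefficient of x_j is exactly the nu_j of (5.8).
```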

Note from Cohen and Sackrowitz (1987) that the conditional distribution of Y1 given Y2 is exponential family with natural parameter ν1′ = (ν2, ν3, ..., νM). Next recognize that Y2 contains constraint information concerning ν1. To see this we have μ2 = ∑_{i=1}^M i pi, and when μ2 = ∑_{i=1}^M i pi∗, the only value of ν1 that is possible, by virtue of (5.6), is (ν2∗, ..., νM∗)′ where, from (5.8), νj∗ = (j − 1)γ0∗ − jγ1∗ + γj∗ and pj∗ = e^{γj∗}. Next, (5.7) implies that when μ2 = ∑_{i=1}^M i pi∗ + ε or μ2 = ∑_{i=1}^M i pi∗ − ε for small ε > 0, the ν1(μ2) section consists of more than one point and so some νj’s can take on more values than νj∗.

The larger problem to consider is when Ω = SM. In this case Y2 contains no constraint information concerning ν1 since each νj ranges over (−∞, ∞).

When H0 is true and conditioning is done on Y2, the vector parameter ν1 = ν1^0, i.e., in the conditional problem the null hypothesis is simple. Depending on the dimension M, there can be many tests which are admissible and similar for both the original problem and the larger problem. These admissible similar tests then are CI.

Remark 5.2. The assumptions in Theorem 5.1 concerning Ω are surely not necessary conditions. They were chosen in deference to the problem of testing for interference. It is clear that many other types of Ω sets would give the same result.

Remark 5.3. Another type of example that indicates the delicacy of the similar test phenomena is as follows: Let X1, ..., Xk be independent normal with each Xi having a N(μi, 1) distribution. Let H0 : μ ∈ Ω0, where Ω0 = {μ : μ1 = 0, μj ∈ [0, 1], j = 2, ..., k}. Let H1 : μ ∈ Ω \ Ω0 where Ω = {μ : −∞ < μ1 < ∞, μj ∈ [0, 1], j = 2, ..., k}. The complete sufficient statistic under H0 is (X2, ..., Xk). The conditional problem depends only on X1. The UMPU test, which does not depend on (X2, ..., Xk), simply rejects if |X1| > C. The same test is similar for the hypothesis H0 : μ ∈ Ω0 where Ω0 = {μ : μ1 = 0, −∞ < μj < ∞, j = 2, ..., k} vs. H1 : μ ∈ Ω \ Ω0 where Ω = {μ : −∞ < μj < ∞, j = 1, 2, ..., k}. Thus we see that large increases or decreases in the size of the boundary set of H0 and H1 need not affect the UMPU similar test.

Remark 5.4. The notions of CS test and CI test need not be limited to similar tests. Some extensions to Bayes tests can be given. However we focused on similar tests because one is accustomed to seeking unbiased or uniformly most powerful unbiased tests. Such optimal tests require similarity for models in which probability of rejection functions are continuous. The results here, connected with similar tests, are felt to be the most interesting.

Remark 5.5. Warrack and Robertson (1984) discuss a problem with order restrictions. In their problem the likelihood ratio test is dominated in power by a test that essentially ignores the order constraints in the null hypothesis space. The dominating test is not similar so the model of Warrack and Robertson (1984) does not quite fit into the setup of this paper. Nevertheless, the dominating test is in some sense constraint insensitive. Perlman and Wu (1999) discuss the Warrack and Robertson (1984) paper.

Remark 5.6. Likelihood ratio tests would typically be CS.

Remark 5.7. Tests that are constraint insensitive often times may have undesirable features. One must weigh the pros and cons of competing tests. Furthermore, tests that are constraint sensitive may sometimes have different undesirable properties. Once again these must be weighed against the benefits of the tests and competitive tests.

References

Cohen, A., Sackrowitz, H.B., 1987. Admissibility of goodness of fit tests for discrete exponential families. Statist. Probab. Lett. 5, 1–3.
Lehmann, E.L., 1986. Testing Statistical Hypotheses, 2nd Edition. Wiley, New York.
Ott, J., 1996. Estimating crossover frequencies and testing for numerical interference with highly polymorphic markers. In: Speed, T., Waterman, M.S. (Eds.), Genetic Mapping and DNA Sequencing. Springer, New York.
Perlman, M.D., Wu, L., 1999. The emperor’s new tests. Statist. Sci. 14, 355–381.
Robertson, T., Wright, F.T., Dykstra, R.L., 1988. Order Restricted Inference. Wiley, New York.
Warrack, G., Robertson, T., 1984. A likelihood ratio test regarding two nested but oblique order-restricted hypotheses. J. Amer. Statist. Assoc. 79, 881–886.