
INVARIANCE PRINCIPLES FOR SOME STOCHASTIC PROCESSES RELATING TO M-ESTIMATORS AND THEIR ROLE IN SEQUENTIAL STATISTICAL INFERENCE

by

Jana Jurečková
Charles University, Prague, Czechoslovakia

and

Pranab Kumar Sen¹
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1213

February 1979



ABSTRACT

Weak convergence of certain two-dimensional time-parameter

stochastic processes related to M-estimators is studied here. These

results are then incorporated in the study of the asymptotic properties

of bounded length (sequential) confidence intervals as well as

sequential tests (for regression) based on M-estimators.

AMS Subject Classification Nos: 60F05, 62G99, 62L99

Key Words & Phrases: Asymptotic linearity, bounded length (sequential) confidence interval, coverage probability, D[0, 1] space, M-estimator, operating characteristic, sequential tests, stopping number, weak convergence, Wiener process.

¹Work of this author was partially supported by the National Heart, Lung and Blood Institute, Contract No. NIH-NHLBI-71-2243 from the (U.S.) National Institutes of Health.


1. INTRODUCTION

Let $X_1, \ldots, X_n$ be independent random variables (rv) with continuous distribution functions (df) $F_1, \ldots, F_n$, respectively, all defined on the real line $R$. It is assumed that $F_i(x) = F(x - \Delta^0 c_i)$, $x \in R$, $i \geq 1$, where $F$ is unknown, $c_1, \ldots, c_n$ are given constants and $\Delta^0$ is an unknown parameter. An M-estimator $\hat{\Delta}_n$ of $\Delta^0$ [cf. Huber (1973)] is a solution (for $\Delta$) of the equation

$\sum_{i=1}^{n} c_i \psi(X_i - \Delta c_i) = 0$,  (1.1)

where $\psi$ is some specified score function.
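The defining equation (1.1) is easy to solve numerically when $\psi$ is nondecreasing, since $\sum_i c_i \psi(X_i - \Delta c_i)$ is then nonincreasing in $\Delta$ and a simple bisection applies. The following sketch is an editorial illustration, not part of the original paper; the Huber score and all variable names are our own choices.

```python
import random

def psi_huber(x, k=1.345):
    # Huber's score function: identity near 0, bounded outside [-k, k]
    return max(-k, min(k, x))

def m_estimate(x, c, psi=psi_huber, lo=-100.0, hi=100.0, tol=1e-10):
    """Solve sum_i c_i * psi(x_i - delta * c_i) = 0 by bisection.

    d/d(delta) of each summand is -c_i**2 * psi'(.) <= 0 for
    nondecreasing psi, so S(delta) is nonincreasing and bisection works."""
    def S(delta):
        return sum(ci * psi(xi - delta * ci) for xi, ci in zip(x, c))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if S(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

random.seed(1)
n, delta0 = 400, 2.0
c = [1.0 + (i % 3) for i in range(n)]               # known regression constants
x = [delta0 * ci + random.gauss(0, 1) for ci in c]  # X_i = delta0*c_i + noise
delta_hat = m_estimate(x, c)
```

With 400 observations the estimate lands very close to the true slope; the bounded score keeps the solve insensitive to occasional gross errors in the data.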

Various asymptotic properties of $\hat{\Delta}_n$ have been studied by various workers; we may refer to Huber (1973) and Jurečková (1977), where other references are also cited. In this context, one encounters a stochastic process $M_n = \{M_n(\Delta) = \sum_{i=1}^{n} c_i[\psi(X_i - \Delta d_i) - \psi(X_i)],\ \Delta \in R\}$ (where $d_1, \ldots, d_n$ are suitable constants), and the asymptotic linearity of $M_n(\Delta)$ in $\Delta$ plays a vital role in the asymptotic theory of $\hat{\Delta}_n$ [see Jurečková (1977)]. In the context of sequential confidence intervals for $\Delta^0$ as well as sequential tests for $\Delta^0$ based on M-estimators, we need to strengthen the asymptotic linearity results to random sample sizes, and, in this context, some invariance principles for certain two-dimensional time-parameter stochastic processes related to $\{M_n\}$ are found to be very useful. With these in mind, in Section 2, we formulate these invariance principles for $\{M_n\}$ and present their proofs in Section 3. Section 4 deals with the problem of bounded-length (sequential) confidence intervals for $\Delta^0$ based on M-estimators, and some asymptotic properties of these procedures are studied with the aid of the invariance principles in Section 2. The last section is


devoted to the study of the asymptotic properties of some sequential tests for $\Delta^0$ based on M-estimators.

2. SOME INVARIANCE PRINCIPLES RELATED TO M-ESTIMATORS

For technical reasons, in this section, we replace the $X_i$ by a triangular array $\{(X_{n1}, \ldots, X_{nn});\ n \geq 1\}$ of rv's and assume that the following conditions hold.

(A) For every $n\ (\geq 1)$, the $X_{ni}$, $i \geq 0$, are independent and identically distributed (i.i.d.) rv with a continuous df $F$, defined on $R$.

(B) $\psi: R \to R$ is nonconstant and absolutely continuous on any bounded interval in $R$. Let $\psi^{(1)}$ be the derivative of $\psi$ and assume that

(i) $\gamma_1(\psi, F) = \int_{-\infty}^{\infty} \psi^{(1)}(x)\,dF(x) \neq 0$,  (2.1)

(ii) $\int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x) < \infty$,  (2.2)

(iii) $0 < \sigma_1^2 = \int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x) - \gamma_1^2(\psi, F) < \infty$.  (2.3)

(C) Let $\{c_{ni},\ i \geq 0;\ n \geq 0\}$ and $\{d_{ni},\ i \geq 0;\ n \geq 0\}$ be two triangular arrays of real constants, such that if $\{k_n\}$ is any sequence of positive integers for which

$k_n \uparrow \infty$ as $n \to \infty$,  (2.4)

then

$\lim_{n\to\infty}\{\max_{i \leq k_n} c_{ni}^2 / \sum_{j \leq k_n} c_{nj}^2\} = 0$,  (2.5)

$\lim_{n\to\infty}\{\max_{i \leq k_n} d_{ni}^2 / \sum_{j \leq k_n} d_{nj}^2\} = 0$,  (2.6)

$\lim_{n\to\infty}\{\max_{i \leq k_n} c_{ni}^2 d_{ni}^2 / \sum_{j \leq k_n} c_{nj}^2 d_{nj}^2\} = 0$,  (2.7)

$D_n^2 = \sum_{j \leq k_n} d_{nj}^2 \leq D^{*2} < \infty$, $\forall n \geq 1$.  (2.8)
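As an editorial illustration of condition (C), the small check below (our code; the score sequences are arbitrary examples, not taken from the paper) computes the Noether-type ratio in (2.5) and shows that it vanishes for linear scores $c_i = i$ but not for exponentially growing scores.

```python
def noether_ratio(c):
    # max_i c_i^2 / sum_j c_j^2, the quantity required to vanish in (2.5)
    sq = [ci * ci for ci in c]
    return max(sq) / sum(sq)

# Linear scores c_i = i: the ratio behaves like 3/k and tends to 0.
ratios = [noether_ratio([float(i) for i in range(1, k + 1)])
          for k in (10, 100, 1000)]

# Geometric scores c_i = 2**i: one term dominates forever (ratio -> 3/4),
# so the uniform-negligibility condition fails.
bad_ratio = noether_ratio([2.0 ** i for i in range(1, 30)])
```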


Conventionally, we let $c_{n0} = d_{n0} = 0$, $\forall n \geq 0$, and let

$A_{nk}^2 = \sum_{i=0}^{k} c_{ni}^2 d_{ni}^2$, $k \geq 0$; $A_n = A_{nk_n}$, $n \geq 0$,  (2.9)

$M_{nk}(\Delta) = \sum_{i=0}^{k} c_{ni}\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni})\}$, $k \geq 0$, $\Delta \in R$.  (2.10)

Let $K^* = \{(t, \Delta): 0 \leq t \leq k_1^*,\ 0 \leq |\Delta| \leq k_2^*\}\ (= [0, k_1^*] \times [-k_2^*, k_2^*])$ be any compact set in $R^2$ (where $0 < k_1^*, k_2^* < \infty$). We define $W_n = \{W_n(t, \Delta),\ (t, \Delta) \in K^*\}$ by letting

$W_n(t, \Delta) = \{M_{n,n(t)}(\Delta) - E M_{n,n(t)}(\Delta)\}/(\sigma_1 A_n)$, $(t, \Delta) \in K^*$,  (2.11)

$n(t) = \max\{k: A_{nk}^2 \leq t A_n^2\}$, $t \in K_1^* = [0, k_1^*]$; $k_n = n(k_1^*)$.  (2.12)

Then $W_n$ belongs to $D[K^*]$ for every $n \geq 1$. Finally, let $W = \{W(t, \Delta),\ (t, \Delta) \in K^*\}$ be defined by $W(t, \Delta) = \Delta\,\xi(t)$, $(t, \Delta) \in K^*$, where $\xi = \{\xi(t),\ t \in K_1^*\}$ is a standard Wiener process. Then, we have the following.

Theorem 2.1. Under (A), (B) and (C), $\{W_n\}$ converges weakly to $W$.

We are also interested in replacing, in (2.11), $E M_{n,n(t)}(\Delta)$ by a more explicit function of $(t, \Delta)$. For this, we assume that the following holds.

(B′) In addition to (2.1) and (2.2), $\psi^{(1)}$ is absolutely continuous (a.e.); denote its derivative by $\psi^{(2)}$ and assume that

$\lim_{t\to 0} \int_{-\infty}^{\infty} |\psi^{(2)}(x + t) - \psi^{(2)}(x)|\,dF(x) = 0$.  (2.13)

(D) $F$ admits an absolutely continuous probability density function (pdf) $f$ with a finite Fisher information $I(f) = \int (f'/f)^2\,dF$, where $f'(x) = (d/dx)f(x)$.

Let then

$\gamma_2(\psi, F) = \int_{-\infty}^{\infty} \psi^{(2)}(x)\,dF(x)\ \left[= \int_{-\infty}^{\infty} \psi^{(1)}(x)\{-f'(x)/f(x)\}\,dF(x)\right]$,  (2.14)


so that by (2.2), (D) and the Schwarz inequality, $\gamma_2^2(\psi, F) \leq I(f)\int [\psi^{(1)}]^2\,dF < \infty$. Also, let

$h_n(t) = \sum_{i=0}^{n(t)} c_{ni} d_{ni}^2 / A_n$, $t \in K_1^*$.  (2.15)

Note that by (2.8), (2.12) and the Schwarz inequality,

$h_n^2(t) \leq A_n^{-2}\left(\sum_{i=0}^{n(t)} c_{ni}^2 d_{ni}^2\right)\left(\sum_{i=0}^{n(t)} d_{ni}^2\right) \leq t D^{*2} < \infty$, $\forall t \in K_1^*$.  (2.16)

Let us now define $W_n^0 = \{W_n^0(t, \Delta) = \{M_{n,n(t)}(\Delta) + \Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{ni} d_{ni}\}/(\sigma_1 A_n),\ (t, \Delta) \in K^*\}$ and $\bar{W}_n = \{\bar{W}_n(t, \Delta) = W_n^0(t, \Delta) - \tfrac{1}{2}\Delta^2\gamma_2(\psi, F) h_n(t)/\sigma_1,\ (t, \Delta) \in K^*\}$. Then, we have the following.

Theorem 2.2. Under (A), (B′), (C) and (D), $\{\bar{W}_n\}$ converges weakly to $W$; and if, in addition, $\lim_{n\to\infty} h_n(t) = 0$, $\forall t \in K_1^*$, then $\{W_n^0\}$ converges weakly to $W$.

In some applications, we may encounter a process $W_n^* = \{W_n^*(t, \Delta),\ (t, \Delta) \in K^*\}$, where, defining the $M_{nk}(\Delta)$ as in (2.10) and $\{n(t)\}$ as in (2.12), for every $(t, \Delta) \in K^*$,

$W_n^*(t, \Delta) = (\sigma_1 A_n)^{-1}\{M_{n(t),n(t)}(\Delta) - E M_{n(t),n(t)}(\Delta)\}$.  (2.17)

The weak convergence of $\{W_n^*\}$ to some appropriate Gaussian function depends on the interrelationship between the elements of different rows of the triangular arrays $\{c_{ni}\}$ and $\{d_{ni}\}$. Suppose that

$\lim_{n\to\infty} A_n^{-2} \sum_{i=0}^{n(t \wedge s)} c_{n(t)i}\, d_{n(t)i}\, c_{n(s)i}\, d_{n(s)i} = g(s, t)$, $s, t \in K_1^*$,  (2.18)

exists for every $(s, t)$ and is a continuous function of $s, t$, $0 \leq s, t \leq k_1^*$. Further, we assume that there exists a positive number $M$, such that

$\left[\sum_{i=0}^{q} (c_{ki} d_{ki} - c_{qi} d_{qi})^2 + \sum_{i=q+1}^{k} c_{ki}^2 d_{ki}^2\right] \big/ (A_{nk}^2 - A_{nq}^2) \leq M$,  (2.19)

uniformly in $0 \leq q < k \leq k_n$, $n \geq 1$. It is also possible to replace (2.18)-(2.19) by alternative conditions. Then we have the following.


Theorem 2.3. Under (A), (B), (C) and (2.18)-(2.19), $\{W_n^*\}$ converges weakly to a Gaussian function $W^*$, where $W^* = \{W^*(t, \Delta),\ (t, \Delta) \in K^*\}$ has mean 0 and

$E W^*(s, \Delta) W^*(t, \Delta') = \Delta\Delta'\, g(s, t)$, $\forall (s, \Delta), (t, \Delta') \in K^*$.  (2.20)

Finally, if $g(s, t) = s \wedge t$, $\forall (s, t) \in K_1^*$, then $W^* = W$.

Under (A), (B′), (C), (D) and (2.18)-(2.19), in (2.17), we may also replace $E M_{n(t),n(t)}(\Delta)$ by $-\Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{n(t)i} d_{n(t)i} + \tfrac{1}{2}\Delta^2\gamma_2(\psi, F)\sum_{i=0}^{n(t)} c_{n(t)i} d_{n(t)i}^2$, for $(t, \Delta) \in K^*$.

In Sections 4 and 5, we shall encounter a single sequence $\{c_i,\ i \geq 0\}$ and will have $c_{ki} = c_i$, $d_{ki} = c_i/C_k$, where $C_k^2 = \sum_{i=1}^{k} c_i^2$, for $k \geq 1$. In such a case, (2.18) and (2.19) are not difficult to verify. In fact, here, $g(s, t) = s \wedge t$, $\forall s, t \in K_1^*$.

3. PROOFS OF THEOREMS 2.1, 2.2 AND 2.3

First, consider Theorem 2.1. Let us define $W_n^0 = \{W_n^0(t, \Delta),\ (t, \Delta) \in K^*\}$ by letting

$W_n^0(t, \Delta) = -\Delta\left(\sum_{i=0}^{n(t)} c_{ni} d_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\right)\big/\sigma_1 A_n$, $(t, \Delta) \in K^*$,  (3.1)

where $n(t)$ is defined by (2.12). We like to approximate $W_n$ by $W_n^0$. Note that for every $(t, \Delta) \in K^*$,

$E\{W_n^0(t, \Delta) - W_n(t, \Delta)\}^2 = \left[\sum_{i=0}^{n(t)} c_{ni}^2\,\mathrm{var}\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni}) + \Delta d_{ni}\psi^{(1)}(X_{ni})\}\right]\big/\sigma_1^2 A_n^2$

$\leq \sum_{i=0}^{n(t)} c_{ni}^2\, E\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni}) + \Delta d_{ni}\psi^{(1)}(X_{ni})\}^2\big/\sigma_1^2 A_n^2$  (3.2)

$= \left[\sum^*_{n(t)} c_{ni}^2 d_{ni}^2 \Delta^2\, E\{[\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni})]/(-\Delta d_{ni}) - \psi^{(1)}(X_{ni})\}^2\right]\big/\sigma_1^2 A_n^2$,

where $\sum^*_{n(t)}$ extends over all $\{i: i \leq n(t)$ and $d_{ni} \neq 0\}$. Note that by (2.2) and (2.4), for every $h > 0$, $\int_{-\infty}^{\infty} h^{-2}\{\psi(x + h) - \psi(x)\}^2\,dF(x) =$

$\int_{-\infty}^{\infty}\left[h^{-1}\int_0^h \psi^{(1)}(x + u)\,du\right]^2 dF(x) \leq \int_{-\infty}^{\infty} h^{-1}\left(\int_0^h [\psi^{(1)}(x + u)]^2\,du\right) dF(x) \to \int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x)\ (< \infty)$ as $h \downarrow 0$. A similar case holds for $h < 0$. Hence, by Fatou's lemma,

$\lim_{h\to 0} \int_{-\infty}^{\infty} h^{-2}[\psi(x + h) - \psi(x)]^2\,dF(x) = \int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x)$.  (3.3)

Similarly,

$\lim_{h\to 0} \int_{-\infty}^{\infty} h^{-1}[\psi(x + h) - \psi(x)]\psi^{(1)}(x)\,dF(x) = \int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x)$.  (3.4)

From (2.6), (2.8), (3.2), (3.3) and (3.4), we obtain that

$\lim_{n\to\infty}\left[E\{W_n(t, \Delta) - W_n^0(t, \Delta)\}^2 / t\Delta^2\right] = 0$, $\forall (t, \Delta) \in K^*$.  (3.5)

By (3.5) and the Chebyshev inequality, for every (fixed) $m\ (\geq 1)$ and $(t_j, \Delta_j) \in K^*$, $1 \leq j \leq m$, as $n \to \infty$,

$\max_{1 \leq j \leq m} |W_n(t_j, \Delta_j) - W_n^0(t_j, \Delta_j)| \overset{P}{\to} 0$.  (3.6)

Also, by (3.1), for any $\mathbf{b} = (b_1, \ldots, b_m)' \neq \mathbf{0}$, $\sum_{j=1}^{m} b_j W_n^0(t_j, \Delta_j) = \sum_{i=0}^{k_n} e_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}/\sigma_1$, where the $e_{ni}$ depend on $\mathbf{b}$, $t_1, \ldots, t_m$, $\Delta_1, \ldots, \Delta_m$, $\{c_{ni}\}$ and $\{d_{ni}\}$, and by (2.5)-(2.8),

$\max_{0 \leq i \leq k_n} e_{ni}^2 \big/ \sum_{j=0}^{k_n} e_{nj}^2 \to 0$ as $n \to \infty$,  (3.7)

while the $\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}/\sigma_1$ are i.i.d. rv with mean 0 and unit variance. Hence, by a central limit theorem in Hájek and Šidák (1967, p. 153), we conclude that $\sum_{j=1}^{m} b_j W_n^0(t_j, \Delta_j)$ is asymptotically normal. Further, by (2.7) and (3.1),

$E W_n^0(t_j, \Delta_j) W_n^0(t_\ell, \Delta_\ell) \to \Delta_j\Delta_\ell(t_j \wedge t_\ell) = E W(t_j, \Delta_j) W(t_\ell, \Delta_\ell)$,  (3.8)

for every $j, \ell = 1, \ldots, m$. Hence, for every $m\ (\geq 1)$ and $(t_j, \Delta_j) \in K^*$, $1 \leq j \leq m$, the joint distribution of $(W_n^0(t_1, \Delta_1), \ldots, W_n^0(t_m, \Delta_m))$ converges to that of $(W(t_1, \Delta_1), \ldots, W(t_m, \Delta_m))$.  (3.9)

From (3.6) and (3.9), we conclude that the finite-dimensional distributions (f.d.d.) of $\{W_n\}$ converge to those of $W$. Hence, to prove Theorem 2.1, it remains to show that $\{W_n\}$ is tight. For this, for a process $X$, we define the increment over a block $B = \{\mathbf{t}: \mathbf{t}_0 \leq \mathbf{t} \leq \mathbf{t}_1\}$ by

$X(B) = X(\mathbf{t}_1) - X(t_{11}, t_{02}) - X(t_{01}, t_{12}) + X(\mathbf{t}_0)$,  (3.10)

where $\mathbf{t}_j = (t_{j1}, t_{j2})$, $j = 0, 1$. Also, let $B_\delta(\mathbf{t}_0)$ be the block $\{\mathbf{t}: \mathbf{t}_0 \leq \mathbf{t} \leq \mathbf{t}_0 + \delta\mathbf{1}\}$, $(\delta > 0)$. Then, proceeding as in (3.2)-(3.5), we obtain that

$E\{[(W_n - W_n^0)(B_\delta(t, \Delta))]^2\} \leq \eta_n\,\lambda(B_\delta(t, \Delta))$,  (3.11)

where $\lambda(B)$ stands for the Lebesgue measure of the block $B$ and $\lim_{n\to\infty}\eta_n = 0$, uniformly in $\delta\ (> 0)$ and $(t, \Delta) \in K^*$. Thus, from (3.11) and the results of Bickel and Wichura (1971), we conclude that $\{W_n - W_n^0\}$ is tight. Hence, it suffices to show that $\{W_n^0\}$ is tight. Toward this, we note that

$W_n^0(B_\delta(t, \Delta)) = -\delta[\hat{W}_n(t + \delta) - \hat{W}_n(t)]$,  (3.12)

where

$\hat{W}_n(t) = \left(\sum_{i=0}^{n(t)} c_{ni} d_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\right)\big/\sigma_1 A_n$, $t \in K_1^*$.  (3.13)

Hence, $E[\hat{W}_n(t) - \hat{W}_n(s)]^2 \leq (t - s)$, so that $E[W_n^0(B_\delta(t, \Delta))]^2 \leq \delta^3 = [\lambda(B_\delta(t, \Delta))]^{3/2}$, for every $\delta > 0$ and $(t, \Delta) \in K^*$, and hence, the tightness of $\{W_n^0\}$ follows from the results of Bickel and Wichura (1971). Q.E.D.

To prove Theorem 2.2, we note that for every $(t, \Delta) \in K^*$,

$A_n^{-1}\left|E M_{n,n(t)}(\Delta) + \Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{ni} d_{ni} - \tfrac{1}{2}\Delta^2\gamma_2(\psi, F)\sum_{i=0}^{n(t)} c_{ni} d_{ni}^2\right|$

$= A_n^{-1}\left|\sum_{i=0}^{n(t)} c_{ni}\,\tfrac{1}{2}\Delta^2 d_{ni}^2\, E\{\psi^{(2)}(X_{ni} - \theta\Delta d_{ni}) - \psi^{(2)}(X_{ni})\}\right|$  $(0 < \theta < 1)$

$\leq \tfrac{1}{2} k_2^{*2} D^* t^{1/2} \sup_{h: |h| \leq k_2^* d_n^*} \int_{-\infty}^{\infty} |\psi^{(2)}(x + h) - \psi^{(2)}(x)|\,dF(x)$,  (3.14)

where $d_n^* = \max_{1 \leq i \leq k_n} |d_{ni}| \to 0$ as $n \to \infty$. Hence, by (2.6), (2.13) and (3.14), $\sup\{|\bar{W}_n(t, \Delta) - W_n(t, \Delta)|:\ (t, \Delta) \in K^*\} \to 0$ as $n \to \infty$, so that Theorem 2.2 follows from Theorem 2.1.

For Theorem 2.3, in (3.1), we take, for every $(t, \Delta) \in K^*$,

$W_n^{*0}(t, \Delta) = -\Delta\left(\sum_{i=0}^{n(t)} c_{n(t)i}\, d_{n(t)i}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\right)\big/\sigma_1 A_n$.  (3.15)

Then, as in (3.2)-(3.5), $\lim_{n\to\infty} E\{[W_n^*(t, \Delta) - W_n^{*0}(t, \Delta)]^2\} = 0$, $\forall (t, \Delta) \in K^*$, and hence, as in (3.6), $\max_{1 \leq j \leq m} |W_n^*(t_j, \Delta_j) - W_n^{*0}(t_j, \Delta_j)| \overset{P}{\to} 0$ as $n \to \infty$. The convergence of the f.d.d.'s of $\{W_n^{*0}\}$ to those of $W^*$ follows [under (2.18)] as in (3.7)-(3.9). Further, in (3.11), we may replace $W_n$ and $W_n^0$ by $W_n^*$ and $W_n^{*0}$, respectively; (2.19) insures the same inequality. Finally, by (3.10) and (3.15), for every $(t, \Delta) \in K^*$,

$W_n^{*0}(B_\delta(t, \Delta)) = -\delta[\hat{W}_n^*(t + \delta) - \hat{W}_n^*(t)]$;  (3.16)

$\hat{W}_n^*(t) = \left(\sum_{i=0}^{n(t)} c_{n(t)i}\, d_{n(t)i}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\right)\big/\sigma_1 A_n$.  (3.17)

Note that $E[\hat{W}_n^*(t) - \hat{W}_n^*(s)]^2 = A_n^{-2}\{\sum_{i=0}^{n(t)} c_{n(t)i}^2 d_{n(t)i}^2 + \sum_{i=0}^{n(s)} c_{n(s)i}^2 d_{n(s)i}^2 - 2\sum_{i=0}^{n(t \wedge s)} c_{n(t)i} d_{n(t)i} c_{n(s)i} d_{n(s)i}\}$, which, by (2.19), is bounded from

above by $M(t - s)$, $\forall t \geq s$. Thus, $E[W_n^{*0}(B_\delta(t, \Delta))]^2 \leq M\delta^3 = M[\lambda(B_\delta(t, \Delta))]^{3/2}$ for every $(t, \Delta) \in K^*$, $\delta > 0$, and this insures the tightness of $\{W_n^{*0}\}$. Q.E.D.

4. SEQUENTIAL CONFIDENCE INTERVALS FOR $\Delta^0$ BASED ON M-ESTIMATORS

Let us consider the simple regression model $X_i = \Delta^0 c_i + X_i^0$, $i \geq 1$, where the $X_i^0$ are i.i.d. rv with a continuous and symmetric df $F$, defined on $R^1$, the $c_i$ are known regression constants, and we want to provide a bounded-length confidence interval for the unknown parameter $\Delta^0$. Our procedure rests on the M-estimators [viz. (1.1)]. Since $F$ is not specified, no fixed-sample-size procedure exists and, therefore, we take recourse to sequential procedures.

Let us define $C_n^2 = \sum_{i=1}^{n} c_i^2$ and assume that

$\lim_{n\to\infty} n^{-1} C_n^2 = C^2$ exists $(0 < C < \infty)$,  (4.1)

$\lim_{n\to\infty}\{\max_{1 \leq i \leq n} c_i^2/C_n^2\} = 0$.  (4.2)

Define an M-estimator of $\Delta^0$ as

$\hat{\Delta}_n = \tfrac{1}{2}(\hat{\Delta}_n^{(1)} + \hat{\Delta}_n^{(2)})$; $\hat{\Delta}_n^{(1)} = \sup\{a: S_n(a) > 0\}$, $\hat{\Delta}_n^{(2)} = \inf\{a: S_n(a) < 0\}$,  (4.3)

where $S_n(a) = \sum_{i=1}^{n} c_i\psi(X_i - ac_i)$. We assume that $\psi$ is non-constant, nondecreasing and skew-symmetric on $R^1$ ($\Rightarrow S_n(a)$ is $\searrow$ in $a \in R^1$, and hence, $\hat{\Delta}_n$ exists), and, further, we assume that

$0 < \sigma_0^2 = \int_{-\infty}^{\infty} \psi^2(x)\,dF(x) < \infty$.  (4.4)

Then [cf. Jurečková (1977)], as $n \to \infty$,

$C_n(\hat{\Delta}_n - \Delta^0) \overset{\mathcal{D}}{\to} N(0, \nu^2(\psi, F))$,  (4.5)

where, defining $\gamma_1(\psi, F)$ by (2.1),

$\nu(\psi, F) = \sigma_0/\gamma_1(\psi, F)$.  (4.6)

Thus, if $\nu(\psi, F)$ is specified, then for a given $d\ (> 0)$, one can define a desired sample size $n_d$ by letting

$n_d = \min\{n: dC_n \geq \tau_{\alpha/2}\,\nu(\psi, F)\}$  (4.7)

(where $\tau_{\alpha/2}$ is the upper $50\alpha\%$ point of the standard normal df) and take

$I_{n_d}^* = [\hat{\Delta}_{n_d} - d,\ \hat{\Delta}_{n_d} + d]$,  (4.8)

so that by (4.5), (4.7) and (4.8), we have

$\lim_{d\to 0} P\{\Delta^0 \in I_{n_d}^*\} = 1 - \alpha\ (0 < \alpha < 1)$.  (4.9)
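For orientation, the fixed-width sample size (4.7) is straightforward to compute once $\nu(\psi, F)$, $d$ and $\tau_{\alpha/2}$ are given. The sketch below is our editorial illustration (with $c_i \equiv 1$, so that $C_n = \sqrt{n}$, and 1.96 as the familiar value of $\tau_{0.025}$); it also checks the limit relation $d^2 n_d \to [\tau_{\alpha/2}\nu/C]^2$ appearing in (4.21).

```python
import math

def n_d(d, nu, tau=1.96, c=None):
    """Smallest n with C_n >= tau * nu / d, per (4.7); C_n^2 = sum of c_i^2.

    With c_i = 1 for all i (the default), C_n = sqrt(n) and the answer is
    just ceil((tau * nu / d)**2)."""
    target = tau * nu / d
    if c is None:  # c_i identically 1
        return math.ceil(target * target)
    csum, n = 0.0, 0
    for ci in c:
        n += 1
        csum += ci * ci
        if math.sqrt(csum) >= target:
            return n
    raise ValueError("sequence of constants exhausted before reaching target")

# half-width d = 0.1 for a 95% interval when nu(psi, F) = 1:
nd = n_d(0.1, 1.0)
```

Of course, the point of the section is that $\nu(\psi, F)$ is unknown, which is exactly why the sequential rule below is needed.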

Thus, for small $d$, $I_{n_d}^*$ provides a bounded-length confidence interval for $\Delta^0$ with asymptotic coverage probability $1 - \alpha$. In our case, neither $F$ nor $\nu(\psi, F)$ is specified, and hence, the procedure in (4.7)-(4.9) is not usable. For this reason, we take recourse to the Chow-Robbins (1965) type sequential procedure. [See Gleser (1965) for the sequential least squares procedure and Ghosh and Sen (1972) for the sequential rank procedure.] Let us define

$s_n^2 = n^{-1}\sum_{i=1}^{n} \psi^2(X_i - \hat{\Delta}_n c_i) - \{n^{-1}\sum_{i=1}^{n} \psi(X_i - \hat{\Delta}_n c_i)\}^2$,  (4.10)

and

$\hat{\Delta}_{L,n} = \sup\{a: S_n(a) > \tau_{\alpha/2} C_n s_n\}$, $\hat{\Delta}_{U,n} = \inf\{a: S_n(a) < -\tau_{\alpha/2} C_n s_n\}$.  (4.11)

Also, we assume in addition that

$\lim_{h\to 0} E\{\sup_{t: |t| \leq h} [\psi(X_i^0 - t) - \psi(X_i^0)]^2\} = 0$.  (4.12)

Then [cf. Jurečková (1977)], it follows that
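Since $S_n(a)$ is nonincreasing in $a$, the bounds $\hat{\Delta}_{L,n}$ and $\hat{\Delta}_{U,n}$ of (4.11) can be located by bisection, just like the point estimate. The following sketch is an editorial illustration under assumed choices (Huber score, $c_i \equiv 1$); it is not the authors' code.

```python
import math, random

def psi(x, k=1.345):  # Huber score, one admissible nondecreasing choice
    return max(-k, min(k, x))

def S(a, x, c):
    return sum(ci * psi(xi - a * ci) for xi, ci in zip(x, c))

def solve_decreasing(f, level, lo=-100.0, hi=100.0):
    # bisection for the point where a nonincreasing f crosses `level`
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def confidence_bounds(x, c, tau=1.96):
    n = len(x)
    Cn = math.sqrt(sum(ci * ci for ci in c))
    dhat = solve_decreasing(lambda a: S(a, x, c), 0.0)        # (4.3)
    r = [psi(xi - dhat * ci) for xi, ci in zip(x, c)]
    s = math.sqrt(sum(v * v for v in r) / n - (sum(r) / n) ** 2)  # (4.10)
    lower = solve_decreasing(lambda a: S(a, x, c), tau * Cn * s)   # Delta_L
    upper = solve_decreasing(lambda a: S(a, x, c), -tau * Cn * s)  # Delta_U
    return lower, upper

random.seed(2)
n, delta0 = 300, 1.0
c = [1.0] * n
x = [delta0 * ci + random.gauss(0, 1) for ci in c]
lo_b, up_b = confidence_bounds(x, c)
```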

$C_n(\hat{\Delta}_{U,n} - \hat{\Delta}_{L,n}) \overset{P}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F)$, as $n \to \infty$.  (4.13)

[Actually, by (4.2) and the Khintchine law of large numbers, as $n \to \infty$,

$n^{-1}\sum_{i=1}^{n} \psi^r(X_i^0) \to \int_{-\infty}^{\infty} \psi^r(x)\,dF(x)$ a.s., for $r = 1, 2$.  (4.14)

Let $w_n^* = \max\{|(\hat{\Delta}_n - \Delta^0)c_i|:\ 1 \leq i \leq n\}\ (\leq C_n|\hat{\Delta}_n - \Delta^0| \max_{1 \leq i \leq n} |c_{ni}|$, where $c_{ni} = c_i/C_n$, $1 \leq i \leq n$). Then, by (2.5), (4.2) and (4.5), $w_n^* \overset{P}{\to} 0$ as $n \to \infty$. On the other hand, $[w_n^* < \varepsilon]\ (\varepsilon > 0)$ insures that

$\left|n^{-1}\sum_{i=1}^{n} \psi^r(X_i - \hat{\Delta}_n c_i) - n^{-1}\sum_{i=1}^{n} \psi^r(X_i^0)\right| \leq n^{-1}\sum_{i=1}^{n} w_i^{(r)}(\varepsilon)$, $r = 1, 2$,  (4.15)

where

$w_i^{(r)}(\varepsilon) = \sup_{t: |t| \leq \varepsilon} |\psi^r(X_i^0 - t) - \psi^r(X_i^0)|$, $1 \leq i \leq n$, $(r = 1, 2)$,  (4.16)

are i.i.d. rv. By (4.12) and (4.16) along with the Markov inequality, for $\varepsilon > 0$ arbitrarily small, the right-hand side of (4.15) can be made arbitrarily small, in probability. Hence, by (4.14) and (4.15),

$s_n^2 \overset{P}{\to} \sigma_0^2$, as $n \to \infty$,  (4.17)

while the rest of the proof of (4.13) follows as in Jurečková (1977).]

With (4.7)-(4.9) and (4.13) in mind, we consider a sequential procedure where a stopping variable $N_d$ is defined by

$N_d = \min\{n \geq n_0: \hat{\Delta}_{U,n} - \hat{\Delta}_{L,n} \leq 2d\} = \min\{n \geq n_0: \tau_{\alpha/2}\hat{\nu}_n \leq dC_n\}$,  (4.18)

where $n_0\ (\geq 2)$ is an initial sample size and $\hat{\nu}_n = C_n(\hat{\Delta}_{U,n} - \hat{\Delta}_{L,n})/(2\tau_{\alpha/2})$, and we propose

$I_{N_d} = [\hat{\Delta}_{L,N_d},\ \hat{\Delta}_{U,N_d}]$  (4.19)

as the desired confidence interval for $\Delta^0$. Note that by (4.18), $I_{N_d}$ has width $\leq 2d$, while it remains to show that $P\{\Delta^0 \in I_{N_d}\} \to 1 - \alpha$ as $d \to 0$. Our interest centers around the asymptotic properties of $N_d$.
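The stopping rule (4.18) can be simulated directly: sample one observation at a time and stop as soon as the interval $[\hat{\Delta}_{L,n}, \hat{\Delta}_{U,n}]$ has width at most $2d$. The sketch below is our illustration only (Huber-type $\psi$, $c_i \equiv 1$, arbitrary names); the guaranteed width bound and the rough agreement of $N_d$ with $(\tau_{\alpha/2}\nu/d)^2$ are the points of interest.

```python
import math, random

def psi(x, k=1.345):
    return max(-k, min(k, x))

def S(a, x):
    return sum(psi(xi - a) for xi in x)  # c_i = 1 throughout

def cross(x, level, lo=-50.0, hi=50.0):
    # bisection for the point where the nonincreasing S(.) crosses `level`
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if S(mid, x) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sequential_interval(sample, d, tau=1.96, n0=10):
    """Chow-Robbins-type rule (4.18): stop at the first n >= n0 with
    Delta_U(n) - Delta_L(n) <= 2d; return (N_d, lower, upper)."""
    x = []
    while True:
        x.append(sample())
        n = len(x)
        if n < n0:
            continue
        dhat = cross(x, 0.0)
        r = [psi(xi - dhat) for xi in x]
        s = math.sqrt(max(sum(v * v for v in r) / n - (sum(r) / n) ** 2, 0.0))
        Cn = math.sqrt(n)
        lower = cross(x, tau * Cn * s)
        upper = cross(x, -tau * Cn * s)
        if upper - lower <= 2 * d:
            return n, lower, upper

random.seed(3)
delta0 = 0.5
N, lower, upper = sequential_interval(lambda: delta0 + random.gauss(0, 1), d=0.2)
```

For standard normal errors and the Huber score, $\nu(\psi, F) \approx 1.03$, so Theorem 4.1 suggests $N_d$ near $(1.96 \times 1.03/0.2)^2 \approx 100$ here.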

Theorem 4.1. Under (4.1), (4.2), (4.4) and (4.12), as $d \to 0$,

$N_d/n_d \to 1$, in probability,  (4.20)

$d^2 N_d \overset{P}{\to} e^2 = [\tau_{\alpha/2}\,\nu(\psi, F)/C]^2 = \lim_{d\to 0}(d^2 n_d)\ (< \infty)$,  (4.21)

$P\{\Delta^0 \in I_{N_d}\} \to 1 - \alpha$.  (4.22)

Remark. In the Chow-Robbins (1965) procedure, (4.20)-(4.21) have been established up to a.s. convergence as well as convergence in the first mean; parallel results for rank estimates are due to Ghosh and Sen (1972). But the latter authors needed considerably more stringent regularity conditions on the score functions for such stronger results. Here also, under stronger regularity conditions on the $c_i$ and $\psi$, such a.s. results can be established. However, we do not intend to pursue these results.

Proof of Theorem 4.1. For every $\varepsilon > 0$ and $d > 0$, let $n_{d\varepsilon} = \min\{n: dC_n \geq (1 + \varepsilon)^{1/2}\tau_{\alpha/2}\,\nu(\psi, F)\}$. Then, by (4.18),

$P\{N_d/n_d > 1 + \varepsilon\} = P\{N_d > n_{d\varepsilon}\} = P\{\hat{\Delta}_{U,n} - \hat{\Delta}_{L,n} > 2d,\ \forall n_0 \leq n \leq n_{d\varepsilon}\} \leq P\{\hat{\Delta}_{U,n_{d\varepsilon}} - \hat{\Delta}_{L,n_{d\varepsilon}} > 2d\}$.  (4.23)

By (4.1), (4.7) and the definition of $n_{d\varepsilon}$, $2dC_{n_{d\varepsilon}} \to (1 + \varepsilon)^{1/2}\cdot 2\tau_{\alpha/2}\,\nu(\psi, F) > 2\tau_{\alpha/2}\,\nu(\psi, F)$, so that by (4.13) and (4.23), $P\{N_d/n_d > 1 + \varepsilon\} \to 0$ as $d \to 0$. Similarly, for every $\varepsilon > 0$, $P\{N_d/n_d < 1 - \varepsilon\} \to 0$ as $d \to 0$, and hence, (4.20) holds. Then, (4.21) follows from (4.20), (4.1) and (4.7).

To prove (4.22), we define $Z_n = \{Z_n(t) = S_{n(t)}(\Delta^0)/\sigma_0 C_n,\ t \in K_1^*\}$, where

$n(t) = \max\{k: C_k^2 \leq t C_n^2\}$, $t \in K_1^*$.  (4.24)

Since $X_i^0 = X_i - \Delta^0 c_i$, $i \geq 1$, are i.i.d. rv, by the same technique as in the proof of Theorem 2.3, it follows by some routine steps that under (4.1), (4.2), (4.4) and (4.12), $\{Z_n\}$ converges weakly to a standard Wiener process $\xi = \{\xi(t),\ t \in K_1^*\}$. This insures that

$\sup_{t \in K_1^*} |Z_n(t)| = O_p(1)$ and $\lim_{\delta \to 0} \sup_{0 < s \leq t \leq s + \delta \leq 1} |Z_n(t) - Z_n(s)| = 0$, in prob.  (4.25)

Also, we now appeal to Theorem 2.3, where we take $c_{ki} = c_i$ and $d_{ki} = c_i/C_k$ for $1 \leq i \leq k$, $k \leq k_n$; then (2.18) and (2.19) hold.  (4.26)

Finally, by (4.1), (2.18) reduces to $g(s, t) = s \wedge t$, $\forall s, t \in K_1^*$. Hence, from Theorem 2.3, (4.3) and (4.23), we conclude that for every $t_0 > 0$ ($0 < t_0 < k_1^*$), as $n \to \infty$,

$\sup_{t_0 \leq t \leq k_1^*} C_n|\hat{\Delta}_{n(t)} - \Delta^0| = O_p(1)$,  (4.27)

$\sup_{t_0 \leq t \leq k_1^*} |C_n^{-1} C_{n(t)}^2(\hat{\Delta}_{n(t)} - \Delta^0) - Z_n(t)\,\nu(\psi, F)| \overset{P}{\to} 0$,  (4.28)

$\sup_{t_0 \leq s < t \leq s + \delta \leq k_1^*} |C_n^{-1}\{C_{n(t)}^2(\hat{\Delta}_{n(t)} - \Delta^0) - C_{n(s)}^2(\hat{\Delta}_{n(s)} - \Delta^0)\}| \overset{P}{\to} 0$ as $\delta \to 0$.  (4.29)

Similarly, by Theorem 2.3, (4.11), (4.12), (4.24), (4.26), (4.27) and (4.28),

$\sup_{t_0 \leq t \leq k_1^*} |C_n^{-1} C_{n(t)}\{C_{n(t)}(\hat{\Delta}_{L,n(t)} - \hat{\Delta}_{n(t)}) + \tau_{\alpha/2}\,\nu(\psi, F)\}| \overset{P}{\to} 0$,  (4.30)

$\sup_{t_0 \leq t \leq k_1^*} |C_n^{-1} C_{n(t)}\{C_{n(t)}(\hat{\Delta}_{U,n(t)} - \hat{\Delta}_{n(t)}) - \tau_{\alpha/2}\,\nu(\psi, F)\}| \overset{P}{\to} 0$,  (4.31)

$\sup_{t_0 \leq s < t \leq s + \delta \leq k_1^*} |C_n^{-1}\{C_{n(t)}^2(\hat{\Delta}_{L,n(t)} - \Delta^0) - C_{n(s)}^2(\hat{\Delta}_{L,n(s)} - \Delta^0)\}| \overset{P}{\to} 0$ (as $\delta \to 0$),  (4.32)

and, similarly,

$\sup_{t_0 \leq s < t \leq s + \delta \leq k_1^*} |C_n^{-1}\{C_{n(t)}^2(\hat{\Delta}_{U,n(t)} - \Delta^0) - C_{n(s)}^2(\hat{\Delta}_{U,n(s)} - \Delta^0)\}| \overset{P}{\to} 0$ (as $\delta \to 0$).  (4.33)

By virtue of (4.20), (4.32) and (4.33), as $d \to 0$,

$C_{n_d}^{-1}|C_{N_d}^2(\hat{\Delta}_{U,N_d} - \hat{\Delta}_{L,N_d}) - C_{n_d}^2(\hat{\Delta}_{U,n_d} - \hat{\Delta}_{L,n_d})| \overset{P}{\to} 0$.  (4.34)

Hence, by (4.13) and (4.30)-(4.31), as $d \to 0$,

$C_{n_d}(\hat{\Delta}_{U,n_d} - \hat{\Delta}_{L,n_d}) \overset{P}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F)$.  (4.35)

Also, from (4.20), (4.28) and (4.29), as $d \to 0$,

$C_{N_d}(\hat{\Delta}_{N_d} - \Delta^0)/\nu(\psi, F) \simeq C_{n_d}(\hat{\Delta}_{n_d} - \Delta^0)/\nu(\psi, F) \simeq Z_{n_d}(1)$,  (4.36)

where $Z_{n_d}(1)$ is asymptotically (as $d \to 0$) $N(0, 1)$. Finally, by (4.32)-(4.33) and (4.20), as $d \to 0$,

$C_{n_d}^{-1}|C_{N_d}^2(\hat{\Delta}_{U,N_d} - \Delta^0) - C_{n_d}^2(\hat{\Delta}_{U,n_d} - \Delta^0)| \overset{P}{\to} 0$.  (4.37)

Thus, $\hat{\nu}_{N_d} \overset{P}{\to} \nu(\psi, F)$ as $d \to 0$ (where $\hat{\nu}_n = C_n(\hat{\Delta}_{U,n} - \hat{\Delta}_{L,n})/(2\tau_{\alpha/2})$) and hence, by (4.36) and (4.37),

$\lim_{d\to 0} P\{\Delta^0 \in I_{N_d}\} = 1 - \alpha$.  (4.38)

This completes the proof of the theorem.

Note that by (4.13) and (4.30)-(4.31), for every $t_0 > 0$, as $n \to \infty$,

$\sup_{t_0 \leq t \leq k_1^*} |C_n^{-1} C_{n(t)}\{\hat{\nu}_{n(t)} - \nu(\psi, F)\}| \overset{P}{\to} 0$.  (4.39)

We now appeal to Theorem 2.3 to present an invariance principle relating to estimators of $\gamma_1(\psi, F)$. Note that by (4.6), (4.13) and (4.17), we have

$\hat{B}_n = s_n/\hat{\nu}_n = 2\tau_{\alpha/2} s_n / \{C_n(\hat{\Delta}_{U,n} - \hat{\Delta}_{L,n})\} \overset{P}{\to} \gamma_1(\psi, F)$.  (4.40)

We intend to study the asymptotic behavior of $\{\hat{B}_{n(t)} - \gamma_1(\psi, F),\ t \in K_1^*\}$.

For this, we note that by (4.14), (4.15), (4.16) and (4.27), we may improve (4.17) to the following: for every $t_0 > 0$, as $n \to \infty$,

$\sup_{t_0 \leq t \leq k_1^*} |s_{n(t)}^2 - \sigma_0^2| \overset{P}{\to} 0$,  (4.41)

where $n(t)$ is defined by (2.12). [Actually, in (4.15)-(4.16), we may replace $w_n^*$ by $\sup\{w_{n(t)}^*:\ t_0 \leq t \leq k_1^*\}$, which by (4.27) $\overset{P}{\to} 0$ as $n \to \infty$.] From (4.39), (4.40) and (4.41), we obtain that as $n \to \infty$,

$\sup_{t_0 \leq t \leq k_1^*} |C_n^{-1} C_{n(t)}\{\hat{B}_{n(t)} - \gamma_1(\psi, F)\}| \overset{P}{\to} 0$, $\forall t_0 > 0$.  (4.42)

This, in turn, insures [by (4.20)] that

$\hat{B}_{N_d} - \gamma_1(\psi, F) \overset{P}{\to} 0$, as $d \to 0$.  (4.43)

To obtain results deeper than (4.41) and (4.43), we note that, by virtue of Theorems 2.1 and 2.3 (where, as in (4.24)-(4.37), we take $c_{ki} = c_i$ and $d_{ki} = c_i/C_k$, $1 \leq i \leq k$; $k \geq 1$) and (4.30)-(4.31), for every $t_0 > 0$,

$\sup_{t_0 \leq t \leq k_1^*}\{|W_n^*(t, b_n(t)) - W_n^*(t, a_n(t) + 2c)|\} \overset{P}{\to} 0$,  (4.44)

where $a_n(t) = C_{n(t)}(\hat{\Delta}_{L,n(t)} - \Delta^0)$, $b_n(t) = C_{n(t)}(\hat{\Delta}_{U,n(t)} - \Delta^0)$ and $c = \tau_{\alpha/2}\,\nu(\psi, F)$. Also, by Theorem 2.3, as $n \to \infty$,

$\{W_n^*(t, a_n(t)) - W_n^*(t, a_n(t) + 2c),\ t_0 \leq t \leq k_1^*\} \overset{\mathcal{D}}{\to} 2c\{\xi(t),\ t_0 \leq t \leq k_1^*\}$.  (4.45)

From (4.44) and (4.45), we have [by (4.11)]

$\left\{\frac{1}{\sigma_1 A_n}\left[2\tau_{\alpha/2} C_{n(t)} s_{n(t)} - \gamma_1(\psi, F) C_{n(t)}^2(\hat{\Delta}_{U,n(t)} - \hat{\Delta}_{L,n(t)})\right],\ t_0 \leq t \leq k_1^*\right\} \overset{\mathcal{D}}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F)\{\xi(t),\ t_0 \leq t \leq k_1^*\}$,  (4.46)

where we assume the regularity conditions of Theorem 2.2 and, further, that, defining $h_n$ by (2.15),

$\lim_{n\to\infty} h_n(t) = 0$, $\forall\ t_0 \leq t \leq k_1^*$.  (4.47)

By using (4.39), (4.40) and (4.46), we have

$\{[C_{n(t)}^2/(\nu(\psi, F)\sigma_1 A_n)]\{\hat{B}_{n(t)} - \gamma_1(\psi, F)\},\ t_0 \leq t \leq k_1^*\} \overset{\mathcal{D}}{\to} \{\xi(t),\ t_0 \leq t \leq k_1^*\}$,  (4.48)

so that from (4.46) and (4.48), we obtain that as $n \to \infty$,

$\left\{\left[\sum_{i=1}^{n} c_i^4\right]^{-1/2} C_{n(t)}^2\{\hat{B}_{n(t)} - \gamma_1(\psi, F)\},\ t_0 \leq t \leq k_1^*\right\} \overset{\mathcal{D}}{\to} \{\sigma_1\xi(t),\ t_0 \leq t \leq k_1^*\}$,  (4.50)

for every $0 < t_0 < k_1^* < \infty$. From (4.21) and (4.48), we conclude that as $d \to 0$,

$C_{n_d}^2\left[\sum_{i=1}^{n_d} c_i^4\right]^{-1/2}\{\hat{B}_{N_d} - \gamma_1(\psi, F)\}/\sigma_1 \overset{\mathcal{D}}{\to} N(0, 1)$.  (4.51)

Let us now define

$\tilde{s}_n^2 = n^{-1}\sum_{i=1}^{n} \psi^2(X_i - \Delta^0 c_i) - \left(n^{-1}\sum_{i=1}^{n} \psi(X_i - \Delta^0 c_i)\right)^2$, $n \geq 1$.  (4.52)

By (4.1) and assuming that $\int_{-\infty}^{\infty} \psi^4(x)\,dF(x) < \infty$, we may repeat the proof of Theorem 2.1 and show that for every $\varepsilon > 0$ and $K < \infty$,

$\lim_{n\to\infty} P\{\sup_{t_0 \leq t \leq k_1^*} C_{n(t)}|s_{n(t)} - \tilde{s}_{n(t)}| > \varepsilon\} = 0$.  (4.53)

Also, from (4.46), as $n \to \infty$,

$\{(\gamma_1(\psi, F)/\sigma_1 A_n)\{C_{n(t)}[s_{n(t)}/\sigma_0 - \hat{\nu}_{n(t)}/\nu(\psi, F)]\},\ t_0 \leq t \leq k_1^*\} \overset{\mathcal{D}}{\to} \{\xi(t),\ t_0 \leq t \leq k_1^*\}$.  (4.54)

By (4.1), (4.53) and (4.54),

$\{(\gamma_1(\psi, F)/\sigma_1 A_n)\{C_{n(t)}[\tilde{s}_{n(t)}/\sigma_0 - \hat{\nu}_{n(t)}/\nu(\psi, F)]\},\ t_0 \leq t \leq k_1^*\} \overset{\mathcal{D}}{\to} \{\xi(t),\ t_0 \leq t \leq k_1^*\}$.  (4.55)

Now $\{n(n - 1)^{-1}\tilde{s}_n^2,\ n \geq 2\}$ is a sequence of U-statistics of degree 2, and the weak convergence of partial sequences to Wiener processes

follows from Miller and Sen (1972). If we let

$\sigma_2^2 = \int_{-\infty}^{\infty} \psi^4(x)\,dF(x) - \left(\int_{-\infty}^{\infty} \psi^2(x)\,dF(x)\right)^2$,  (4.56)

then, using the decomposition (2.18) and (3.1)-(3.5) of Miller and Sen (1972) along with the proof of our Theorem 2.1, it follows that (when (4.1) holds) $\{(C_{n(t)}^2(\tilde{s}_{n(t)}^2 - \sigma_0^2)/(C_n\sigma_2),\ Z_n(t)),\ t \in K_1^*\}$ weakly converges to $\{(\xi_1(t), \xi_2(t)),\ t \in K_1^*\}$, where $Z_n$ is defined after (4.23) and $\xi_1$ and $\xi_2$ are independent copies of a standard Wiener process. Thus, if we assume that

$\lim_{n\to\infty}\left(\sum_{i=1}^{n} c_i^4\right)/C_n^2 = C_0^2\ (< \infty)$,  (4.57)

then from (4.55) and the above discussion, it follows that as $n \to \infty$,

$\{(\gamma_1(\psi, F)/\sigma_1 C_0)\{C_{n(t)}[\hat{\nu}_{n(t)}/\nu(\psi, F) - 1]\},\ t_0 \leq t \leq k_1^*\} \overset{\mathcal{D}}{\to} \{\xi_1(t) + (\gamma_1(\psi, F)\sigma_2/2\sigma_1 C_0)\,t\,\xi_2(t),\ t_0 \leq t \leq k_1^*\}$,  (4.58)

for every $0 < t_0 < k_1^* < \infty$. By (4.20) and (4.58),

$C_{n_d}[\hat{\nu}_{N_d}/\nu(\psi, F) - 1] \overset{\mathcal{D}}{\to} N(0, \sigma^{*2})$,  (4.59)

where

$\sigma^{*2} = \frac{C_0^2\sigma_1^2}{\gamma_1^2(\psi, F)}\left[1 + \frac{\sigma_2^2}{4\sigma_0^4}\right]$.  (4.60)

Note that by (4.7) and (4.18), as $d \to 0$, $\forall n \geq n_0$,

$P\{N_d > n\} = P\{\hat{\nu}_m/\nu(\psi, F) \geq C_m/C_{n_d},\ \forall n_0 \leq m \leq n\}$,  (4.61)

and by (4.21), (4.61) converges to 1 or 0 according as $n/n_d$ converges to a limit less than or greater than 1. If we let $n = n_d + O(n_d^{1/2})$, the right-hand side will have a limit different from 0 and 1. In fact, using (4.32), (4.33), (4.56), (4.59) and (4.60), it follows from the above that as $d \to 0$,

$\lim_{d\to 0} P\{d^{-1}(N_d/n_d - 1) \leq y\} = \Phi(yC_0/\sigma^*)$, $\forall y \in R$,  (4.62)

where $\Phi$ is the standard normal df and $C_0$ and $\sigma^*$ are defined by (4.57) and (4.60). This leads us to the following.

Theorem 4.2. Under (4.1), (4.2), (4.4), (4.12), (4.56) and (4.57),

$\lim_{d\to 0} P\{d^{-1}(N_d/n_d - 1) \leq y\} = \Phi(yC_0/\sigma^*)$, $\forall y \in R$,

where $n_d$, $N_d$ are defined by (4.7) and (4.18), and $C_0$, $\sigma^*$ by (4.57) and (4.60).

5. SEQUENTIAL TESTS FOR $\Delta^0$ BASED ON M-ESTIMATORS

As in Section 4, we consider the regression model $X_i = \Delta^0 c_i + X_i^0$, $i \geq 1$, and wish to test

$H_0: \Delta^0 = 0$ against $H_1: \Delta^0 = \Delta\ (> 0)$,  (5.1)

where $\Delta$ is specified. [In (5.1), we may let $H_0: \Delta^0 = \Delta_0$ for any specified $\Delta_0$. But, then, working with $X_i - \Delta_0 c_i$, $i \geq 1$, we reduce it to (5.1).] Some sequential tests for (5.1) based on linear rank statistics and derived estimators have been considered by Ghosh and Sen (1977). Because of the close relationship of M- and R-estimators, led by their motivation, we may consider the following sequential M-test.

As in (4.3), let $S_n(a) = \sum_{i=1}^{n} c_i\psi(X_i - ac_i)$, $a \in R^1$, $n \geq 1$, and define $s_n^2$ and $\hat{B}_n$ as in (4.10) and (4.40). Let then $\alpha_1, \alpha_2$ ($0 < \alpha_1, \alpha_2 < 1$; $0 < \alpha_1 + \alpha_2 < 1$) be the desired type I and type II error probabilities. Consider two numbers $B\ (\geq \alpha_2/(1 - \alpha_1))$ and $A\ (\leq (1 - \alpha_2)/\alpha_1)$ (so that $0 < B < 1 < A < \infty$) and define $a = \log A$, $b = \log B$ ($-\infty < b < 0 < a < \infty$). Then, we start with an initial sample of size $n_0(= n_0(\Delta))$ and continue sampling as long as

$bs_n^2 < \Delta\hat{B}_n S_n(\tfrac{1}{2}\Delta) < as_n^2$.  (5.2)

Define the stopping variable $N(= N(\Delta))$ by

$N(\Delta) = \min\{n \geq n_0(\Delta):\ \Delta\hat{B}_n S_n(\tfrac{1}{2}\Delta)/s_n^2 \notin (b, a)\}$  (5.3)

(we allow $N(\Delta) = +\infty$ if (5.2) holds for all $n \geq n_0(\Delta)$). Then, we stop sampling after having $N(\Delta)$ observations, and accept $H_0: \Delta^0 = 0$ or $H_1: \Delta^0 = \Delta\ (> 0)$ according as $\Delta\hat{B}_{N(\Delta)} S_{N(\Delta)}(\tfrac{1}{2}\Delta)$ is $\leq bs_{N(\Delta)}^2$ or $\geq as_{N(\Delta)}^2$.

Note that for every fixed $\Delta^0$ and $\Delta\ (> 0)$,

$P_{\Delta^0}\{N(\Delta) > n\} = P_{\Delta^0}\{bs_m^2 < \Delta\hat{B}_m S_m(\tfrac{1}{2}\Delta) < as_m^2,\ \forall n_0(\Delta) \leq m \leq n\}$
$\leq P_{\Delta^0}\{bs_n^2 < \Delta\hat{B}_n S_n(\tfrac{1}{2}\Delta) < as_n^2\}$
$= P_0\{bs_n^2 < \Delta\hat{B}_n S_n(\tfrac{1}{2}\Delta - \Delta^0) < as_n^2\}$.  (5.4)
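To see the test (5.2)-(5.3) in operation, here is a small simulation sketch. It is an editorial illustration only: the Huber score, the Wald-type boundaries and all names are our choices, and the estimator $\hat{B}_n$ of (4.40) is replaced, for simplicity, by a difference-quotient estimate of $\gamma_1(\psi, F)$.

```python
import math, random

def psi(x, k=1.345):
    return max(-k, min(k, x))

def sequential_m_test(sample, Delta, alpha1=0.05, alpha2=0.05, n0=20, nmax=100000):
    """Sequential M-test of H0: Delta0 = 0 vs H1: Delta0 = Delta, after
    (5.2)-(5.3), with c_i = 1. Returns (decision, sample size)."""
    a = math.log((1 - alpha2) / alpha1)   # upper boundary, log A
    b = math.log(alpha2 / (1 - alpha1))   # lower boundary, log B
    x, h = [], 1e-4
    for n in range(1, nmax + 1):
        x.append(sample())
        if n < n0:
            continue
        r = [psi(xi - 0.5 * Delta) for xi in x]
        Sn = sum(r)                                           # S_n(Delta/2)
        s2 = sum(v * v for v in r) / n - (sum(r) / n) ** 2    # s_n^2 analogue
        # crude difference-quotient stand-in for B_n, estimating gamma_1:
        g1 = sum((psi(xi + h) - psi(xi - h)) / (2 * h) for xi in x) / n
        stat = Delta * g1 * Sn / max(s2, 1e-12)
        if stat >= a:
            return "H1", n
        if stat <= b:
            return "H0", n
    return "H0", nmax

random.seed(4)
Delta = 0.5
decision_null, n_null = sequential_m_test(lambda: random.gauss(0, 1), Delta)
decision_alt, n_alt = sequential_m_test(lambda: Delta + random.gauss(0, 1), Delta)
```

Under $H_0$ the statistic drifts toward the lower boundary $b$, and under $H_1$ toward $a$, so the procedure typically terminates within a few dozen observations in this setting.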

When $\Delta^0 = \tfrac{1}{2}\Delta$, $S_n(0)/\sigma_0 C_n$ is asymptotically $N(0, 1)$ [see Section 4], where $C_n \to \infty$ as $n \to \infty$, while $\hat{B}_n \overset{P}{\to} \gamma_1(\psi, F)$ (finite) and $s_n^2 \overset{P}{\to} \sigma_0^2$. Hence, (5.4) converges to 0 as $n \to \infty$. If $\delta = \tfrac{1}{2}\Delta - \Delta^0 > 0$, then [as in Section 4, $\psi$ is $\nearrow$ and $S_n(a)$ is $\searrow$ in $a$] for every $K$ ($0 < K < \infty$), there exists an $n^*$, such that

$\delta \geq K/C_n$, $\forall n \geq n^*$,  (5.5)

which insures that $S_n(\delta) \leq S_n(K/C_n)$, $\forall n \geq n^*$, and hence, by Theorems 2.1-2.2, $C_n^{-1}[S_n(K/C_n) - S_n(0)] \overset{P}{\to} -K\gamma_1(\psi, F)$ as $n \to \infty$, while $\hat{B}_n \overset{P}{\to} \gamma_1(\psi, F)$ and $s_n^2 \overset{P}{\to} \sigma_0^2$. Since $KC_n\gamma_1(\psi, F) \to \infty$, the right-hand side of (5.4) again converges to 0. A similar case holds for $\delta < 0$. Thus,

$\lim_{n\to\infty} P_{\Delta^0}\{N(\Delta) > n\} = 0$,  (5.6)

so that the process terminates with probability 1.

We like to study the OC function of the proposed procedure. As in Ghosh and Sen (1977), we take recourse to the asymptotic case where we let $\Delta \to 0$. We assume that

Page 22: INVARIANCE PRINCIPLES FOR SOME STOCHASTIC …boos/library/mimeo.archive/ISMS_1979_1213.pdfINVARIANCE PRINCIPLES FOR SOME STOCHASTIC PROCESSES RELATING TO M-.ESTIMATORS AND THEIR ROLE

21

AO = 4>ll where ~ £ I ={u: lul:s; K} for some K > I, (5.7)

[Note that for fixed 40 (j 0), the OC will approach to 1 or 0 as 6 +0,

according as AOis < or > 0,] Further I we assume that. nO (ll) +00

such that

lim nO (s) = 00 but lim ll2no (ll) = 0 ,ll+O ll+O

(5.8)

,e

Finally, all the regularity conditions of Theorem 4.2 are assumed to

hold here. Let L_F(φ, Δ) be the OC function of the sequential

test in (5.2)-(5.3) when F is the underlying df and Δ⁰ = φΔ. Then,

we have the following.

THEOREM 5.1. Under the assumptions stated above, for every φ ∈ I,

lim_{Δ→0} L_F(φ, Δ) = L(φ), (5.9)

where L(φ) is the probability that a standard Wiener process with drift (φ − ½)/ν(ψ, F) and two absorbing barriers bν(ψ, F) and aν(ψ, F) is absorbed at the lower barrier.

Thus, the asymptotic strength of the test is (L(0) = 1 − α₁, L(1) = α₂)

for all F.

Proof. For an arbitrary ε (> 0), we define stopping variables N^ε_{ij}(Δ)

as the smallest positive integer (≥ n₀) for which

b(1 + (−1)^i ε)σ₀² < Δ γ₁(ψ, F) S_n(Δ/2) < a σ₀²(1 + (−1)^j ε) (5.10)

is not true, for i, j = 1, 2, and denote the associated OC functions

by L_{ij,F}(φ, Δ), i, j = 1, 2. Then, by virtue of (4.41) and (4.42), for

every ε > 0, there exists a Δ₀ (> 0), such that for all 0 < Δ ≤ Δ₀,

min_{i,j} L_{ij,F}(φ, Δ) ≤ L_F(φ, Δ) ≤ max_{i,j} L_{ij,F}(φ, Δ). (5.11)

Thus, if in (5.10) we let ε → 0 and denote the limiting stopping

variable and OC function by N⁰(Δ) and L⁰_F(φ, Δ) respectively, then

it suffices to show that (5.9) holds for L⁰_F(φ, Δ). Towards this, we

let n_Δ = min{n ≥ n₀(Δ): Δ² C_n² ≥ 1}, Δ > 0, and define {Z_{n_Δ}} as after

(4.23).


Then, by steps similar to those in (4.24)-(4.29), we conclude that,

as Δ → 0, for every 0 < ε < k* < ∞,

sup_{ε ≤ t ≤ k*} |S_{n_Δ(t)}(Δ/2) − S_{n_Δ(t)}(φΔ) − (φ − ½)Δ γ₁(ψ, F) C²_{n_Δ(t)}| →_P 0, (5.12)

and

Z_{n_Δ} = {Z_{n_Δ}(t) = S_{n_Δ(t)}(φΔ)/σ₀ C_{n_Δ}, ε ≤ t ≤ k*} ⇒ {W(t), ε ≤ t ≤ k*}, (5.13)

where

n_Δ(t) = max{k: C_k² ≤ t C²_{n_Δ}}.

Thus, when Δ⁰ = φΔ,

{S_{n_Δ(t)}(Δ/2)/σ₀ C_{n_Δ}, ε ≤ t ≤ k*} (5.14)

is asymptotically equivalent, in probability, to

{Z_{n_Δ}(t) + (φ − ½)t/ν(ψ, F), ε ≤ t ≤ k*}. (5.15)

By (5.13), (5.15) and the results of Dvoretzky, Kiefer and Wolfowitz

(1953), the desired result follows.

Under more restrictive regularity conditions on the score function,

Ghosh and Sen (1977) have studied, for sequential rank tests, the limit

of Δ² E N(Δ) (as Δ → 0). A similar limit in our case also demands more

restrictive conditions on ψ and the c_i. However, under (4.1),

the limiting distribution of Δ² N(Δ) is the same as that of the first

exit time of {W(t) + (φ − ½)t/ν(ψ, F), t ≥ ε} when we have two

absorbing barriers bν(ψ, F) and aν(ψ, F).
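The limiting exit behaviour can be checked by simulation. In the sketch below (illustrative only; not from the paper), the drift mu stands in for (φ − ½)/ν(ψ, F) and the barriers (b, a) for (bν(ψ, F), aν(ψ, F)); a Monte Carlo estimate of the lower-exit frequency is compared with the classical closed form for Brownian motion with drift.

```python
import math
import random

def hit_lower_prob(mu, b, a):
    """Closed form: P(W(t) + mu*t hits b before a), starting at 0, with b < 0 < a."""
    if mu == 0:
        return a / (a - b)
    e = math.exp
    return (e(-2 * mu * a) - 1) / (e(-2 * mu * a) - e(-2 * mu * b))

def simulate_exit(mu, b, a, dt=0.002, rng=random):
    """Euler walk of W(t) + mu*t until it leaves (b, a); True if it exits below."""
    x, sd = 0.0, math.sqrt(dt)
    while b < x < a:
        x += mu * dt + rng.gauss(0.0, sd)
    return x <= b

random.seed(2)
mu, b, a = 0.5, -1.0, 1.0
n_paths = 5000
frac = sum(simulate_exit(mu, b, a) for _ in range(n_paths)) / n_paths
print(round(frac, 3), round(hit_lower_prob(mu, b, a), 3))
```

The two printed values agree up to Monte Carlo and Euler-discretisation error; the closed form is what the OC function L(φ) reduces to in the limit experiment.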

We conclude with the remark that very recently Carroll (1977) has studied

the asymptotic normality of stopping times based on M-estimators, where he

needs the assumption that the score function is twice boundedly continuously

differentiable except at a finite number of points and that F is Lipschitz

of order one in neighbourhoods of these points. Our Theorem 4.2 remains valid

under weaker conditions, and its conclusions hold for a more

general model.


REFERENCES

BICKEL, P.J. and WICHURA, M.J. (1971). Convergence criteria for multiparameter stochastic processes and some applications. Ann. Math. Statist. 42, 1656-1670.

CARROLL, R.J. (1977). On the asymptotic normality of stopping times based on robust estimators. Sankhya, Ser. A, 39, 355-377.

CHOW, Y.S. and ROBBINS, H. (1965). On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Ann. Math. Statist. 36, 457-462.

DVORETZKY, A., KIEFER, J. and WOLFOWITZ, J. (1953). Sequential decision problems for processes with continuous time parameter. Testing hypotheses. Ann. Math. Statist. 24, 254-264.

GLESER, L.J. (1965). On the asymptotic theory of fixed size sequential confidence bounds for linear regression parameters. Ann. Math. Statist. 36, 463-467.

GHOSH, M. and SEN, P.K. (1972). On bounded length confidence interval for the regression coefficient based on a class of rank statistics. Sankhya, Ser. A, 34, 33-52.

GHOSH, M. and SEN, P.K. (1977). Sequential rank tests for regression. Sankhya, Ser. A, 39, 45-62.

HAJEK, J. and SIDAK, Z. (1967). Theory of Rank Tests. Academic Press, New York.

HUBER, P.J. (1973). Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Statist. 1, 799-821.

JURECKOVA, J. (1977). Asymptotic relations of M-estimates and R-estimates in linear regression model. Ann. Statist. 5, 464-472.

MILLER, R.G. and SEN, P.K. (1972). Weak convergence of U-statistics and von Mises' differentiable statistical functions. Ann. Math. Statist. 43, 31-41.