
Ann. Inst. Statist. Math. Vol. 43, No. 2, 357-367 (1991)

ESTIMATION OF A COMMON MEAN OF SEVERAL UNIVARIATE INVERSE GAUSSIAN POPULATIONS

MANZOOR AHMAD¹*, Y. P. CHAUBEY²* AND B. K. SINHA³

¹Département de Mathématique et Informatique, Université du Québec à Montréal, CP 8888, Succ. A, Montréal, Québec, Canada H3C 3P8

²Department of Mathematics, Concordia University, Montreal, Quebec, Canada H3G 1M8

³Department of Mathematics and Statistics, The University of Maryland, Baltimore, MD 21228, U.S.A.

(Received July 1, 1989; revised April 2, 1990)

Abstract. The problem of estimating the common mean $\mu$ of $k$ independent univariate inverse Gaussian populations $IG(\mu, \lambda_i)$, $i = 1, \ldots, k$, with unknown and unequal $\lambda$'s is considered. The difficulty with the maximum likelihood estimator of $\mu$ is pointed out, and a natural estimator $\hat{\mu}$ of $\mu$ along the lines of Graybill and Deal is proposed. Various finite sample properties and some decision-theoretic properties of $\hat{\mu}$ are discussed.

Key words and phrases: Inverse Gaussian population, Graybill-Deal type estimate, squared error loss, equivariant estimator, admissibility.

1. Introduction

Although the importance of the inverse Gaussian distribution as a mathematical model for analyzing positively skewed data emerged from the pioneering work of Tweedie (1957a, 1957b), it has received an enormous amount of attention since the publication of the review article of Folks and Chhikara (1978). Useful applications of the inverse Gaussian distribution have been demonstrated in the works of Sheppard (1962), Hasofer (1964), Lancaster (1972), Banerjee and Bhattacharya (1976) and Whitmore (1979, 1986a, 1986b). We refer to the recent book by Chhikara and Folks (1989) for various properties and applications of the inverse Gaussian distribution.

An inverse Gaussian distribution has striking similarities with, and provocative departures from, a normal distribution. For example, if $X \sim IG(\mu, \lambda)$, where $\mu = E(X)$ and $\mu^3/\lambda = V(X)$, and $X_1, \ldots, X_n$ are iid $\sim X$, then $\bar{X}$ and $U$, where $\bar{X} = n^{-1}\sum X_i$ and $U = \sum (X_i^{-1} - \bar{X}^{-1})$, are jointly minimal sufficient for $\mu$ and $\lambda$, complete and also independent. Moreover, $\bar{X} \sim IG(\mu, n\lambda)$ and $\lambda U \sim \chi^2_{n-1}$.

* This research was partially supported by research grants #A3661 and #A3450 from NSERC of Canada.

These results are analogous to those for a normal distribution. However, unlike in the normal case, in general an arbitrary linear combination $\sum a_iX_i$ does not follow an inverse Gaussian distribution, and while $\mu$ and $\lambda$ easily admit UMVUE's, the variance $\mu^3/\lambda$ does not (Korwar (1980), Iwase and Seto (1983)). This is a point of departure from the normal model. For some other similarities with and departures from the normal case, we refer to Letac et al. (1985), Pandey and Malik (1988), Bravo and MacGibbon (1988), Pal and Sinha (1989) and Hsieh et al. (1990).

In this paper we consider the problem of estimating the common mean $\mu$ of several inverse Gaussian populations with unknown and unequal variances. It should be noted that the analogous problem of estimating the common mean of several univariate normal populations with unknown and unequal variances has been extensively studied in the literature (Graybill and Deal (1959), Brown and Cohen (1974), Khatri and Shah (1974), Bhattacharya (1980) and Nair (1986)).

Let $\{X_{ij}, j = 1, 2, \ldots, n_i\}$ be an iid sample from an inverse Gaussian distribution $IG(\mu, \lambda_i)$, $i = 1, 2, \ldots, k$, where $\mu > 0$ is an unknown common mean and $\lambda_1, \ldots, \lambda_k > 0$ are unknown and unequal. For the estimation of the common mean $\mu$ when the $\lambda$'s are known, it can easily be shown that $\hat{\mu} = \sum \phi_i\bar{X}_i$ is the UMVUE of $\mu$, where $\phi_i = n_i\lambda_i/\sum_j n_j\lambda_j$, $i = 1, 2, \ldots, k$. We study this problem for the case of unknown $\lambda$'s. The set of mutually independent statistics $\{(\bar{X}_i, U_i), i = 1, 2, \ldots, k\}$ is jointly minimal sufficient, where

$$\bar{X}_i = \frac{1}{n_i}\sum_j X_{ij}, \qquad U_i = \sum_j (X_{ij}^{-1} - \bar{X}_i^{-1}), \qquad i = 1, \ldots, k.$$

As noted before, $\bar{X}_i \sim IG(\mu, n_i\lambda_i)$ and $\lambda_iU_i \sim \chi^2_{\nu_i}$, $\nu_i = n_i - 1$, $i = 1, 2, \ldots, k$. The joint distribution of $\bar{X} = (\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_k)$ and $U = (U_1, U_2, \ldots, U_k)$ is not complete, and thus it does not permit the existence of a UMVUE of $\mu$.
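As a quick illustration of this setup, the sketch below simulates $k$ inverse Gaussian samples with a common mean and computes the minimal sufficient pairs $(\bar{X}_i, U_i)$. It is a minimal sketch, not part of the paper: the parameter values are illustrative, and NumPy's `wald` sampler is used since it shares the $(\text{mean}, \lambda)$ parameterization of $IG(\mu, \lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def ig_sample(mu, lam, n, rng):
    """Draw n observations from IG(mu, lam); numpy's Wald distribution
    uses the same (mean, shape) parameterization."""
    return rng.wald(mu, lam, size=n)

def sufficient_stats(x):
    """Return (xbar_i, U_i) for one sample, as defined above."""
    xbar = x.mean()
    u = np.sum(1.0 / x - 1.0 / xbar)
    return xbar, u

# Illustrative values (not from the paper): common mean mu = 2,
# unequal shapes lambda_i, sample sizes n_i.
mu, lams, ns = 2.0, [0.5, 1.0, 4.0], [15, 25, 40]
stats = [sufficient_stats(ig_sample(mu, lam, n, rng))
         for lam, n in zip(lams, ns)]
for (xbar, u), lam, n in zip(stats, lams, ns):
    # lam * U ~ chi^2_{n-1}, so lam * U should be of the order n - 1
    print(f"xbar = {xbar:.3f}, lam*U = {lam * u:.1f} (df = {n - 1})")
```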

In Section 2 we discuss estimation of $\mu$ by the method of maximum likelihood, and point out some difficulties with this method. In Section 3 we consider the unbiased estimator $\hat{\mu} = \sum \phi_i\bar{X}_i$, where

$$\phi_i = \phi_i(U_1, \ldots, U_k) = \alpha_iU_i^{-1}\Big/\sum_j \alpha_jU_j^{-1}, \qquad \alpha_i = n_i(n_i - 1), \quad i = 1, \ldots, k.$$

This estimator is similar to the one proposed by Graybill and Deal (1959) for the estimation of a common mean of $k$ normal populations. We refer to $\hat{\mu}$ as the Graybill-Deal type estimator of $\mu$ and give two alternative expressions for its variance, one in the form of an infinite series (as in the normal case given by Khatri and Shah (1974) and Nair (1986)) and the other as a finite linear combination of incomplete beta integrals. The latter is more convenient for actual computation of the variance using standard subroutines for integration. An unbiased estimator of the variance of $\hat{\mu}$ based on $\bar{X}$ and $U$ for the case $k \geq 3$ is also proposed. A difficulty for $k = 2$ is pointed out.

Certain necessary and sufficient conditions on the $k$ sample sizes $n_1, n_2, \ldots, n_k$ which guarantee that $\hat{\mu}$ has a smaller variance than each of the $\bar{X}_i$, $i = 1, 2, \ldots, k$, are given in Section 4. In Section 5 we investigate some decision-theoretic properties of $\hat{\mu}$ under squared error loss, and establish its admissibility within a suitable class. The inadmissibility of $\hat{\mu}$ under some a priori information about $\lambda_1, \ldots, \lambda_k$ is pointed out in Section 6.

2. Maximum likelihood estimation of $\mu$

The maximum likelihood estimator of $\mu$ is obtained by maximizing the likelihood function with respect to $\mu, \lambda_1, \ldots, \lambda_k$. Direct maximization of the likelihood function yields

(2.1) $$\hat{\mu} = \frac{\sum_{i=1}^k n_i\bar{x}_i\hat{\lambda}_i}{\sum_{i=1}^k n_i\hat{\lambda}_i}, \qquad \hat{\lambda}_i = \frac{n_i}{u_i + n_i\bar{x}_i(\bar{x}_i^{-1} - \hat{\mu}^{-1})^2}, \quad i = 1, \ldots, k,$$

thus resulting in the following equation for $\hat{\mu}$:

(2.2) $$\sum_{i=1}^k \frac{n_i^2\bar{x}_i^3}{u_i\hat{\mu}^2\bar{x}_i^2 + n_i\bar{x}_i(\bar{x}_i - \hat{\mu})^2} = \hat{\mu}\sum_{i=1}^k \frac{n_i^2\bar{x}_i^2}{u_i\hat{\mu}^2\bar{x}_i^2 + n_i\bar{x}_i(\bar{x}_i - \hat{\mu})^2}.$$

In general, the above leads to an equation involving a polynomial of degree $2k - 1$ in $\hat{\mu}$, and hence computation of the MLE of $\mu$ is quite complicated. For $k = 2$, (2.2) yields the following cubic equation in $\hat{\mu}$:

(2.3) $$g(\hat{\mu}) = a\hat{\mu}^3 - b\hat{\mu}^2 + c\hat{\mu} - d = 0$$

where

$$a = n_1n_2(n_1\bar{x}_1 + n_2\bar{x}_2) + \bar{x}_1\bar{x}_2(n_1^2u_2 + n_2^2u_1),$$
$$b = 2n_1n_2\bar{x}_1\bar{x}_2(n_1 + n_2) + n_1n_2(n_1\bar{x}_1^2 + n_2\bar{x}_2^2) + \bar{x}_1\bar{x}_2(n_1^2\bar{x}_1u_2 + n_2^2\bar{x}_2u_1),$$
$$c = n_1n_2\bar{x}_1\bar{x}_2(n_1 + n_2)(\bar{x}_1 + \bar{x}_2) + n_1n_2\bar{x}_1\bar{x}_2(n_1\bar{x}_1 + n_2\bar{x}_2),$$
$$d = n_1n_2\bar{x}_1^2\bar{x}_2^2(n_1 + n_2).$$

It is not difficult to verify that for $\bar{x}_1 < \bar{x}_2$, $g(\bar{x}_1) < 0$ and $g(\bar{x}_2) > 0$, while for $\bar{x}_1 > \bar{x}_2$, $g(\bar{x}_1) > 0$ and $g(\bar{x}_2) < 0$. Hence the above equation has at least one root between $\bar{x}_1$ and $\bar{x}_2$. A sufficient condition for the uniqueness of the root of (2.3) is $b^2 - 3ac < 0$ (see Uspensky (1948), pp. 86-87). Note that this condition holds with probability one as $n_1$ or $n_2 \to \infty$.
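Numerically, rather than forming the polynomial explicitly, one can profile out the $\lambda_i$'s via (2.1) and search for a root of the resulting score equation between the smallest and largest sample means, where the argument above guarantees one lies. The sketch below assumes SciPy's `brentq` root finder and hypothetical data; for $k = 2$ the root it returns is a root of the cubic (2.3).

```python
import numpy as np
from scipy.optimize import brentq

def mle_common_mean(xbar, u, n):
    """Solve the likelihood equation (2.2) for the common mean.

    xbar, u, n: arrays of sample means, statistics U_i, and sizes n_i.
    Uses the profiled lambda_i from (2.1) and brackets the root of the
    score between min(xbar) and max(xbar)."""
    xbar, u, n = map(np.asarray, (xbar, u, n))

    def lam_hat(m):
        return n / (u + n * xbar * (1.0 / xbar - 1.0 / m) ** 2)

    def score(m):  # proportional to dlogL/dmu at the profiled lambdas
        return np.sum(n * lam_hat(m) * (xbar - m))

    lo, hi = xbar.min(), xbar.max()
    if np.isclose(lo, hi):
        return lo
    return brentq(score, lo, hi)  # score(lo) > 0 > score(hi)

# Hypothetical k = 2 data
print(mle_common_mean([1.8, 2.3], [4.0, 9.5], [12, 20]))
```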

Once $\hat{\mu}$ is obtained from (2.2), the MLE's of the $\lambda_i$'s are easily obtained from (2.1).

Let $\theta = (\mu, \lambda_1, \ldots, \lambda_k)$ and $\hat{\theta}$ be the MLE of $\theta$. It is straightforward to derive the information matrix $I(\theta)$ and thus conclude that the asymptotic covariance matrix of $\hat{\theta}$ is given by

(2.4) $$I^{-1}(\theta) = \operatorname{diag}\left(\frac{\mu^3}{\sum_{i=1}^k n_i\lambda_i}, \frac{2\lambda_1^2}{n_1}, \ldots, \frac{2\lambda_k^2}{n_k}\right).$$


Here $\hat{\mu}, \hat{\lambda}_1, \ldots, \hat{\lambda}_k$ are all mutually independent. Moreover, the MLE of $\mu$ has the same asymptotic variance as that of the estimator in (2.1) with the $\lambda_i$'s assumed known. A better limiting distribution of $\hat{\mu}$ compared to the normal approximation can be obtained by adhering to the fact that, as $n_i \to \infty$,

$$\frac{\sum_{i=1}^k n_i\hat{\lambda}_i\bar{X}_i}{\sum_{i=1}^k n_i\hat{\lambda}_i} - \frac{\sum_{i=1}^k n_i\lambda_i\bar{X}_i}{\sum_{i=1}^k n_i\lambda_i} \longrightarrow 0 \quad \text{a.s.}$$

Since $\sum_{i=1}^k n_i\lambda_i\bar{X}_i/\sum_{i=1}^k n_i\lambda_i$ has an exact $IG$ distribution with parameters $(\mu, \sum_{i=1}^k n_i\lambda_i)$, the limiting distribution of $\hat{\mu}$ can be approximated by $IG(\mu, \sum_{i=1}^k n_i\hat{\lambda}_i)$.

3. Graybill-Deal type unbiased estimator of $\mu$ and its variance

It is easy to verify that when the parameters $\lambda_i$ are known, $\hat{\mu} = \sum_{i=1}^k n_i\lambda_i\bar{X}_i/\sum_{i=1}^k n_i\lambda_i$ is the UMVUE of $\mu$. It is also the MLE of $\mu$ (see (2.1)). Moreover, $\hat{\mu}$ is a complete sufficient statistic for $\mu$, and $\hat{\mu} \sim IG(\mu, \sum_{i=1}^k n_i\lambda_i)$. In what follows, when the $\lambda$'s are unknown, we replace them by suitable estimators and consider the resultant $\hat{\mu}$. Writing $\lambda_i = 1/\sigma_i^2$, $i = 1, 2, \ldots, k$, and noting that $U_i \sim \sigma_i^2\chi^2_{(n_i-1)}$, $i = 1, 2, \ldots, k$, we propose the pooled estimator $\hat{\mu}$ of $\mu$ given by

(3.1) $$\hat{\mu} = \left\{\sum_{i=1}^k \frac{n_i(n_i-1)\bar{X}_i}{U_i}\right\}\Big/\left\{\sum_{j=1}^k \frac{n_j(n_j-1)}{U_j}\right\} = \left\{\sum_{i=1}^k \frac{\alpha_i\bar{X}_i}{U_i}\right\}\Big/\left\{\sum_{j=1}^k \frac{\alpha_j}{U_j}\right\}$$

with $\alpha_i = n_i(n_i - 1)$, $i = 1, 2, \ldots, k$. Using the fact that the $\bar{X}_i$'s and $U_i$'s are independent for all $i = 1, 2, \ldots, k$, it follows upon using a standard conditional argument that $E[\hat{\mu}] = \mu$ and

(3.2) $$\operatorname{Var}(\hat{\mu}) = \operatorname{Var}\{E(\hat{\mu} \mid U_1, \ldots, U_k)\} + E\{\operatorname{Var}(\hat{\mu} \mid U_1, \ldots, U_k)\} = \mu^3E\left[\frac{\sum_{i=1}^k \alpha_i^2/(n_i\lambda_iU_i^2)}{\left(\sum_{i=1}^k \alpha_i/U_i\right)^2}\right].$$
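The expectation in (3.2) has no simple closed form in general, but it involves only the $U_i$'s with $U_i \sim \lambda_i^{-1}\chi^2_{n_i-1}$, so it is straightforward to evaluate by Monte Carlo. A minimal sketch with illustrative parameter values (not from the paper):

```python
import numpy as np

def var_mu_hat_mc(mu, lams, ns, reps=200_000, rng=None):
    """Monte Carlo evaluation of Var(mu_hat) from (3.2):
    mu^3 * E[ sum_i a_i^2/(n_i lam_i U_i^2) / (sum_i a_i/U_i)^2 ],
    with U_i ~ (1/lam_i) * chi^2_{n_i - 1} and a_i = n_i (n_i - 1)."""
    rng = rng or np.random.default_rng(1)
    lams, ns = np.asarray(lams, float), np.asarray(ns)
    alpha = ns * (ns - 1)
    U = rng.chisquare(ns - 1, size=(reps, len(ns))) / lams
    num = np.sum(alpha**2 / (ns * lams * U**2), axis=1)
    den = np.sum(alpha / U, axis=1) ** 2
    return mu**3 * np.mean(num / den)

print(var_mu_hat_mc(2.0, [0.5, 1.0, 4.0], [15, 25, 40]))
```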

In the case $k = 2$, the variance of $\hat{\mu}$ can be expressed as an explicit function of the parameters $\mu$, $\sigma_i^2 = 1/\lambda_i$, and the sample sizes $n_i$, $i = 1, 2$, as in the case of two normal populations (see Khatri and Shah (1974) and Nair (1986)), and one gets

(3.3) $$\operatorname{Var}(\hat{\mu}) = \frac{\mu^3}{B(m_1/2, m_2/2)}\sum_{i=0}^{\infty}(i+1)(1-\rho)^i\left[\frac{1}{n_1\lambda_1}B\left(\frac{m_1}{2}+i, \frac{m_2}{2}+2\right) + \frac{\rho^2}{n_2\lambda_2}B\left(\frac{m_1}{2}+i+2, \frac{m_2}{2}\right)\right].$$


Here $m_i = n_i - 1$, $i = 1, 2$, $\rho = (n_1m_1\lambda_1)^{-1}n_2m_2\lambda_2$, and the expansion is valid for $0 < \rho \leq 1$. A similar expansion obtains for $\rho > 1$.

An alternative expression for $\operatorname{Var}(\hat{\mu})$, as a weighted finite sum of incomplete beta integrals, which may be more convenient for actual computation, can be derived when $n_i$, for $i = 1, 2$, is an integer of the form $2k_i + 3$ for some positive integer $k_i$. It can be shown under the same conditions, i.e., $0 < \rho \leq 1$, that

(3.4) $$\operatorname{Var}(\hat{\mu}) = \frac{\mu^3\rho^{k_2-1}}{n_1\lambda_1(1-\rho)^{k_1+k_2-1}B(k_1+1, k_2+1)}\sum_j c_jI_j,$$

where the coefficients $c_j$ are binomial coefficients in $k_1$ and $k_2$ and

$$I_j = (-1)^{k_2-j+2}\int_\rho^1 y^{j-2}(1-y)^{k_1+k_2+2-j}\,dy.$$

A similar integral representation of the variance of $\hat{\mu}$ exists for $\rho > 1$.

We now proceed to obtain an unbiased estimator of $\operatorname{Var}(\hat{\mu})$ for $k \geq 3$. Noting that $E[\bar{X}_i] = \mu$, $i = 1, 2, \ldots, k$, and that $\bar{X}_1, \ldots, \bar{X}_k$ are independent, it follows that an unbiased estimator of $\mu^3$ based on $\bar{X}_1, \ldots, \bar{X}_k$ is given by

(3.5) $$\widehat{\mu^3} = \binom{k}{3}^{-1}\sum_{i<j<l}\bar{X}_i\bar{X}_j\bar{X}_l.$$
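A direct implementation of (3.5), with hypothetical inputs (any $k \geq 3$ sample means work):

```python
import numpy as np
from itertools import combinations

def mu3_unbiased(xbars):
    """Unbiased estimator of mu^3 from (3.5): the average of
    xbar_i * xbar_j * xbar_l over all triples i < j < l (needs k >= 3)."""
    return np.mean([x * y * z for x, y, z in combinations(xbars, 3)])

print(mu3_unbiased([1.9, 2.1, 2.05, 1.95]))  # hypothetical sample means
```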

Now an independent unbiased estimator of

(3.6) $$\theta = E\left[\frac{\sum_{i=1}^k \alpha_i^2\sigma_i^2/(n_iU_i^2)}{\left(\sum_{i=1}^k \alpha_i/U_i\right)^2}\right]$$

can be obtained following the ideas of Sinha (1985). This is based on an application of a powerful identity for the chi-square distribution (see Haff (1979)).

Towards this end we write

(3.7) $$\theta = \sum_{i=1}^k \frac{\sigma_i^2}{n_i}E\left[\frac{\alpha_i^2/U_i^2}{\left(\sum_j \alpha_j/U_j\right)^2}\right]$$

and seek functions $\Psi_i(U_1, \ldots, U_k)$, free of the unknown $\sigma$'s, such that

(3.8) $$\frac{\sigma_i^2}{n_i}E\left[\frac{\alpha_i^2/U_i^2}{\left(\sum_j \alpha_j/U_j\right)^2}\right] = E[\Psi_i(U_1, \ldots, U_k)], \quad i = 1, 2, \ldots, k.$$

Clearly then an unbiased estimator of $\operatorname{Var}(\hat{\mu})$, given in (3.2), is provided by

(3.9) $$\widehat{\operatorname{Var}}(\hat{\mu}) = \left\{\binom{k}{3}^{-1}\sum_{i<j<l}\bar{X}_i\bar{X}_j\bar{X}_l\right\}\left\{\sum_{i=1}^k \Psi_i(U_1, \ldots, U_k)\right\}.$$

From Sinha (1985), it follows that

(3.10) $$\Psi_i(U_1, \ldots, U_k) = \sum_{l=0}^{\infty} a_l$$

where

(3.11) $$a_l = \frac{A_{(-i)}^lU_i^{l+1}2^l(l+1)!}{(n_i+1)^{[l]}(n_i + A_{(-i)}U_i)^{l+2}}$$

with

$$A_{(-i)} = \sum_{j\neq i}\frac{\alpha_j}{U_j}, \qquad (n+1)^{[l]} = (n+1)(n+3)\cdots(n+2l-1), \quad (n+1)^{[0]} = 1.$$

It is interesting to observe that the above unbiased estimate of the variance of $\hat{\mu}$ is always positive. Moreover, for computational purposes, if one uses

(3.12) $$\widehat{\operatorname{Var}}^{(m)}(\hat{\mu}) = \left\{\binom{k}{3}^{-1}\sum_{i<j<l}\bar{X}_i\bar{X}_j\bar{X}_l\right\}\left\{\sum_{i=1}^k \Psi_i^{(m)}(U_1, \ldots, U_k)\right\}$$

with $\Psi_i^{(m)} = \Psi_i^{(m)}(U_1, \ldots, U_k) = \sum_{l=0}^{m-1}a_l$, then it again follows from Sinha (1985) that the error of truncation decreases at the rate $n_i^{-(m+1)}$ in each $i$.

Hence, for moderately large values of the $n_i$'s, using $m$ as low as 2 or 3 one would get a good approximation to the unbiased estimator of the variance.
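The sketch below assembles the truncated estimator along the lines of (3.9)-(3.12). Note the hedge: the exact constants in (3.11) could not be fully recovered from the source, so the term formula coded here follows the reconstruction above and should be treated as illustrative, not definitive.

```python
import math
import numpy as np
from itertools import combinations

def psi_m(u, n, i, m_terms=3):
    """Truncated Psi_i^(m) from (3.12), summing the first m_terms terms
    a_l of (3.10)-(3.11) as reconstructed above (constants uncertain)."""
    u, n = map(np.asarray, (u, n))
    alpha = n * (n - 1)
    A = np.sum(np.delete(alpha, i) / np.delete(u, i))   # A_(-i)
    total, rising = 0.0, 1.0                            # (n_i + 1)^[0] = 1
    for l in range(m_terms):
        if l > 0:
            rising *= n[i] + 2 * l - 1                  # (n+1)(n+3)...(n+2l-1)
        total += (A**l * u[i]**(l + 1) * 2**l * math.factorial(l + 1)
                  / (rising * (n[i] + A * u[i])**(l + 2)))
    return total

def var_estimate(xbar, u, n, m_terms=3):
    """Variance estimator (3.9)/(3.12): the mu^3 estimator (3.5) times
    the sum of the Psi_i^(m)'s (k >= 3)."""
    mu3 = np.mean([x * y * z for x, y, z in combinations(xbar, 3)])
    return mu3 * sum(psi_m(u, n, i, m_terms) for i in range(len(n)))

print(var_estimate([1.9, 2.1, 2.05], [4.0, 9.5, 3.0], [12, 20, 10]))
```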

For $k = 2$, the above argument cannot be used because we do not readily have an unbiased estimator of $\mu^3$ based on just $\bar{X}_1$ and $\bar{X}_2$. However, noting that

$$E(\bar{X}_1 - \bar{X}_2)^2 = \mu^3\left(\frac{1}{n_1\lambda_1} + \frac{1}{n_2\lambda_2}\right),$$

one can write $\operatorname{Var}(\hat{\mu})$ as

(3.13) $$\operatorname{Var}(\hat{\mu}) = E(\bar{X}_1 - \bar{X}_2)^2\,E\left[\frac{\alpha_1^2\sigma_1^2/(n_1U_1^2) + \alpha_2^2\sigma_2^2/(n_2U_2^2)}{(\sigma_1^2/n_1 + \sigma_2^2/n_2)\left(\alpha_1/U_1 + \alpha_2/U_2\right)^2}\right].$$


An unbiased estimator of $\operatorname{Var}(\hat{\mu})$ can therefore be obtained once we have derived functions $\Psi_i(U_1, U_2)$, $i = 1, 2$, such that

(3.14) $$E[\Psi_i(U_1, U_2)] = E\left[\frac{\alpha_i^2\sigma_i^2/(n_iU_i^2)}{(\sigma_1^2/n_1 + \sigma_2^2/n_2)\left(\alpha_1/U_1 + \alpha_2/U_2\right)^2}\right], \quad i = 1, 2.$$

Unfortunately, however, it turns out that such $\Psi_i$'s do not exist. This is primarily because, based on two independent chi-square variables $V_1 \sim \sigma_1^2\chi^2_{m_1}$ and $V_2 \sim \sigma_2^2\chi^2_{m_2}$, there does not exist an unbiased estimator of $\sigma_1^2/(\sigma_1^2 + \sigma_2^2)$. A nearly unbiased estimator of the above expression is obtained by replacing the factor $(\sigma_1^2/n_1)(\sigma_1^2/n_1 + \sigma_2^2/n_2)^{-1}$ by some constant $\delta$, $0 < \delta < 1$. We therefore propose

(3.15) $$\widehat{\operatorname{Var}}(\hat{\mu}) = (\bar{X}_1 - \bar{X}_2)^2\,\frac{\delta(\alpha_1^2/U_1^2) + (1-\delta)(\alpha_2^2/U_2^2)}{\left(\sum_{i=1}^2 \alpha_i/U_i\right)^2}$$

as an approximate unbiased estimator of $\operatorname{Var}(\hat{\mu})$. Typically, if $n_1 = n_2$, $\delta = 1/2$ may be a reasonable choice.

4. Necessary and sufficient conditions under which $\hat{\mu}$ is better than $\bar{X}_i$ for all $i$

The following general result essentially follows from Norwood and Hinkelmann (1977). Let $Y_1, \ldots, Y_k, V_1, V_2, \ldots, V_k$ be independent random variables where the $Y_i$'s have a common mean $\mu$ and $\operatorname{Var}(Y_i) = c\tau_i^2$ for some constant $c > 0$. Furthermore, $m_iV_i/\tau_i^2 \sim \chi^2_{m_i}$, $i = 1, \ldots, k$. It is well known that when the $\tau_i$'s are known, the best linear unbiased estimator of $\mu$ is given by

$$\hat{\mu} = \left(\sum_{i=1}^k \frac{Y_i}{\tau_i^2}\right)\Big/\left(\sum_{i=1}^k \frac{1}{\tau_i^2}\right).$$

For unknown $\tau_i$'s, we consider a natural estimator of $\mu$ given by

$$\hat{\mu} = \left(\sum_{i=1}^k \frac{Y_i}{V_i}\right)\Big/\left(\sum_{i=1}^k \frac{1}{V_i}\right).$$

Clearly the estimator $\hat{\mu}$ is unbiased for $\mu$. We now seek conditions under which $\hat{\mu}$ has a smaller variance compared to each $Y_i$, $i = 1, \ldots, k$. This is contained in the following theorem.

THEOREM 4.1. The estimator $\hat{\mu}$ is better than any $Y_i$ in the sense that $\operatorname{Var}(\hat{\mu}) < c\tau_i^2$, $i = 1, 2, \ldots, k$, if and only if either (a) $m_i > 9$, $i = 1, 2, \ldots, k$, or (b) $m_i = 9$ for some $i$ and $m_j \geq 17$, $j = 1, 2, \ldots, k$, $j \neq i$.

Taking $Y_i = \bar{X}_i$, $m_i = n_i - 1$, $c = \mu^3$, $\tau_i^2 = 1/(n_i\lambda_i)$, $V_i = U_i/(m_in_i)$, $i = 1, \ldots, k$, and applying the above theorem, it follows that for estimating the common mean $\mu$ of $k$ inverse Gaussian populations, $\hat{\mu}$ is better than the individual sample means if and only if either $n_i > 10$ for all $i$, or $n_i = 10$ for some $i$ and $n_j \geq 18$ for $j \neq i$.
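This conclusion is easy to probe by simulation: estimate $\operatorname{Var}(\hat{\mu})$ empirically and compare it with $\operatorname{Var}(\bar{X}_i) = \mu^3/(n_i\lambda_i)$. A sketch with illustrative parameters, all $n_i > 10$:

```python
import numpy as np

def compare_variances(mu, lams, ns, reps=100_000, rng=None):
    """Simulate Var(mu_hat) and compare with Var(X_i bar) = mu^3/(n_i lam_i)."""
    rng = rng or np.random.default_rng(2)
    lams, ns = np.asarray(lams, float), np.asarray(ns)
    alpha = ns * (ns - 1)
    xbar = rng.wald(mu, ns * lams, size=(reps, len(ns)))    # IG(mu, n_i lam_i)
    U = rng.chisquare(ns - 1, size=(reps, len(ns))) / lams  # chi2_{n_i-1}/lam_i
    w = alpha / U
    mu_hat = np.sum(w * xbar, axis=1) / np.sum(w, axis=1)
    return mu_hat.var(), mu**3 / (ns * lams)

v, individual = compare_variances(2.0, [0.5, 1.0, 4.0], [15, 25, 40])
print(f"Var(mu_hat) ~ {v:.5f};  Var(xbar_i) = {individual.round(5)}")
```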


5. Decision-theoretic properties of $\hat{\mu}$

In this section we discuss some decision-theoretic properties of $\hat{\mu}$ under the squared error loss function $L(\delta, \mu) = (\delta - \mu)^2$. Towards this end we first consider a meaningful class of estimators of $\mu$ of the form

(5.1) $$\hat{\mu}_\phi = \sum_{i=1}^k \bar{X}_i\phi_i(U_1, \ldots, U_k)$$

where the $\phi_i(U_1, \ldots, U_k)$'s are nonnegative real-valued functions of $U_1, \ldots, U_k$ satisfying $\sum_{i=1}^k \phi_i(U_1, \ldots, U_k) = 1$ with probability one. Clearly, any estimator of the form $\hat{\mu}_\phi$ such that $E\{\phi_i(U_1, \ldots, U_k)\} < \infty$, $i = 1, \ldots, k$, is unbiased for $\mu$. Moreover, using the independence of the $\bar{X}_i$'s and $U_i$'s and the conditional argument as in (3.2), it follows that

(5.2) $$\text{risk of } \hat{\mu}_\phi = E\left[\mu^3\sum_{i=1}^k \phi_i^2(U_1, \ldots, U_k)/(n_i\lambda_i)\right].$$

Now if we choose a prior density of the form $h_1(\mu)h_2(\lambda_1, \ldots, \lambda_k)$, the Bayes risk of $\hat{\mu}_\phi$ can be written as

(5.3) $$\text{Bayes risk of } \hat{\mu}_\phi = [E(\mu^3)]\,E\left[E\left\{\sum_{i=1}^k \phi_i^2(U_1, \ldots, U_k)/(n_i\lambda_i)\right\}\right]$$

where the innermost expectation is with respect to the $U_i$'s for fixed $\lambda_1, \ldots, \lambda_k$, and the second expectation is with respect to the $\lambda$'s under $h_2(\lambda_1, \ldots, \lambda_k)$. Assuming that $E(\mu^3) < \infty$ and $E(\lambda_i^{-1}) < \infty$, $i = 1, 2, \ldots, k$, it follows by Fubini's theorem that the unique Bayes estimator of $\mu$ is given by

(5.4) $$\hat{\mu}_{\text{Bayes}} = \sum_{i=1}^k \bar{X}_i\phi_{i\,\text{Bayes}}(U_1, \ldots, U_k)$$

where

(5.5) $$\phi_{i\,\text{Bayes}}(U_1, \ldots, U_k) = \left\{E_{\lambda|U}\left(\frac{1}{n_i\lambda_i}\right)\right\}^{-1}\Big/\sum_{j=1}^k \left\{E_{\lambda|U}\left(\frac{1}{n_j\lambda_j}\right)\right\}^{-1}, \quad i = 1, \ldots, k,$$

and $E_{\lambda|U}(\cdot)$ above denotes expectation with respect to the posterior distribution of $\lambda = (\lambda_1, \ldots, \lambda_k)$, given $U = (U_1, \ldots, U_k)$.

The equation (5.4) provides a class of admissible estimators of $\mu$ in the class $\hat{\mu}_\phi$. The treatment here is similar to Zacks (1966) for the normal distribution. It is not difficult to show that the Graybill-Deal type unbiased estimator $\hat{\mu}$ is a generalized Bayes estimator since it results from a choice of the prior of the form

$h_2(\lambda) \propto \prod_{i=1}^k \lambda_i^{-1/2-1}$ and $h_1(\mu)$ satisfying $E(\mu^3) < \infty$. Unfortunately, however, such a density of $\lambda$ is improper and the admissibility of $\hat{\mu}$ does not readily follow. On the other hand, if we restrict our attention to a subclass of $\hat{\mu}_\phi$ of the type

(5.6) $$\hat{\mu}_\psi = \sum_{i=1}^k \bar{X}_i\psi_i\left(\frac{U_1}{U_k}, \ldots, \frac{U_{k-1}}{U_k}\right),$$

it follows from Sinha and Mouqadem (1982) that for $k = 2$, $\hat{\mu}$ is admissible within this class. It may be noted that the normality of the $\bar{X}_i$'s in Sinha and Mouqadem (1982) is not crucial for this purpose in view of the representation (5.2) and the same distribution of the $U_i$'s in the inverse Gaussian case as in the normal case.

We now turn our attention to a discussion of equivariant estimators of $\mu$. Unlike in the normal case, in the context of several inverse Gaussian distributions, an estimator $\hat{\mu}(\bar{X}_1, \ldots, \bar{X}_k; U_1, \ldots, U_k)$ is scale equivariant if $\hat{\mu}(\cdot)$ is scale preserving, i.e., $\hat{\mu}(\cdot)$ satisfies

(5.7) $$\hat{\mu}\left(a\bar{X}_1, \ldots, a\bar{X}_k; \frac{U_1}{a}, \ldots, \frac{U_k}{a}\right) = a\,\hat{\mu}(\bar{X}_1, \ldots, \bar{X}_k; U_1, \ldots, U_k)$$

for all $a > 0$. A characterization of $\hat{\mu}(\cdot)$ subject to (5.7) is provided by

(5.8) $$\hat{\mu}(\bar{X}_1, \ldots, \bar{X}_k; U_1, \ldots, U_k) = \sum_{i=1}^k \bar{X}_i\psi_i(R_1, \ldots, R_{k-1}; T_1, \ldots, T_k)$$

where $R_i = \bar{X}_i/\bar{X}_k$, $i = 1, \ldots, k-1$, $T_i = \bar{X}_iU_i$, $i = 1, \ldots, k$, and the $\psi_i(\cdot)$'s are real-valued functions adding up to one.

It may be noted that the estimators $\hat{\mu}_\phi$ described in (5.1) are equivariant if and only if $\phi_1, \ldots, \phi_k$ are scale invariant, i.e., the $\phi$'s depend on the $U$'s only through the ratios $U_i/U_j$. In general an equivariant estimator of $\mu$ is not unbiased unless it is of the form $\hat{\mu}_\phi$. See Hirano and Iwase (1989a, 1989b) and Zacks (1970) for some related results.
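Scale equivariance of $\hat{\mu}$ itself is immediate from (3.1), since the weights $\alpha_i/U_i$ scale by $a$ when $U_i \mapsto U_i/a$; the snippet below simply checks (5.7) numerically on hypothetical data.

```python
import numpy as np

def gd_estimate(xbar, u, n):
    """Graybill-Deal type estimator (3.1)."""
    xbar, u, n = map(np.asarray, (xbar, u, n))
    w = n * (n - 1) / u
    return np.sum(w * xbar) / np.sum(w)

xbar, u, n = np.array([1.8, 2.3]), np.array([4.0, 9.5]), np.array([12, 20])
a = 3.7
# Scaling X -> aX sends xbar -> a*xbar and U -> U/a; (5.7) requires the
# estimate to scale by a as well.
lhs = gd_estimate(a * xbar, u / a, n)
rhs = a * gd_estimate(xbar, u, n)
print(np.isclose(lhs, rhs))  # True: mu_hat is scale equivariant
```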

An attempt to derive the Bayes equivariant estimator of $\mu$ in general under a prior distribution of $\mu$ and $\lambda_1, \ldots, \lambda_k$ has been unsuccessful even for $k = 2$. This is because, although the conditional moments of $\bar{X}_1$ and $\bar{X}_2$, given $T_1$, $T_2$ and $R_1$, can be explicitly computed in terms of Bessel functions of the second kind, the computation of the expectations of the resultant expressions with respect to any meaningful prior turns out to be a formidable task.

6. Inadmissibility of $\hat{\mu}$

In this section we mention the inadmissibility of $\hat{\mu}$ under squared error loss when some a priori information about the $\lambda$'s is available, by actually constructing an improved estimator of $\mu$. Details are omitted since the technique is similar to the one Sinha (1979) used in the case of normal populations. Without loss of generality let


us assume that $\lambda_1/\lambda_{i_0} \geq c_0$ for some $i_0 \geq 2$, and consider estimators of $\mu$ of the form

$$\Phi(\bar{X}_1, \ldots, \bar{X}_k; U_1, \ldots, U_k) = \bar{X}_1\left(1 - \sum_{j=2}^k \phi_j\right) + \sum_{j=2}^k \phi_j\bar{X}_j = \hat{\mu}_\phi \quad \text{(say)}$$

where

$$\phi = (\phi_1, \ldots, \phi_k), \qquad \phi_1 = 1 - \sum_{j=2}^k \phi_j, \qquad \phi_j = \phi_j(U), \quad j = 2, \ldots, k.$$

It then follows from Sinha (1979) that $\hat{\mu}_{\phi^*}$, where $\phi^* = (\phi_1^*, \ldots, \phi_k^*)$ with $\phi_j^* = \phi_j$ for $j \neq i_0$ and $\phi_{i_0}^* = \min\{\phi_{i_0}, (1 + c_0n_1/n_{i_0})^{-1}\}$, provides uniform improvement over $\hat{\mu}_\phi$ provided the conditions $\sum_{i\neq i_0}\phi_i \geq 0$ and $\phi_{i_0} > (1 + c_0n_1/n_{i_0})^{-1}$ hold with a positive probability. This would in turn imply that one can improve upon $\hat{\mu}$, since it corresponds to $\hat{\mu}_{\phi_0}$ with $\phi_0 = (\phi_{01}, \ldots, \phi_{0k})$ where

$$\phi_{0i} = \frac{\alpha_iU_1/(\alpha_1U_i)}{1 + \sum_{j=2}^k \alpha_jU_1/(\alpha_1U_j)} = \frac{\alpha_i/U_i}{\sum_{j=1}^k \alpha_j/U_j}, \qquad i = 1, \ldots, k.$$

The improved estimator is given by $\hat{\mu}_{\phi_0^*}$ because the $\phi_{0i}$'s do satisfy the above condition with a positive probability. It should be noted that $\hat{\mu}_{\phi_0^*}$ can be interpreted as a testimator.
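A sketch of the resulting testimator follows. The truncation cap follows the reconstruction above; the exact handling of the freed weight is not fully recoverable from the source, so returning it to $\bar{X}_1$ here is an assumption consistent with $\phi_1 = 1 - \sum_{j\geq 2}\phi_j$.

```python
import numpy as np

def truncated_weights(u, n, i0, c0):
    """Section 6 improvement under lambda_1/lambda_{i0} >= c0: start from
    the Graybill-Deal weights phi_0i and cap the i0-th one at
    (1 + c0 * n_1 / n_{i0})^{-1} (a reading of the garbled original)."""
    u, n = map(np.asarray, (u, n))
    w = n * (n - 1) / u
    phi = w / w.sum()
    cap = 1.0 / (1.0 + c0 * n[0] / n[i0])
    if phi[i0] > cap:
        excess = phi[i0] - cap
        phi[i0] = cap
        phi[0] += excess  # assumption: freed weight returns to xbar_1
    return phi

print(truncated_weights([4.0, 9.5, 3.0], [12, 20, 10], i0=2, c0=2.0))
```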

Acknowledgements

Our sincere thanks are due to two referees for their very helpful comments on an earlier version of the manuscript.

REFERENCES

Banerjee, A. K. and Bhattacharya, G. K. (1976). A purchase incidence model with inverse Gaussian interpurchase times, J. Amer. Statist. Assoc., 71, 823-829.

Bhattacharya, C. G. (1980). Estimation of a common mean and recovery of interblock information, Ann. Statist., 8, 205-211.

Bravo, G. and MacGibbon, B. (1988). Improved estimation for the parameters of an inverse Gaussian distribution, Comm. Statist. A—Theory Methods, 17, 4285-4299.

Brown, L. D. and Cohen, A. (1974). Point and confidence estimation of a common mean and recovery of inter-block information, Ann. Statist., 2, 963-976.

Chhikara, R. S. and Folks, J. L. (1989). The Inverse Gaussian Distribution, Dekker, New York.

Folks, J. L. and Chhikara, R. S. (1978). The inverse Gaussian distribution and its statistical application, J. Roy. Statist. Soc. Ser. B, 40, 263-289.

Graybill, F. A. and Deal, R. B. (1959). Combining unbiased estimators, Biometrics, 15, 543-550.

Haff, L. R. (1979). An identity for the Wishart distribution with applications, J. Multivariate Anal., 9, 531-544.

Hasofer, A. M. (1964). A dam with inverse Gaussian input, Proc. Camb. Phil. Soc., 60, 931-933.

Hirano, K. and Iwase, K. (1989a). Conditional information for an inverse Gaussian distribution with known coefficient of variation, Ann. Inst. Statist. Math., 41, 279-287.

Hirano, K. and Iwase, K. (1989b). Minimum risk scale equivariant estimator: estimating the mean of an inverse Gaussian distribution with known coefficient of variation, Comm. Statist. A—Theory Methods, 18, 189-197.

Hsieh, H. K., Korwar, R. M. and Rukhin, A. L. (1990). Inadmissibility of the maximum likelihood estimator of the inverse Gaussian mean, Statist. Probab. Lett., 9, 83-90.

Iwase, K. and Seto, N. (1983). Uniformly minimum variance unbiased estimation for the inverse Gaussian distribution, J. Amer. Statist. Assoc., 78, 660-663.

Khatri, C. G. and Shah, K. R. (1974). Estimation of the location parameters from two linear models under normality, Comm. Statist. A—Theory Methods, 3, 647-663.

Korwar, R. M. (1980). On the uniformly minimum variance unbiased estimators of the variance and its reciprocal of an inverse Gaussian distribution, J. Amer. Statist. Assoc., 75, 734-735.

Lancaster, A. (1972). A stochastic model for the duration of a strike, J. Roy. Statist. Soc. Ser. A, 135, 257-271.

Letac, G., Seshadri, V. and Whitmore, G. A. (1985). An exact chi-squared decomposition theorem for inverse Gaussian variates, J. Roy. Statist. Soc. Ser. B, 47, 476-481.

Nair, K. A. (1986). Distribution of an estimator of the common mean of two normal populations, Ann. Statist., 8, 212-216.

Norwood, T. E. and Hinkelmann, K. (1977). Estimating the common mean of several normal populations, Ann. Statist., 5, 1047-1050.

Pal, N. and Sinha, B. K. (1989). Improved estimators of dispersion of an inverse Gaussian distribution, Statistical Data Analysis and Inference (ed. Y. Dodge), North-Holland, Amsterdam.

Pandey, B. N. and Malik, H. J. (1988). Some improved estimators for a measure of dispersion of an inverse Gaussian distribution, Comm. Statist. A—Theory Methods, 17, 3935-3949.

Sheppard, C. W. (1962). Basic Principles of the Tracer Method, Wiley, New York.

Sinha, B. K. (1979). Is the maximum likelihood estimate of the common mean of several normal populations admissible?, Sankhyā Ser. B, 40, 192-196.

Sinha, B. K. and Mouqadem, O. (1982). Estimation of the common mean of two univariate normal populations, Comm. Statist. A—Theory Methods, 11, 1604-1614.

Sinha, B. K. (1985). Unbiased estimation of the variance of the Graybill-Deal estimator of the common mean of several normal populations, Canad. J. Statist., 13, 243-247.

Tweedie, M. C. K. (1957a). Statistical properties of inverse Gaussian distributions I, Ann. Math. Statist., 28, 362-377.

Tweedie, M. C. K. (1957b). Statistical properties of inverse Gaussian distributions II, Ann. Math. Statist., 28, 696-705.

Uspensky, J. V. (1948). Theory of Equations, McGraw-Hill, New York.

Whitmore, G. A. (1979). An inverse Gaussian model for labour turnover, J. Roy. Statist. Soc. Ser. A, 142, 468-479.

Whitmore, G. A. (1986a). Normal-gamma mixtures of inverse Gaussian distributions, Scand. J. Statist., 13, 211-220.

Whitmore, G. A. (1986b). Inverse Gaussian ratio estimation, Appl. Statist., 35, 8-15.

Zacks, S. (1966). Unbiased estimation of the common mean, J. Amer. Statist. Assoc., 61, 467-476.

Zacks, S. (1970). Bayes and fiducial equivariant estimators of the common mean of two normal distributions, Ann. Math. Statist., 41, 59-69.