SIMULATION-EXTRAPOLATION ESTIMATION IN PARAMETRIC MEASUREMENT ERROR MODELS

by

J. R. Cook and L. A. Stefanski

Institute of Statistics Mimeograph Series No. 2224

June 1992

NORTH CAROLINA STATE UNIVERSITY
Raleigh, North Carolina





SIMULATION-EXTRAPOLATION ESTIMATION IN PARAMETRIC MEASUREMENT ERROR MODELS

J. R. Cook
Merck Research Laboratories
Division of Merck & Co., Inc.
West Point, PA 19422

L. A. Stefanski
Department of Statistics
North Carolina State University
Raleigh, NC 27695

ABSTRACT

We describe a simulation-based method of inference for parametric measurement error models

in which the measurement error variance is known or at least well estimated. The method entails

adding additional measurement error in known increments to the data, computing estimates from

the contaminated data, establishing a trend between these estimates and the variance of the added

errors, and extrapolating this trend back to the case of no measurement error.

We show that the method is equivalent or asymptotically equivalent to method-of-moments

estimation in linear measurement error modelling. Simulation studies are presented showing that

the method produces estimators that are nearly asymptotically unbiased and efficient in standard

and nonstandard logistic regression models. An oversimplified but fairly accurate description of the

method is that it is method-of-moments estimation using Monte Carlo derived estimating equations.

Key words and phrases: Correction for attenuation; extrapolation; Framingham Heart Study;

logistic regression; measurement error model; method of moments; nonlinear model; simulation.

Acknowledgements: The authors would like to thank a referee and an Associate Editor for

comments that led to substantial improvements in content and exposition. We would also like

to thank Joseph F. Heyse and Ha H. Nguyen of Merck Research Laboratories, and Dennis Boos

and John Monahan of North Carolina State University for their comments and suggestions on the

direction of our research. Stefanski's research was supported by the National Science Foundation

and U.S. Environmental Protection Agency.

Note: This paper uses data supplied by the National Heart, Lung, and Blood Institute, NIH, and

DHHS from the Framingham Heart Study. The views expressed in this paper are those of the

authors and do not necessarily reflect the views of the National Heart, Lung, and Blood Institute,

or of the Framingham Study.


1. INTRODUCTION

In this paper we introduce a simulation-based method of inference for measurement error models

applicable when the measurement error variance is known or can be reasonably well estimated,

say from replicate measurements. It combines features of parametric-bootstrap and method-of-moments inference. Estimates are obtained by adding additional measurement error to the data

in a resampling stage, establishing a trend of measurement-error-induced bias versus the variance

of the added measurement error, and extrapolating this trend back to the case of no measurement

error.

The method lends itself to graphic description of both the effects of measurement error on the

parameter estimates and the nature of the ensuing "corrections for attenuation." It is also very

simple to implement. This is particularly useful in applications where the user, though very familiar

with standard statistical methods, may not be comfortable with the burdensome technical details of

model fitting and statistical theory that generally accompany all but the simplest of measurement

error models. Since the method is completely general, it is also useful in applications when the

particular model under consideration is novel and conventional approaches to estimation with the

model have not been thoroughly studied and developed.

The method is noteworthy for the ease with which estimates can be obtained, assuming adequate

computing power. However, assessing the variability in the estimates is more difficult. In our work

we have opted to use the bootstrap or jackknife to assess sampling variability. An example is

presented in Section 5.2.

We developed the procedure in response to the need for fitting nonstandard generalized linear

measurement error models, specifically models in which the mean function depends on something

other than a linear function of the predictor measured with error. In principle, the general

approaches described by Stefanski (1985), Fuller (1987, Ch. 3), Whittemore and Keller (1988),

and Carroll and Stefanski (1990) could be applied to such models. However, in addition to their

complexity and the need for specialized software, these methods entail approximations, the quality

of which would be difficult to verify with complicated models.

Although the attenuating effects of measurement error have been well publicized and documented

of late, see for example Palca (1990a,b) and MacMahon et al. (1990), it often happens that


preliminary to a measurement error model analysis it is necessary to explain the need for such

an analysis to nonstatisticians. The fact that measurement error in a predictor variable induces

bias in least squares regression estimates is counter-intuitive to many, even those with training in

statistics. A by-product of our method is a self-contained and highly relevant simulation study that

clearly demonstrates the effect of measurement error and the need for bias corrections.

When corrections for attenuation are made, they are often regarded by many with skepticism

above and beyond the normal amount that should accompany any statistical modelling. The reason

is that corrections for attenuation are often in the researcher's best interest. For example, assuming

that the presence of an effect has been convincingly demonstrated, the perceived importance of

the effect is likely to depend on its magnitude. A correction for attenuation generally increases

the magnitude of the effect, thus it is in the researcher's interests to make such corrections.

For an example in health economics, consider a situation wherein a pharmaceutical company is

estimating the cost effectiveness of a blood pressure-lowering drug. A correction for attenuation

due to measurement error in blood pressure readings produces a greater predicted decrease in

risk for a given reduction in blood pressure, see MacMahon et al. (1990) and Palca (1990a,b).

Consequently the cost effectiveness of the drug is enhanced. Clearly a correction for attenuation is

in the company's best interest and is likely to be viewed with skepticism by anyone not versed in

the field of measurement error modelling.

In the situations just described we believe that there is a need for corrections for attenuation

that are both statistically and scientifically defensible while also being demonstratively conservative.

By suitable adjustment of the extrapolation step, our method is capable of producing estimates

with the desired properties. The trend line referred to in the first paragraph is generally monotone

convex or concave and thus it is possible to obtain best linear-tangential approximations that result

in conservative corrections for attenuation.

1.1. Organization of the Paper

In this paper we describe our method in the context of parametric models with one predictor

measured with error. The assumptions we employ are set forth in Section 2 along with a description

of the basic method. A heuristic explanation of the procedure is given in this section as well.

An example illustrating the method is presented in Section 3. Section 4 describes the application


of the method to standard and nonstandard linear and logistic regression models. Simulation results

are used to demonstrate the utility of the procedure.

In Section 5 we consider application to a subset of data from the Framingham study. Here we

use a novel resampling technique to illustrate the performance of the method with real data. We

also consider fitting some nonstandard logistic regression models to illustrate the ease with which

new and complicated models can be handled.

2. BASIC METHOD AND REFINEMENTS

2.1. Simulation Extrapolation Estimation

The majority of applications involve regression models and we adopt a notation convenient for

such problems. Let Y, V, U and X denote respectively the response variable, a covariate measured

without error, the true predictor and the measured predictor. In this paper we restrict attention to

cases in which U and X are scalars. Furthermore it is assumed that the observed data {Y_i, V_i, X_i}_1^n are such that

X_i = U_i + σZ_i,

where Z_i is a standard normal random variable independent of U_i, V_i and Y_i, and σ² is the known

measurement error variance.

We suppose the existence of an estimation procedure that maps a data set into the parameter

space; for example, linear or logistic regression. Let θ ∈ Θ denote the parameter and T the

functional that maps the data set into Θ. Then we can define the following estimators:

θ̂_TRUE = T({Y_i, V_i, U_i}_1^n);

θ̂_NAIVE = T({Y_i, V_i, X_i}_1^n).

Since θ̂_TRUE depends on the unknown {U_i}_1^n, it is not an estimator in the strict sense. Nevertheless

it is convenient to have a notation for this random vector.

For λ ≥ 0, define

X_{b,i}(λ) = X_i + λ^{1/2} σ Z_{b,i},

where {Z_{b,i}}_{i=1}^n are mutually independent, independent of {Y_i, V_i, U_i, X_i}_1^n, and identically distributed standard normal random variables. Define


θ̂_b(λ) = T({Y_i, V_i, X_{b,i}(λ)}_1^n),

and

θ̂(λ) = E[θ̂_b(λ) | {Y_i, V_i, X_i}_1^n],     (2.1)

that is, the expectation above is with respect to the distribution of {Z_{b,i}}_{i=1}^n only.

Note that θ̂(0) = θ̂_b(0) = θ̂_NAIVE. Exact determination of θ̂(λ) for λ > 0 is generally not feasible, but it can always be estimated arbitrarily well by generating a large number of independent measurement error vectors, {{Z_{b,i}}_{i=1}^n}_{b=1}^B, computing θ̂_b(λ) for b = 1, ..., B, and approximating θ̂(λ) by the sample mean of {θ̂_b(λ)}_1^B. This is the simulation component of our method. We call the {Z_{b,i}}_{i=1}^n pseudo errors.

The extrapolation step of the proposal entails modelling θ̂(λ) as a function of λ for λ ≥ 0 and using the model to extrapolate back to λ = -1. This yields the simulation-extrapolation estimator denoted θ̂_SIMEX.

As an illustration of the SIMEX algorithm we describe its application to the familiar components-of-variance model. With this model only {X_i}_1^n is observed and θ = σ_U², the variance of U. In this case θ̂_NAIVE = s_X², the sample variance of {X_i}_1^n, and

θ̂_b(λ) = (n - 1)^{-1} Σ_{i=1}^n (X_{b,i}(λ) - X̄_b(λ))²
        = (n - 1)^{-1} Σ_{i=1}^n (X_i + λ^{1/2}σZ_{b,i} - X̄ - λ^{1/2}σZ̄_b)².

For this model it is well known that the expectation indicated in (2.1) is just

θ̂(λ) = s_X² + λσ²,

but we emphasize that it can also be estimated arbitrarily well by B^{-1} Σ_{b=1}^B θ̂_b(λ).

In this case θ̂(λ) is exactly linear in λ ≥ 0 and extrapolation to λ = -1 results in θ̂_SIMEX = s_X² - σ², the familiar method-of-moments estimator. Note that θ̂_SIMEX is an unbiased and consistent estimator of θ.
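The components-of-variance illustration above is easy to check numerically. The following is a minimal Python/NumPy sketch of the simulation step and a linear extrapolation; the sample size, σ², true variance, and B are illustrative choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Components-of-variance model: only X = U + sigma*Z is observed and the
# estimand is theta = var(U).  Illustrative values, not from the paper.
n, sigma2, theta0 = 500, 0.5, 1.0
u = rng.normal(0.0, np.sqrt(theta0), n)
x = u + rng.normal(0.0, np.sqrt(sigma2), n)

def theta_b(lam):
    """One remeasured estimate: sample variance of X + lam^(1/2)*sigma*Z_b."""
    xb = x + np.sqrt(lam * sigma2) * rng.normal(size=n)
    return xb.var(ddof=1)

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 2000
theta_hat = np.array([np.mean([theta_b(lam) for _ in range(B)]) for lam in lams])

# theta_hat(lam) tracks s_X^2 + lam*sigma^2 up to Monte Carlo error, so a
# least-squares line in lam extrapolated to lam = -1 recovers s_X^2 - sigma^2.
slope, intercept = np.polyfit(lams, theta_hat, 1)
theta_simex = intercept - slope              # fitted line evaluated at lam = -1
theta_mom = x.var(ddof=1) - sigma2           # method-of-moments estimator
print(theta_simex, theta_mom)
```

The fitted slope estimates σ², and the extrapolated value agrees with the method-of-moments estimator up to Monte Carlo error, as the exact linearity of θ̂(λ) guarantees.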

2.2. Heuristics

In this section we give a heuristic explanation for SIMEX estimation. In later sections asymptotic

arguments and extensive simulation evidence for the procedure's utility in practice is presented.


Consider the naive estimator as the sample size n increases. It converges in probability to some limit depending on the true value of the parameter, say θ_0, and the measurement error variance σ². Call the limit T(θ_0, σ²). If we assume that the naive estimator is a consistent estimator of θ_0 in the absence of measurement error (σ² = 0), then T(θ_0, 0) = θ_0.

Now as n increases θ̂(λ) also converges in probability to some limit and we would expect that this limit can be represented as T(θ_0, σ² + λσ²). Evaluating the latter expression at λ = -1 yields T(θ_0, σ² - σ²) = T(θ_0, 0) = θ_0. Since θ̂_SIMEX approximately estimates T(θ_0, σ² + λσ²)|_{λ=-1}, it will be approximately consistent for θ_0 in general. An estimator is said to be approximately consistent if it converges in probability to some constant that is approximately equal to the estimand, e.g., r̂ is approximately consistent for r_0 if r̂ converges in probability to r* and r* ≈ r_0.

The first part of this heuristic argument can be made rigorous under the assumption of normally

distributed measurement errors and continuity of the functional T. We now do so for the structural

case in which (Y_i, V_i, U_i) are independent and identically distributed.

Let F_{Y,V,U} denote the distribution function of (Y_1, V_1, U_1). Regarding T as a functional on distributions, we have θ_0 = T(F_{Y,V,U}). The distributions of (Y, V, X) and (Y, V, X(λ)) are F_{Y,V,U} * Φ_{(0,0,σ²)} and F_{Y,V,U} * Φ_{(0,0,σ²(1+λ))} respectively, where * indicates convolution and Φ_{(τ₁²,τ₂²,τ₃²)} denotes a normal distribution with block-diagonal covariance matrix diag(τ₁², τ₂², τ₃²).

By continuity of T, the limit (n → ∞) in probability of θ̂(λ) is θ(λ) = T(F_{Y,V,U} * Φ_{(0,0,σ²(1+λ))}). It follows, again by continuity of T, that θ(λ)|_{λ=-1} = T(F_{Y,V,U} * Φ_{(0,0,0)}) = T(F_{Y,V,U}) = θ_0.

Thus with normally distributed errors, θ̂(λ) consistently estimates θ(λ) for λ ≥ 0. The only approximation involved in SIMEX estimation is the extrapolation of this curve to λ = -1. The extrapolation step is crucial and will be discussed in Section 2.4.

2.3. Some Refinements

There may be estimators for which the expectation indicated in (2.1) does not exist. This may or may not cause problems with the use of the sample mean as a location estimator in the simulation

step. However, it may be necessary to summarize the simulated estimates with a robust estimator

of location such as the median.

Our measurement error model assumes independence between measurement errors and the other

variables in the data sets. Although the IID pseudo errors are generated under these assumptions,


the simulated sets of errors have nonzero sample correlations with Y, V or X, nonzero sample means, and sample variances ≠ 1. The effects of these random departures from expected behavior can be eliminated by generating the pseudo errors to be uncorrelated with the observed data and normalized to have sample mean and variance equal to 0 and 1 respectively. We call these NON-IID pseudo errors. Using NON-IID pseudo errors improves the convergence of B^{-1} Σ_{b=1}^B θ̂_b(λ) to θ̂(λ) as B → ∞. For example, if NON-IID pseudo errors are used in the components-of-variance application, B^{-1} Σ_{b=1}^B θ̂_b(λ) = θ̂(λ) for all B.
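One standard way to construct pseudo errors meeting these requirements is to project IID normals onto the orthogonal complement of the observed data and rescale. The following Python/NumPy sketch illustrates this; the function name and the stand-in data are ours:

```python
import numpy as np

def non_iid_pseudo_errors(data, B, rng):
    """Return an n x B matrix of pseudo-error vectors whose columns have sample
    mean 0, sample variance 1, and zero sample correlation with every column of
    `data`.  Sketch: project IID normals onto the orthogonal complement of the
    column space of [1, data], then rescale each column."""
    n = data.shape[0]
    A = np.column_stack([np.ones(n), data])   # intercept plus observed columns
    Q, _ = np.linalg.qr(A)                    # orthonormal basis of col(A)
    Z = rng.normal(size=(n, B))
    Z -= Q @ (Q.T @ Z)                        # remove components lying in col(A)
    Z /= Z.std(axis=0)                        # force sample variance exactly 1
    return Z

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 3))              # stand-in for observed (Y, V, X)
Z = non_iid_pseudo_errors(data, B=5, rng=rng)
```

Rescaling a column does not disturb its orthogonality to the data columns, so the zero sample means and correlations survive the final normalization step.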

2.4. Methods of Extrapolation

The Achilles' heel of our proposed estimator is the extrapolation step. If this cannot be done

in a convincing and demonstrably useful fashion, our estimation method is of limited utility.

However, the approach has considerable merit regardless of the adequacy of the extrapolation,

as the plot of θ̂(λ) versus λ is always informative, especially in complex models where the direction and approximate magnitude of the measurement-error-induced bias is not obvious. This graphic

representation is especially useful in conveying the effects of measurement error to nonstatisticians.

It also provides insight as to the appropriate functional form of the extrapolant.

In the simulation results that follow we employed three extrapolants: (i) a two-point linear extrapolation; (ii) a three-point quadratic extrapolation; and (iii) a nonlinear extrapolation based on fitting mean models of the form μ(λ) = a + b/(c + λ) to the components of θ̂(λ).

When the measurement error is normally distributed, each of these three extrapolants is exact for certain estimators, i.e., there are functionals T for which T(F_{Y,V,U} * Φ_{(0,0,σ²(1+λ))}) is exactly linear, quadratic, or of the form μ(λ) = a + b/(c + λ). The linear extrapolant is exact for estimators that are linear functions of the first two moments of the data, as in the components-of-variance problem. It can be shown that the quadratic extrapolant is exact for estimators that are linear functions of the first four moments of the data. In Section 4.1 it is shown that the nonlinear extrapolant, μ(λ), is exact for certain linear and nonlinear regression estimators.
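The three extrapolants can be compared with a small sketch. In the Python example below the grid and the "true" a, b, c are invented for illustration, and the closed-form three-point solver for μ(λ) = a + b/(c + λ) is our own derivation, not from the paper:

```python
import numpy as np

# Synthetic theta_hat(lam) values following mu(lam) = a + b/(c + lam) exactly.
a0, b0, c0 = 0.2, 1.2, 1.5                      # illustrative values
lams = np.array([0.0, 1.0, 2.0])
th = a0 + b0 / (c0 + lams)

# (i) two-point linear extrapolant through lam = 0 and lam = 1
slope = th[1] - th[0]
linear = th[0] + slope * (-1.0)

# (ii) three-point quadratic extrapolant through lam = 0, 1, 2
quad = np.polyval(np.polyfit(lams, th, 2), -1.0)

# (iii) nonlinear extrapolant mu(lam) = a + b/(c + lam); with three points at
# lam = 0, 1, 2 the parameters solve in closed form:
r = (th[0] - th[1]) / (th[1] - th[2])           # equals (c + 2)/c
c = 2.0 / (r - 1.0)
b = (th[0] - th[1]) * c * (c + 1.0)
a = th[0] - b / c
nonlinear = a + b / (c - 1.0)                   # evaluate mu at lam = -1

print(linear, quad, nonlinear)
```

For a convex, decreasing θ̂(λ) such as this one, the ordering linear < quadratic < nonlinear at λ = -1 illustrates the conservatism of the linear and quadratic extrapolants discussed above.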

More important than the exactness of these extrapolants for certain estimators is the fact that

for any particular estimator it is very likely that at least one of the three will provide a sufficiently

good approximation. A survey of the literature on measurement error in nonlinear models indicates

that approximate estimation methods are common and have proven to be useful in applications.


The reference list for this paper contains fourteen statistical papers on measurement error modelling

of which eleven promote methods that yield approximately consistent estimators in general.

Justification for the adequacy of the approximations employed in nonlinear models with measurement error generally consists of: i) small measurement error orders-of-magnitude calculation of asymptotic bias; or ii) empirical or simulation evidence of the utility of the approximate methods. In Sections 4 and 5 we provide simulation and empirical evidence supporting our proposed methods.

This section concludes with a calculation of the order of magnitude of the asymptotic bias in θ̂_SIMEX.

It is well known that θ̂_NAIVE has an asymptotic bias that is O(σ²) as σ² → 0 in general (Stefanski, 1985). The approximate methods studied thus far in the literature produce estimators whose asymptotic bias is O(σ⁴) in general; see Stefanski (1985), Whittemore and Keller (1988), and Carroll and Stefanski (1990). We now argue that θ̂_SIMEX has an asymptotic bias that is O(σ⁴) when the linear extrapolant is used, and O(σ⁶) when the quadratic or nonlinear extrapolant is employed.

In the notation of Section 2.2, θ̂(λ) converges to θ(λ) = T(F_{Y,V,U} * Φ_{(0,0,σ²(1+λ))}). Note that θ(λ) depends on λ only through σ²(1 + λ) and thus we can write θ(λ) = f(σ²(1 + λ)) for some function f. As long as T is a "smooth" functional,

f(τ) = f(0) + f′(0)τ + τ²f″(0)/2 + O(τ³),     τ ≥ 0.

It follows that for suitably defined a_0, b_0 and c_0,

θ(λ) = a_0 + b_0(1 + λ) + c_0(1 + λ)² + ε_λ,

where for any Λ < ∞,

sup_{-1 ≤ λ ≤ Λ} |ε_λ| = O(σ⁶).

Let 0 = λ₁ < ... < λ_k = Λ, k ≥ 3. Consider fitting the quadratic model Q(λ, a, b, c) = a + b(1 + λ) + c(1 + λ)² to {θ(λ_i), λ_i}_1^k via least squares, obtaining â, b̂ and ĉ. Since (â - a_0), (b̂ - b_0) and (ĉ - c_0) are linear functions of ε_{λ_i}, i = 1, ..., k, it follows that |â - a_0|, |b̂ - b_0| and |ĉ - c_0| are all O(σ⁶).

Consequently it is readily established that

sup_{-1 ≤ λ ≤ Λ} |θ(λ) - Q(λ, â, b̂, ĉ)| = O(σ⁶)


and in particular that |θ(-1) - Q(-1, â, b̂, ĉ)| = O(σ⁶). Now θ(-1) = θ_0, the parameter we are interested in estimating, and Q(-1, â, b̂, ĉ) is the limit in probability of θ̂_SIMEX when the quadratic extrapolant is employed. Thus the asymptotic bias in the quadratic extrapolant version of θ̂_SIMEX is O(σ⁶) as claimed.

Had we started with a linear mean model instead of the quadratic model, the errors, ε_λ, in our regression model would be O(σ⁴); and an identical argument shows that the asymptotic bias in θ̂_SIMEX is O(σ⁴) in this case.

The nonlinear mean model a + b/(c + λ) is more difficult to analyze. However, once it is realized that the coefficients b_0 and c_0 in the expansion of θ(λ) are of orders O(σ²) and O(σ⁴) respectively, it is not difficult to see that the nonlinear model is capable of providing as good a fit to θ(λ) as the quadratic model in general. In fact any parametric model that is locally quadratic (with three free parameters locally) would work as well. Thus the asymptotic bias in θ̂_SIMEX is of the same order of magnitude for the quadratic and nonlinear extrapolant functions, i.e., O(σ⁶).

Favorable mathematical arguments notwithstanding, there remains the problem of selecting a

suitable extrapolant for a given problem. If the goal is to make conservative adjustments for

attenuation, then only the linear or perhaps the linear and quadratic extrapolants should be

considered. If unbiased adjustments are sought, then all three extrapolants should be considered

and the decision between competing extrapolants can be based on the data. Since θ̂(λ_j) can be determined to arbitrary accuracy and for arbitrarily many λ_j, there is no problem determining which of the three extrapolants fits best. Of course it may be that none of the extrapolants fit adequately; but again this fact can be determined from the data to a great extent simply by examining the plot of θ̂(λ) versus λ.

3. AN EXAMPLE

We now present an example to describe the type of analysis we have in mind and to point out

some concessions for computational speed that were made in the simulations that follow.

We generated n = 1500 independent observations from the logistic regression measurement error

model

X = U + σZ,

pr(Y = 1 | U, V) = F(β₁ + β_u U + β_v V + β_uv UV),     (3.1)

where F is the logistic distribution function and (U, V) are bivariate normal with zero means, unit variances and correlation 1/√5 ≈ 0.447. The regression parameters were set at β₁ = -2, β_u = 1, β_v = 0.25 and β_uv = 0.25, and we set σ² = 0.5.

The sample size is consistent with large epidemiologic studies of the type motivating much of the

current research in measurement error models, although the parameter values were not matched

with any particular study. A full simulation study of this model is described in Section 4.5. Here

we analyze one generated data set from that study.

Our algorithm calls for generating pseudo errors {Z_{b,i}}_{i=1}^n, forming pseudo predictors X_{b,i}(λ) = X_i + λ^{1/2}σZ_{b,i}, and fitting the logistic model (3.1) to {Y_i, V_i, X_{b,i}(λ)}_{i=1}^n for several values of λ. This is repeated for b = 1, ..., B.
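This simulation step can be sketched in Python as follows. The data-generating values match model (3.1), but the logistic fitter is our own minimal Newton-Raphson routine (standing in for any standard logistic regression routine), and the λ grid and B are reduced for speed; as in the text, one pseudo-error set per b is reused across the λ grid:

```python
import numpy as np

def fit_logistic(D, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (a minimal
    stand-in for any standard logistic fitting routine)."""
    beta = np.zeros(D.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-D @ beta))
        H = D.T @ (D * (p * (1.0 - p))[:, None])       # observed information
        beta = beta + np.linalg.solve(H, D.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
n, sigma2 = 1500, 0.5
u = rng.normal(size=n)
v = u / np.sqrt(5.0) + np.sqrt(0.8) * rng.normal(size=n)   # corr(U,V) = 1/sqrt(5)
eta = -2.0 + u + 0.25 * v + 0.25 * u * v
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)
x = u + np.sqrt(sigma2) * rng.normal(size=n)

lams, B = [0.0, 1.0, 2.0], 20          # reduced grid and B for illustration
sums = {lam: np.zeros(4) for lam in lams}
for b in range(B):
    z = rng.normal(size=n)             # one pseudo-error set per b ...
    for lam in lams:                   # ... reused across the lambda grid
        xb = x + np.sqrt(lam * sigma2) * z
        D = np.column_stack([np.ones(n), xb, v, xb * v])
        sums[lam] += fit_logistic(D, y)
theta_hat = {lam: s / B for lam, s in sums.items()}
# theta_hat[lam][1], the coefficient of the error-prone predictor, attenuates
# toward zero as lam grows; extrapolating the trend to lam = -1 is SIMEX.
```

Plotting the components of theta_hat against λ reproduces the kind of display described in Figures 1a-d.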

In any particular application, the grid of lambda values can be extensive and B can be chosen so large that the Monte Carlo error is negligible, without making the computations prohibitively time consuming. Computing time is a factor in simulation studies, however. Thus in the simulation studies that follow we employed a relatively coarse grid, λ ∈ {0, 0.5, 1.0, 1.5, 2.0}, and B was chosen after some preliminary investigations of the Monte Carlo variability and computational time. However, for this example we took λ ∈ {0, 1/8, ..., 15/8, 16/8}, with B = 50.

In all our work only one set of pseudo errors was used to calculate the estimates θ̂_b(λ_j) for different values of λ_j for a fixed b. That is, for a given b, θ̂_b(λ_j) was calculated from {Y_i, V_i, X_i + λ_j^{1/2}σZ_{b,i}}_{i=1}^n, where Z_{b,i} did not vary with j.

Figures 1a-d display the results of the Monte Carlo step and the extrapolation step. Although the θ̂(λ) are plotted for the large grid defined above, only the four points λ = 0.5, 1, 1.5, 2.0 (open boxes), and the naive estimate λ = 0 (open box), were used in the extrapolation step. Linear extrapolations used λ = 0, 1; quadratic extrapolations used λ = 0, 1, 2; and the nonlinear extrapolations used the five points plotted with open boxes to fit the model.

The linear, quadratic and nonlinear fits and extrapolants are graphed with dotted, dashed and solid curves respectively. These graphs were made from the NON-IID pseudo errors with the sample mean as location estimator. The other combinations of location estimator and IID/NON-IID errors produced nearly identical results for this data set.

The plots make the qualitative effect, i.e., the direction of the bias, of the measurement error


readily apparent. Although it might be argued that the plot in Figure 1b merely confirms intuition,

the same cannot be said of the plots in Figures 1a, c and d.

For this particular data set and choice of pseudo errors, the estimators are well ordered with ordering NAIVE > LINEAR > QUADRATIC > NONLINEAR with respect to estimation error, i.e., the NAIVE estimate is worst and the NONLINEAR extrapolant is best for all regression coefficients.

Absent from each of Figures 1a-d is an indication of the sampling variability in the estimates.

However, since the estimates were derived from one data set selected from the simulation study in

Section 4.3, the smoothed histograms in Figures 6a-d can be used to infer sampling variability. In

particular it is readily apparent that none of the estimates in Figures 1a-d, i.e., the extrapolations to λ = -1, differs significantly from its estimand.

Residuals from the quadratic and nonlinear model extrapolant fits are plotted for the coefficient

of U in Figure 2. The linear model residuals are orders-of-magnitude larger and were intentionally

omitted. Also omitted are the residual plots for the other three coefficients as these exhibit patterns

that are nearly identical to that in Figure 2. The residuals in Figure 2 are extremely highly correlated due to the manner in which the same pseudo errors were used to calculate θ̂_b(λ_j) for j = 2, ..., 5. Thus the smooth pattern in the residuals from the nonlinear fit is not unexpected,

and it is definitely not prima facie evidence of lack of fit. However, it is evident that the nonlinear extrapolant fits much better than the quadratic or linear extrapolants. Thus for this example the

data clearly indicated the best extrapolant to use among the three candidates.

The fit of the nonlinear model is excellent and instills confidence in the extrapolation step.

Remember that only the box points were used in fitting the model. In later sections we prove that the mean function a + b/(c + λ) is exactly appropriate for linear models, but we were unexpectedly pleased by the quality of the fit in logistic models.

The plots suggest that a two-point extrapolation based on a small positive value of λ and λ = 0 would perform well. Our experience to date suggests that this is the case. The plots of θ̂(λ) are generally convex or concave and thus a tangential approximation is the best linear extrapolant in a conservative sense. The problem is that a two-point extrapolation based on two close λs is very sensitive to the Monte Carlo sampling error. Thus we have not studied it in our simulations.

However, it should perform well whenever Monte Carlo error can be made negligible.


Also it is likely that a quadratic extrapolant based on three λs close to zero would work well provided the Monte Carlo error is negligible. Our experience is that the curvature in the graph of θ̂(λ) increases as λ decreases. In such cases it is possible to argue that a local three-point quadratic extrapolation would be conservative in the same sense as a tangential linear extrapolant.

Figure 3 illustrates the estimated response functions corresponding to the SIMEX estimators

and the NAIVE estimator. Graphs of (3.1) are plotted with v = 1 and -5/√5 ≤ u ≤ 7/√5. The

solid curve with the least average slope corresponds to the NAIVE estimate. The other solid curve

is the population response curve, i.e., the estimand. The long-dashed, short-dashed and dotted

curves correspond to the linear, quadratic and nonlinear extrapolants respectively. Note that the

conservatism of the linear and quadratic extrapolants manifests itself in the estimated response

functions as well.

4. PARAMETRIC REGRESSION MODELS

4.1. Linear Regression

Simple Linear Regression: We developed the simulation/extrapolation procedure primarily for

use in models for which the exact effects of measurement error cannot be determined. However,

it is instructive to investigate how new methods perform in familiar, well-understood models and

hence we study the simple errors-in-variables model first. An added benefit of studying this model

is that the validity of the heuristics in Section 2.2 is readily confirmed.

We consider least-squares estimation of a line. Thus the functional T returns least-squares

estimates of an intercept and slope. Let θ = (β₁, β_u)^T. Then

β̂_u(λ) = E{ (S_YX + λ^{1/2}σS_{YZ_b}) / (S_XX + λσ²S_{Z_bZ_b} + 2λ^{1/2}σS_{XZ_b}) | {Y_i, X_i}_1^n },

where conventional notation is used for sums of centered cross products, and β̂₁(λ) is defined analogously. For the NON-IID pseudo errors, S_{YZ_b} = S_{XZ_b} = Z̄_b = 0 and S_{Z_bZ_b} = n, and thus

β̂₁(λ) = Ȳ - β̂_u(λ)X̄   and   β̂_u(λ) = S_YX / (S_XX + nλσ²).     (4.1)

Note that both β̂₁(λ) and β̂_u(λ) in (4.1) are functions of λ of the form a + b/(c + λ) and that evaluation at λ = -1 yields the usual method-of-moments estimators (Fuller 1987, p. 14).
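The exact-extrapolation claim for the slope in (4.1) is easy to verify numerically. A Python sketch, with illustrative data-generating values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma2 = 200, 0.5                          # illustrative values
u = rng.normal(size=n)
y = 1.0 + u + rng.normal(scale=0.5, size=n)   # simple linear model in U
x = u + rng.normal(scale=np.sqrt(sigma2), size=n)

# Sums of centered cross products, as in the text's notation.
Syx = np.sum((y - y.mean()) * (x - x.mean()))
Sxx = np.sum((x - x.mean()) ** 2)

def beta_u(lam):
    """Slope from (4.1): a function of lam of the form a + b/(c + lam)."""
    return Syx / (Sxx + n * lam * sigma2)

beta_naive = beta_u(0.0)                      # attenuated least-squares slope
beta_mom = Syx / (Sxx - n * sigma2)           # method-of-moments slope
print(beta_naive, beta_u(-1.0), beta_mom)     # beta_u(-1) equals beta_mom
```

Because β̂_u(λ) is exactly of the form b/(c + λ) here, the nonlinear extrapolant passes through these values exactly and its value at λ = -1 is the method-of-moments slope; the naive slope is visibly attenuated toward zero.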


Asymptotically (n → ∞) the difference between estimators based on the NON-IID and IID pseudo errors is negligible. Thus (4.1) holds asymptotically for the IID pseudo errors as well.

Although the nonlinear extrapolant estimators derived from the NON-IID pseudo data are identical to the usual method-of-moments estimator, the other combinations of extrapolant type, location functional, and error type yield different SIMEX estimators. We conducted a simulation study to compare these estimators with some common errors-in-variables estimators. In addition to the SIMEX estimators we included in the study the method-of-moments estimator (MOM), Fuller's modified moment estimators (Fuller, 1987, p. 164) with a = 2 and 5 (F2 and F5), and two estimators (C1 and C2) proposed by Cook (1989) in a North Carolina State University PhD thesis.

The latter two estimators have the form

β̂_{u,k} = β̂_NAIVE (1 - δ̂^{k+1}) / (1 - δ̂),

where δ̂ = nσ²/S_XX and k is chosen to minimize an approximation to the mean squared error of the estimator. The two estimators correspond to two estimates of the optimal k,

k̂₁ = log{-n(1 - δ̂) log δ̂ / (2 log n)} / log(1/δ̂)   and   k̂₂ = log n / log(1/δ̂).

In our simulation we employed a structural measurement error model

i = 1, ... ,100.

Four-hundred independent data sets were generated and analyzed. The SIMEX estimators were

computed with B = 100.

The results of the simulation study for the slope (β_u = 1) are displayed as kernel density

estimates in Figure 4. A normal kernel was used and the same bandwidth was used for all of the

density estimates. The common bandwidth was set to 0.3 times the median of the standard deviations

of all the estimators in the study.

There are six well-defined groups among the estimators. Groups I, II and VI have only a single member each: β̂_{u,TRUE}, β̂_{u,NAIVE} and β̂_{u,C1} respectively. Groups III and IV have four members each, corresponding to the 2 × 2 factorial of location estimator (mean, median) by error type (IID, NON-IID), for the linear and quadratic extrapolants respectively. Group V contains β̂_{u,MOM},


β̂_{u,F2}, β̂_{u,F5}, β̂_{u,C2}, and the two SIMEX estimators obtained by the nonlinear extrapolation of the IID pseudo data using the mean and median location functionals. It also contains the SIMEX estimators based on the NON-IID pseudo errors, since the latter are equivalent to the method-of-moments estimator for simple linear regression.

The simulation sample size (400) is not large enough to allow detailed comparisons between

estimators within groups. However, this was not our intent. Rather we wanted to demonstrate

the conservative behavior of the linear and quadratic extrapolants and the method-of-moments-like

performance of the nonlinear extrapolants, both of which are evident in Figure 4.

Multiple Linear Regression: Instead of presenting simulation results for the multiple linear regression model, we show that the nonlinear SIMEX estimators based on NON-IID pseudo errors are identical to the usual method-of-moments estimators. We adopt a different notation in this section in order to exploit standard linear algebra results. Let U_i denote the true p-dimensional predictor with one component, the first, measured with error, and let X_i denote the p × 1 observed predictor. The vector of regression coefficients is β.

Then for the NON-IID pseudo data it is readily shown that

    β̂(λ) = {XᵀX + λnσ²e₁e₁ᵀ}⁻¹XᵀY,

where e₁ is the first standard basis vector, so that only the (1,1) element of the cross-product matrix depends on λ. Clearly β̂(-1) is the common method-of-moments estimator (Fuller, 1987, p. 105). Now partition the normal equations, writing k = (k₁, k₂ᵀ)ᵀ = XᵀY, which is independent of λ. Solving this system for β̂(λ) yields expressions showing that all of the components of β̂(λ) are functions of λ of the form a + b/(c + λ).
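A noise-free numerical check of this rational form (our illustration, with assumed variance components, not the paper's computation): the attenuated slope β_uσ_U²/{σ_U² + (1+λ)σ²} belongs exactly to the family a + b/(c + λ), so fitting the extrapolant and evaluating at λ = -1 recovers β_u. The fit uses the linearization θλ = -cθ + aλ + (ac + b), which is exact for rational data.

```python
import numpy as np

# Assumed variance components for illustration.
beta_u, s2u, s2 = 1.0, 1.0, 0.5
lam = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
theta = beta_u * s2u / (s2u + (1 + lam) * s2)     # attenuated slopes

# a + b/(c+lam) is equivalent to theta*lam = -c*theta + a*lam + (a*c + b),
# a linear system in (-c, a, a*c + b), solvable by least squares.
A = np.column_stack([theta, lam, np.ones_like(lam)])
coef, *_ = np.linalg.lstsq(A, theta * lam, rcond=None)
c = -coef[0]
a = coef[1]
b = coef[2] - a * c
simex = a + b / (c - 1.0)        # extrapolate to lambda = -1
```

Because θ(λ) here is exactly rational in λ, the least-squares fit has zero residual and the extrapolation is exact.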


Multiple Linear Regression with Interaction: Next we consider the measurement error model (4.2), i = 1, ..., 200, with β₁ = 0, β_u = β_v = β_uv = 1.

The benchmark estimator for this model is the method-of-moments estimator

    β̂_MOM = (DᵀD - C)⁻¹DᵀY,

where β = (β₁, β_u, β_v, β_uv)ᵀ, D is the 200 × 4 design matrix (1, X, V, XV), and

    C = nσ̂² ( 0   0   0   0
               0   1   0   V̄
               0   0   0   0
               0   V̄   0   V̄² + s_V² );

see Fuller (1984) and Hwang (1986).

A simulation study of this model produced results nearly identical to those obtained for the simple linear model and is omitted for that reason. The TRUE and NAIVE estimators formed distinct groups, as did the linear and quadratic extrapolant SIMEX estimators. The remaining distinct group included the nonlinear extrapolant SIMEX estimators and the method-of-moments estimator. The conservative behavior of the linear and quadratic extrapolants and the method-of-moments-like performance of the nonlinear extrapolants was again evident.

The simulation results were not unexpected, for it is easily shown for this model that β̂₁(λ) and β̂_V(λ) converge in probability to β₁ and β_v respectively, and that β̂_U(λ) and β̂_UV(λ) converge in probability to β_u/(2 + λ) and β_uv/(2 + λ) respectively. Thus all of the components of β̂(λ) are functions of λ of the form a + b/(c + λ) and it follows that the nonlinear extrapolant versions of β̂_SIMEX are asymptotically unbiased.

4.2. Generalized Least Squares Estimation of an Exponential Mean Model

The nonlinear extrapolant is exact for many linear regression models with least squares

estimation. We now present a class of nonlinear models for which it is exact.

Suppose that U is a scalar and that E(Y | U) = exp(β₁ + β_uU), with conditional variance V(Y | U) = κ² exp(rβ₁ + rβ_uU) for some constants κ and r. It follows from the appendix in Stefanski (1989) that if (X, U) has a bivariate normal distribution and generalized least squares is the method of estimation, then β̂₁(λ) and β̂_U(λ) consistently estimate β₁(λ) and

    β_U(λ) = β_u σ_U² / {σ_U² + (1 + λ)σ²}

respectively, where μ_U = E(U), σ_U² = V(U) and σ² = V(X | U).

Clearly, the nonlinear extrapolant is asymptotically exact for estimating β_u, and a little algebra shows the same for β₁ as well. In the latter case the parameters in the nonlinear extrapolant function bear little resemblance to their counterparts in the linear model.
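The asymptotic exactness for β_u can be made explicit; the rearrangement below is ours, supplied for emphasis. The limiting slope β_U(λ) = β_uσ_U²/{σ_U² + (1+λ)σ²} is already a member of the extrapolant family:

```latex
\beta_U(\lambda)
  = \frac{\beta_u\,\sigma_U^2}{\sigma_U^2+(1+\lambda)\sigma^2}
  = \frac{\beta_u\,\sigma_U^2/\sigma^2}{(\sigma_U^2+\sigma^2)/\sigma^2+\lambda}
  = a+\frac{b}{c+\lambda},
\qquad
a=0,\quad b=\frac{\beta_u\,\sigma_U^2}{\sigma^2},\quad
c=\frac{\sigma_U^2+\sigma^2}{\sigma^2},
```

so that evaluation at λ = -1 gives b/(c - 1) = β_u exactly.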

4.3. Logistic Regression

Multiple Logistic Regression: The model for the study is

    pr(Y_i = 1 | U_i, V_i) = F(β₁ + β_uU_i + β_vV_i),    i = 1, ..., 1500,    (4.3)

where β₁ = -2, β_u = 1, β_v = .5.

There are a number of estimators that have been proposed for this model. Only the so-called sufficiency estimator of Stefanski and Carroll (1985, 1987) is known to be generally consistent and asymptotically efficient in the absence of parametric distributional assumptions on U, and thus it is the only one we included in our Monte Carlo study. It provides a benchmark for comparison just as the method-of-moments estimators did for the linear model simulations. The estimating equations for

the sufficiency estimator and its optimality properties are described in the previously cited papers.

Figures 5a-c display the results of the simulation study of this model. The SIMEX estimators

were computed with B = 50. Four hundred data sets were generated and analyzed. The kernel

density estimates were calculated as in the simulation study of the simple linear model.

Groups I and II correspond to the TRUE and NAIVE estimates respectively, Groups III and IV

to the linear and quadratic extrapolant estimators, and Group V contains the nonlinear extrapolant

SIMEX estimators and the sufficiency estimator. The groupings are well defined for the intercept and β_u, somewhat less so for β_v. Again we see the conservative behavior of the linear and quadratic extrapolants. More interesting is the near indistinguishability between the distributions of the nonlinear extrapolants and the sufficiency estimator, the latter being asymptotically efficient among all estimators that do not assume a parametric form for the distribution of U.

We performed a second simulation study of this model with the difference that the error in X as

a measurement of U was generated as a standardized (mean 0, variance 1) uniform random variable.

The SIMEX procedure generates normal errors in the simulation step; thus the issue of robustness

to nonnormal measurement errors naturally arises. We omitted the results of the second study

since they were nearly identical to those of the first. The second study provides some evidence that

the SIMEX method is robust to error distribution specification, although definitive statements are

not possible on the basis of one study. More evidence of robustness is given in Section 5.1.

Multiple Logistic Regression with Interaction: We now present the results from a simulation

study of the model (3.1) of Section 3. The SIMEX estimators were computed with B = 50. Four

hundred data sets were generated and analyzed. The kernel density estimates were calculated as

in the previous simulation studies.

Although the parameters in this model could in theory be estimated with some general approximate estimation procedures (Stefanski, 1985; Fuller, 1987, Ch. 3; Whittemore and

Keller, 1988; Carroll and Stefanski, 1990) we are not aware of any results specific to models with

interaction terms. Thus there is no benchmark estimator in this study.

The results of the simulation study are displayed in Figures 6a-d. A familiar pattern is apparent.

The linear, quadratic and nonlinear variants of the SIMEX estimators all reduce bias, with the nonlinear extrapolant yielding a nearly unbiased estimator of all the parameters.

4.4. Probit Regression

Probit regression provides an example of a model for which the nonlinear extrapolant is not

exact, but is sufficiently close to exact over a useful range of parameter values. The probit response function, in combination with a normally distributed predictor and measurement error, allows for analytic determination of the true extrapolant function. The calculations that follow are based

on a model slightly more general than probit regression. In particular, it is not necessary for the

response to have a binomial distribution.


Consider the regression model

    E(Y | U) = α₁ + α₂Φ(β₁ + β_uU),

where Φ(·) is the standard normal distribution function and α₁, α₂, β₁ and β_u are scalar parameters. For X = U + σZ and X(λ) = X + √λ σZ_b, where U, Z and Z_b are independent and normally distributed, the latter two having standard normal distributions independent of Y, and E(U) = μ_U and V(U) = σ_U², it follows that

    E{Y | X(λ)} = E{E(Y | U) | X(λ)} = α₁ + α₂Φ{β₁(λ) + β_U(λ)X(λ)},

where

    β₁(λ) = [β₁ + β_u μ_U (1+λ)σ²/{σ_U² + (1+λ)σ²}] / D(λ)    (4.4)

and

    β_U(λ) = β_u σ_U²/{σ_U² + (1+λ)σ²} / D(λ),

with D(λ) = [1 + β_u² σ_U²(1+λ)σ²/{σ_U² + (1+λ)σ²}]^{1/2}. For probit regression α₁ and α₂ are set to 0 and 1 respectively.

Note that β₁(-1) = β₁ and β_U(-1) = β_u. Thus this model illustrates the general theory of Section 2.2. It also provides an example for which the nonlinear extrapolant is not asymptotically exact. That is, neither β₁(λ) nor β_U(λ) is a function of λ of the form a + b/(c + λ).

However, if σ²β_u² is "small," then D(λ) ≈ 1 and both β₁(λ) and β_U(λ) are very well approximated by functions of the form a + b/(c + λ). Methods based on the assumption that σ²β_u² is "small" are common in binary regression measurement error models, especially in epidemiologic studies where the assumption is often reasonable; see Carroll and Stefanski (1990). Thus it is to be expected that SIMEX estimation with the nonlinear extrapolant would work well for probit regression in many cases even though it is not exact.
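A numerical sketch of this claim (our own check, with Framingham-like variance values assumed for illustration): the true probit extrapolant follows from the standard normal-theory identity E[Φ(a + bW)] = Φ({a + bμ}/√(1 + b²τ²)) for W ~ N(μ, τ²). Tracing it over λ, fitting a + b/(c + λ), and extrapolating to λ = -1 removes nearly all of the naive bias even though the fit is not exact.

```python
import numpy as np

# Framingham-like values, assumed for illustration only.
beta_u, s2u, s2 = 2.9, 0.01563, 0.006078

def beta_u_lambda(lam):
    """True probit extrapolant for the slope, from normal theory."""
    gam = s2u / (s2u + (1.0 + lam) * s2)    # regression of U on X(lambda)
    v = s2u * (1.0 - gam)                   # Var{U | X(lambda)}
    return beta_u * gam / np.sqrt(1.0 + beta_u**2 * v)

lam = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
theta = beta_u_lambda(lam)

# Fit a + b/(c + lam) through the linearization
# theta*lam = -c*theta + a*lam + (a*c + b), then extrapolate to lambda = -1.
A = np.column_stack([theta, lam, np.ones_like(lam)])
coef, *_ = np.linalg.lstsq(A, theta * lam, rcond=None)
c, a = -coef[0], coef[1]
b = coef[2] - a * c
simex = a + b / (c - 1.0)

naive_bias = abs(theta[0] - beta_u)
simex_bias = abs(simex - beta_u)
```

With these variance values σ²β_u² is small, D(λ) stays close to 1, and the residual bias of the extrapolated estimate is a small fraction of the naive bias.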

In support of this claim we calculated an estimate of the asymptotic bias of the nonlinear SIMEX estimator for the fit of a probit model to the Framingham data described in Section 5.1. With β̂_u(0) and β̂_u(-1) denoting the naive estimate and the five-point (λ_j = 0(.5)2) nonlinear extrapolant estimate respectively, and β̃_u(-1) denoting the exact method-of-moments correction obtained using (4.4), the percent reduction in bias achieved by the SIMEX estimate,

    100 (1 - |β̂_u(-1) - β̃_u(-1)| / |β̂_u(0) - β̃_u(-1)|),

exceeded 99.9%. The same was true for the intercept. Thus for a probit analysis of the Framingham data there is no practical difference between the approximate SIMEX method and the exact method-of-moments correction.

This finding will not surprise readers who are familiar with the literature on binary regression

measurement error models. Readers who are not familiar with the literature need only reconsider

the simulation results of Section 4.3 in light of the well-known similarities between probit and logistic regression. Apart from a scale factor, logistic regression and probit regression produce essentially identical parameter estimates for most data sets. It follows that the excellent performance of the nonlinear extrapolant SIMEX estimators in the logistic model simulations of Section 4.3 carries over to probit models.

5. EXPERIENCE WITH REAL DATA

The favorable simulation results are very encouraging. However, in applications to real data it

is generally the case that all assumptions are violated to some extent, and real data are seldom as

amenable to analysis as simulated data. Consequently impressions derived from simulation studies

are often optimistic.

The ideal test for our method would require several data sets having both X and U recorded for all observations. Then the SIMEX estimators based on {Y_i, V_i, X_i}₁ⁿ could be compared to the estimators derived from {Y_i, V_i, U_i}₁ⁿ. This would allow direct assessment of the procedure while blocking for the effects of model misspecification and other violations of assumptions.

Alternatively we might envision a single data set with a large number of replicate measurements, X_{i,1}, ..., X_{i,m}, for each observation. The SIMEX estimators based on data including only the j-th measurement, {Y_i, V_i, X_{i,j}}ᵢ₌₁ⁿ, could be compared with the estimator based on {Y_i, V_i, X̄_{i,·}}₁ⁿ, the latter being a reasonable substitute for the TRUE estimate.

Unfortunately, the two hypothetical experiments just described cannot be realized. However, in

the next section we approximate the second experiment by resampling from a data set containing

two replicate measurements.


The examples that follow make use of a subset of the data from the Framingham Heart Study.

The variables are: Y, an indicator of coronary heart disease (CHD) during an eight-year period following the second exam; A, age at second exam; and P₁ and P₂, systolic blood pressure (SBP) measurements at exams two and three (two and four years post baseline) respectively. Records with

missing information were dropped from the data set, leaving a total of 1660 observations. There

are 134 cases of CHD.

We decided to ln-transform systolic pressure and represent age in decades. Thus in what follows X_j = ln(P_j), j = 1, 2, denote the measured predictors, and V = A/10 is a covariate. The true

predictor U is thus defined as the natural logarithm of the true SBP, which itself is defined as a

long-term average.

A components-of-variance analysis produced the estimates σ̂_X² = .02171 and σ̂_U² = .01563. Thus the simple linear measurement error model correction for attenuation would be 1.389. This is

smaller than the value of 1.6 reported elsewhere (MacMahon et al., 1990), probably because our blood pressure measurements are from exams two years apart whereas MacMahon et al. (1990) use measurements four years apart. The value 1.389 is comparable to the correction for attenuation found by Carroll and Stefanski (1992) using data from the same exams as MacMahon et al. (1990),

but employing one measurement as an instrumental variable as a means to account for possible

trends over time.

The measurement error variance is estimated to be σ̂² = .006078, which we assume has negligible sampling variability in the analyses that follow.

5.1. Blood Pressure and Coronary Heart Disease

In our first set of analyses we ignore the covariate and consider simple logistic regression modelling of CHD on SBP. Thus we have data {Y_i, X_{i,1}, X_{i,2}}₁¹⁶⁶⁰. Let X̄_{i,·} denote the average of X_{i,1} and X_{i,2}. In this study we analyze resampled data sets {Y_i, X_i*}₁¹⁶⁶⁰, where X_i* is randomly selected to be one of X_{i,1} or X_{i,2} with equal probability. With this scheme we create multiple data sets having the same responses and covariates, but with different measured predictors. The measurement error variance in a measured predictor is that of a single measurement, σ² = .006078. One hundred resampled data sets were generated.
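The resampling step can be sketched as follows; the generated values are hypothetical stand-ins for the n = 1660 Framingham log-SBP replicates, not the actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two log-SBP replicate measurements.
n, sigma2 = 1660, 0.006078
U = rng.normal(4.8, 0.125, size=n)                  # "true" log-SBP
X1 = U + rng.normal(scale=np.sqrt(sigma2), size=n)  # exam-two measurement
X2 = U + rng.normal(scale=np.sqrt(sigma2), size=n)  # exam-three measurement

T = rng.integers(0, 2, size=n)                      # pr(T=0) = pr(T=1) = 1/2
X_star = np.where(T == 0, X1, X2)                   # one randomly chosen replicate
X_bar = (X1 + X2) / 2                               # average of the two replicates
```

Repeating the draw of T yields the multiple resampled data sets, all sharing the same responses but differing in the measured predictor.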

Each resampled data set was analyzed using the nonlinear SIMEX procedure. That is, the naive estimate β̂_u and {β̂_u(λ_j), λ_j = .5, 1, 1.5, 2} were computed and used to fit the nonlinear extrapolant a + b/(c + λ). Thus for r = 1, ..., 100 we have the extrapolant curve β̂_u^(r)(λ) for -1 ≤ λ ≤ 2. Evaluation at λ = -1 yields the nonlinear SIMEX estimator for the r-th generated data set.

Figure 7a displays the pointwise mean extrapolant function Σ_{r=1}^{100} β̂_u^(r)(λ)/100 (solid line) and pointwise fifth and ninety-fifth percentile functions (dotted lines).

The two points plotted at λ = 0 and λ = -1 are the means of the naive and nonlinear extrapolant estimators respectively. The latter point lies on the mean extrapolant function, whereas the former does not, since the residuals of the nonlinear extrapolants at λ = 0 are not identically zero. The point plotted at λ = -.5 is the estimate of β_u (= 3.377) obtained by fitting the logistic model to {Y_i, X̄_{i,·}} and does not lie on the curve. The fact that it lies so close to the mean extrapolant curve (= 3.383 at λ = -.5) is a consequence of our data generation scheme and the virtual unbiasedness of the nonlinear SIMEX estimator. That is, if our procedure produced exactly unbiased estimates, and if an infinite number of resampled data sets had been analyzed, we would expect the curve to pass through the point plotted at λ = -.5.

To see this, consider that a generated predictor has the representation

    X_i* = X̄_{i,·} + (T_i - .5)(X_{i,1} - X_{i,2}),

where T_i is a Bernoulli variate with pr(T_i = 0) = .5 = pr(T_i = 1). Thus X_i* can be written as the sum of U_i and two uncorrelated errors having zero means and equal variances of σ²/2.

The second error can be regarded as the measurement error in X_i* as a measure of X̄_{i,·}. Thus we would expect the SIMEX procedure to be capable of eliminating the bias due to this error, and to return an unbiased estimator of the estimate of β_u obtained by fitting the logistic model to {Y_i, X̄_{i,·}}. Since this error has variance σ²/2 and not σ², the extrapolant curve should be evaluated at λ = -.5. In other words, evaluation at λ = -.5 "corrects" for the bias introduced by the second error, (T_i - .5)(X_{i,1} - X_{i,2}). Evaluation at λ = -1 corrects for bias caused by the sum of the two errors.
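The variance claims for the two error components can be checked by Monte Carlo; the sketch below uses σ² = 1 for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check the decomposition X* = U + (e1+e2)/2 + (T-.5)(e1-e2): the two
# error terms should be uncorrelated, each with variance sigma2/2.
n, sigma2 = 200_000, 1.0
e1 = rng.normal(scale=np.sqrt(sigma2), size=n)
e2 = rng.normal(scale=np.sqrt(sigma2), size=n)
T = rng.integers(0, 2, size=n)

first = (e1 + e2) / 2.0          # error of the average: Xbar - U
second = (T - 0.5) * (e1 - e2)   # error of X* as a measure of Xbar
```

Both sample variances come out near σ²/2 and the sample covariance near zero, consistent with the argument above.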

This simulation study demonstrates the near unbiasedness of the procedure with real data. In this example unbiasedness means that the corrected estimator (λ = -.5) derived from single measurements should, on average, equal the estimate obtained by regressing on the average of the two measurements. If we had played this game starting with m measurements instead of two, again generating data sets with one randomly selected measurement, then extrapolation back to λ = -(m-1)/m would provide a nearly unbiased estimate of the naive parameter estimate obtained by fitting the model to {Y_i, X̄_{i,·}}.

The simulation study also provides additional evidence of robustness to the assumption of normal errors. The error in X_i* as a measure of X̄_{i,·} has a distribution with only two support points, (t - .5)(X_{i,1} - X_{i,2}), t = 0, 1, and is far from normal. Yet the SIMEX procedure, which simulates the effects of measurement error by adding normally distributed errors, proved to be virtually unbiased for this particular case of nonnormal errors.

5.2. Bootstrapping the SIMEX Procedure

The simplicity of the SIMEX procedure is offset to some extent by the difficulty of developing

asymptotic distribution approximations for the estimators. However, the bootstrap is ideally suited

to the task of assessing sampling variability, apart from the obvious computational burden of nested

resampling schemes.

In this section we analyze the model

    pr(Y = 1 | U, V) = F(β₁ + β_uU + β_vV),    (5.1)

use SIMEX to estimate the parameters from the available observed data, and use the bootstrap to

obtain standard errors and assess the sampling variability in estimated response curves.
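A minimal sketch of the nested resampling scheme follows. To keep the example self-contained it uses a least-squares slope in place of the logistic fit, with assumed simulation settings; the actual analysis fits (5.1) by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

def simex_slope(x, y, sigma2, lambdas=(0.5, 1.0, 1.5, 2.0), B=50):
    """Nonlinear-extrapolant SIMEX for a least-squares slope (sketch only)."""
    def slope(xx):
        return np.cov(xx, y, bias=True)[0, 1] / np.var(xx)
    lam = np.array((0.0,) + tuple(lambdas))
    theta = [slope(x)]                          # naive estimate at lambda = 0
    for l in lambdas:
        theta.append(np.mean([
            slope(x + rng.normal(scale=np.sqrt(l * sigma2), size=x.size))
            for _ in range(B)]))
    theta = np.asarray(theta)
    # Fit a + b/(c+lam) via theta*lam = -c*theta + a*lam + (a*c + b).
    A = np.column_stack([theta, lam, np.ones_like(lam)])
    coef, *_ = np.linalg.lstsq(A, theta * lam, rcond=None)
    c, a = -coef[0], coef[1]
    b = coef[2] - a * c
    return a + b / (c - 1.0)                    # extrapolate to lambda = -1

# Outer bootstrap loop: resample observations, recompute SIMEX each time.
n, sigma2 = 300, 0.5
U = rng.normal(size=n)
X = U + rng.normal(scale=np.sqrt(sigma2), size=n)
Y = U + rng.normal(scale=0.5, size=n)           # true slope = 1
boot = []
for _ in range(50):
    idx = rng.integers(0, n, size=n)
    boot.append(simex_slope(X[idx], Y[idx], sigma2))
boot_se = float(np.std(boot, ddof=1))
```

The spread of the bootstrap replicates supplies the standard error; the computational burden grows as (bootstrap samples) × (λ values) × B refits, which is the nested-resampling cost noted above.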

Our naive estimate was obtained by fitting (5.1) to {Y_i, X̄_{i,·}, V_i}₁¹⁶⁶⁰. The error variance of X̄_{i,·} as a measurement of U_i is now taken to be σ̂² = .003039 (= .006078/2).

Table 1 displays results from the SIMEX procedure using NON-IID pseudo errors and the sample mean as location functional. Four hundred bootstrap data sets were analyzed. Table entries are, from top to bottom, point estimates, and the means and standard deviations of the 400 bootstrap sample estimates. For comparison we note that the usual inverse-information standard errors of the NAIVE estimates are 2.6594, .56436 and .10420 for β₁, β_u and β_v respectively.

Figure 7b displays estimated response functions for high-risk 50-year-old males. The response functions are graphed over the range (X̄, X̄ + 3s), where X̄ and s are the mean and standard deviation of {X̄_{i,·}}₁¹⁶⁶⁰. The two solid curves graph the NAIVE and NONLINEAR SIMEX response curves. The LINEAR and QUADRATIC SIMEX curves are graphed with dotted and dashed lines respectively. The


similarity of the SIMEX response functions is due to the small measurement error (σ² = .003039, with a linear correction for attenuation of 1.19). For measurement error this small the extrapolant curves are nearly linear and the three methods of extrapolation produce similar results.

Figure 7c displays graphs of the NONLINEAR SIMEX response function (solid line) and selected bootstrap percentiles enclosing central regions of 90% (dotted), 75% (dashed) and 50% (closely spaced dots).

5.3. Exploratory Data Analysis with Measured Predictors

Our final example was chosen to emphasize the generality and flexibility of the SIMEX procedure in dealing with nonstandard models. Starting with the naive fit of the model (5.1), we considered the addition of interaction terms of the form X̄_{i,·}^{r₁} V_i^{r₂} for small integer powers ({-1, 0, 1}) of r₁ and r₂. Taking r₁ = r₂ = -1 produced the greatest decrease in the deviance, 7.14, although the improvement in the fit is only marginally significant when compared to a Chi-squared distribution with three degrees of freedom (p-value ≈ .075). However, the reported p-value is conservative since three is only an approximate degrees of freedom; our model was fit by searching over the two-dimensional finite set of powers (r₁, r₂) and the one-dimensional continuous coefficient space. A model with the measured predictor appearing both on its nominal scale and as an interaction on an inverse scale provides a nice vehicle for illustrating the flexibility of the SIMEX procedure.

Therefore we consider fitting the model

    pr(Y = 1 | U, V) = F(β₁ + β_uU + β_vV + β_uv/(UV)),    (5.2)

and using the SIMEX procedure to correct for the effects of measurement error.

Table 2 displays coefficient estimates from the SIMEX procedure employing the NON-IID pseudo errors and the sample mean as location estimator. The inclusion of the interaction term induces

collinearity, making it difficult to interpret individual coefficients. We present instead a graph

similar to that in Figure 7b. Figure 7d displays plots of the NAIVE and NONLINEAR response

curves for two models, with and without the inverse interaction term in (5.2). The response

functions for the models with and without the interaction term are graphed with solid lines and

dotted lines respectively. For both models the SIMEX response function dominates the NAIVE

response function over the range plotted.

6. SUMMARY

We have shown how the results of several coordinated simulation studies, i.e., the {Θ̂(λ): λ > 0},


can be used to obtain unbiased or nearly unbiased estimators in nonlinear measurement error models when the measurement error variance is specified. Our experience suggests that employing NON-IID pseudo errors and using the sample mean as the location estimator works as well as any other combination of error type and location functional studied, at least in large samples.

The success of the method is due in large part to the empirically and theoretically supported general applicability of the nonlinear extrapolant model a + b/(c + λ). The appeal of the method is that the appropriate values of the parameters a, b and c, which for nonlinear models are no longer simple functions of the data as they are for linear models, are determined by Monte Carlo methods, thereby avoiding mathematical technicalities and the need to resort to approximations.

It seems likely that the basic method can be improved through the use of more creative

resampling schemes. We are currently exploring the possibility of combining bootstrap resampling

with the Monte Carlo pseudo-error generation in a way that produces both the point estimate and

a standard error without the high computational overhead of nested resampling. Also, it may be

advantageous to use λ values other than {.5, 1, 1.5, 2}, say by solving an optimal design problem for the mean model a + b/(c + λ).

Finally we note that the method is applicable to functional as well as structural measurement error models. This is evident from the results in Sections 4.1, 4.2 and, though not as obviously, from the resampling simulation in Section 5.1. In the latter case the U_i remained fixed, as in a functional model. Thus the method does not depend on the nature (random or fixed) of the so-called true values U_i. This is not true of other methods that explicitly model, either parametrically or non-parametrically, the U_i as random variables.

For example, the regression-calibration method (Prentice, 1982; Fuller, 1987, p. 261; Rosner et al., 1989, 1990; Rudemo et al., 1989; Gleser, 1990; Carroll and Stefanski, 1990; and Pierce et al., 1991) depends on (a model of) E(U | X, V). The importance of this is that an estimate of E(U | X, V) obtained from external validation/replication data may not "match" the data to be analyzed, in the sense that the regressions of U on (X, V) in the validation and target populations need not be equal. However, methods that depend on external validation/replication data only through estimates of σ² are generally more robust to differences in the validation and target population distributions. This is obviously true, for example, when measurement error is due entirely to instrument error


and the error variance is independently assessed via a designed experiment.

REFERENCES

Carroll, R. J. & Stefanski, L. A. (1990). "Approximate quasilikelihood estimation in models with surrogate predictors." Journal of the American Statistical Association, 85, 652-663.

Carroll, R. J. & Stefanski, L. A. (1992). "Meta-analysis, measurement error and correction for attenuation." Statistics in Medicine, to appear.

Fuller, W. A. (1984). "Measurement error models with heterogeneous error variances," in Topics in Applied Statistics, eds. Y. P. Chaubey and T. D. Dwivedi, Montreal: Concordia University, pp. 257-289.

Fuller, W. A. (1987). Measurement Error Models. Wiley, New York.

Gleser, L. J. (1990). "Improvements of the naive approach to estimation in nonlinear errors-in-variables regression models," in Statistical Analysis of Measurement Error Models and Application, eds. P. J. Brown and W. A. Fuller, American Mathematical Society, Providence.

Hwang, J. T. (1986). "Multiplicative errors-in-variables models with applications to the recent data released by U.S. Department of Energy." Journal of the American Statistical Association, 81, 680-688.

MacMahon, S., Peto, R., Cutler, J., Collins, R., Sorlie, P., Neaton, J., Abbott, R., Godwin, J., Dyer, A., & Stamler, J. (1990). "Blood pressure, stroke and coronary heart disease: Part 1, prolonged differences in blood pressure: prospective observational studies corrected for the regression dilution bias." Lancet, 335, 765-774.

Palca, J. (1990a). "Getting to the heart of the cholesterol debate." Science, 24, 1170-1171.

Palca, J. (1990b). "Why statistics may understate the risk of heart disease." Science, 24, 1171.

Pierce, D. A., Stram, D. O., Vaeth, M., & Schafer, D. (1991). "Some insights into the errors in variables problem provided by consideration of radiation dose-response analyses for the A-bomb survivors." Journal of the American Statistical Association, 87, 351-359.

Prentice, R. L. (1982). "Covariate measurement errors and parameter estimation in a failure time regression model." Biometrika, 69, 331-342.

Rosner, B., Willett, W. C. & Spiegelman, D. (1989). "Correction of logistic regression relative risk estimates and confidence intervals for systematic within-person measurement error." Statistics in Medicine, 8, 1051-1070.

Rosner, B., Spiegelman, D. & Willett, W. C. (1990). "Correction of logistic regression relative risk estimates and confidence intervals for measurement error: the case of multiple covariates measured with error." American Journal of Epidemiology, 132, 734-745.

Rudemo, M., Ruppert, D. & Streibig, J. C. (1989). "Random effect models in nonlinear regression with applications to bioassay." Biometrics, 45, 349-362.

Stefanski, L. A. (1985). "The effect of measurement error on parameter estimation." Biometrika, 72, 583-592.

Stefanski, L. A. & Carroll, R. J. (1985). "Covariate measurement error in logistic regression." The Annals of Statistics, 13, 1335-1351.

Stefanski, L. A. & Carroll, R. J. (1987). "Conditional scores and optimal scores for generalized linear measurement-error models." Biometrika, 74, 703-716.

Stefanski, L. A. (1989). "Correcting data for measurement error in generalized linear models." Communications in Statistics, Theory and Methods, 18, 1715-1733.

Whittemore, A. S. & Keller, J. B. (1988). "Approximations for regressions with covariate measurement error." Journal of the American Statistical Association, 83, 1057-1066.


TABLE 1: COMPARISON OF ESTIMATORS. This simulation is described in Section 5.2. Table entries are, from top to bottom, point estimates, bootstrap means and bootstrap standard errors. Inverse-information standard errors for the NAIVE estimates are 2.6594, .56436 and .10420 for β₁, β_u and β_v respectively.

              NAIVE        LINEAR       QUADRATIC    NONLINEAR

    β₁       -17.244      -18.603      -18.880      -19.003
             -17.292      -18.640      -18.904      -19.015
             (2.5771)     (2.8377)     (2.8821)     (2.8967)

    β_u       2.5075       2.7998       2.8598       2.8868
              2.5144       2.8046       2.8619       2.8862
             (.54563)     (.60394)     (.61420)     (.61782)

    β_v       .52814       .51292       .50948       .50771
              .52813       .51283       .50943       .50773
             (.098739)    (.10028)     (.10068)     (.10092)

TABLE 2: COMPARISON OF ESTIMATORS. NAIVE and SIMEX estimates for the model with inverse interaction (5.2).

              NAIVE        LINEAR       QUADRATIC    NONLINEAR

    β₁        6.1604       5.1932       4.9685       4.8446
    β_u       1.0654       1.3413       1.3974       1.4222
    β_v      -1.1709      -1.2191      -1.2260      -1.2278
    β_uv/100 -1.8645      -1.8995      -1.9032      -1.9038


Figures 1a-d. Plots of Θ̂(λ) versus λ for the model (3.1) of Section 3. Linear, quadratic and nonlinear extrapolants indicated by dotted, dashed and solid lines respectively.

[Figure 1a: extrapolant plot, β₁ = -2.00.]

[Figure 1b: extrapolant plot, β_u = 1.00.]

[Figure 1c: extrapolant plot, β_v = 0.25.]

[Figure 1d: extrapolant plot, β_uv = 0.25.]

Figure 2. Residual plots of the quadratic and nonlinear extrapolant fits from Figure 1b. Sine-like curve, QUADRATIC model residuals.

[Figure 2: β_u residual plots versus λ, 0.0 to 2.5.]

Figure 3. Estimated response functions for the example in Section 3. Solid, NAIVE and estimand; long dashes, LINEAR; short dashes, QUADRATIC; dotted, NONLINEAR.

[Figure 3: estimated response curves.]

Figure 4. Kernel density estimates of the distributions of the slope (β_u) estimators in the simple linear regression simulation.

[Figure 4: density estimates, β_u = 1.00.]

Figures 5a-c. Kernel density estimates of the distributions of the regression parameters in the multiple logistic regression model simulation, normal measurement error.

[Figures 5a-c: density estimates for β₁ = -2.00, β_u = 1.00 and β_v = 0.50.]

Page 45: SIMULATION-EXTRAPOLATION ESTIMATION IN PARAMETRIC

Figures 6a-d. Kernel density estimates of the distributions of the regression parameters in the

multiple logistic regression with interaction model simulation.

[Figure 6 panels: β1 = −2.00; βu = 1.00; βv = 0.25; βuv = 0.25; horizontal axes β.]

Figure 7a-d. Framingham Data Example. a. Mean extrapolant function with 5th and 95th percentiles. b. Estimated response functions for high-risk individuals. Solid, NAIVE and NONLINEAR; dotted, LINEAR; dashed, QUADRATIC. c. Estimated NONLINEAR response function with bootstrap percentiles (50, 75, 90). d. Estimated NAIVE and NONLINEAR response functions for high-risk individuals and models with and without inverse interaction term. Dashed, without interaction; solid, with interaction.

[Figure 7 panels: a. βu mean extrapolant, horizontal axis λ, −0.5 to 2.5; b. "Estimated Responses," horizontal axis Ln(SBP), 4.90 to 5.20; c. "Bootstrap Percentiles," horizontal axis Ln(SBP); d. "Model Comparison," horizontal axis Ln(SBP).]