
PHARMACEUTICAL STATISTICS
Pharmaceut. Statist. 2004; 3: 143–146 (DOI: 10.1002/pst.117)

Literature review, December 2003–March 2004

Simon Day 1,*,† and Meinhard Kieser 2

1 Medicines and Healthcare Products Regulatory Agency, Room 13-205, Market Towers, 1 Nine Elms Lane, London SW8 5NQ, UK
2 Department of Biometry, Dr Willmar Schwabe Pharmaceuticals, Karlsruhe, Germany

*Correspondence to: Simon Day, Medicines and Healthcare Products Regulatory Agency, Room 13-205, Market Towers, 1 Nine Elms Lane, London SW8 5NQ, UK
†E-mail: [email protected]

INTRODUCTION

We see a slight change to the line-up of journals being published this year. From the end of 2003, The Statistician (Series D of the Journal of the Royal Statistical Society) is no longer published. The coverage of the other three series has been revised: Series B remains much the same, while most of the content of Series D will now go to either Series A or Series C (Applied Statistics). In addition, a new journal appears, and there is a story behind it. Controlled Clinical Trials, whilst owned and published by Elsevier, was the official journal of the Society for Clinical Trials. The Society has now decided to publish its own journal (in collaboration with Hodder Arnold), called Clinical Trials. The first issue (of six each year) has appeared. We will be including both Controlled Clinical Trials and Clinical Trials in these reviews.

This review, with its slightly different coverage, includes the following journals received during the period from the middle of December 2003 to the middle of March 2004:

* Applied Statistics, volume 53, part 1.
* Biometrical Journal, volume 45, part 8 and volume 46, part 1.
* Biometrics, volume 59, part 4.
* Biostatistics, volume 5, part 1.
* Clinical Trials, volume 1, number 1.
* Drug Information Journal, volume 38, part 1.
* Journal of the American Statistical Association, volume 98, part 4.
* Journal of the Royal Statistical Society, Series A, volume 167, part 1.
* Statistics in Medicine, volume 23, parts 1–6.
* Statistical Methods in Medical Research, volume 13, parts 1, 2.
* The American Statistician, volume 57, part 4.

SELECTED HIGHLIGHTS FROM THE LITERATURE

We welcome the new journal, Clinical Trials, and give references here to all of the major papers published so that you can see the initial breadth of coverage:

* Goodman SN. Editorial: A birthday, an anniversary, and an agenda for Clinical Trials. Clinical Trials 2004; 1:1–2.
* Burke DS. Lessons learnt from the 1954 field trial of poliomyelitis vaccine. Clinical Trials 2004; 1:3–5.
* Ellenberg SS. Editorial: Monitoring data on data monitoring. Clinical Trials 2004; 1:6–8.
* Lavori PW, Dawson R. Dynamic treatment regimes: practical design considerations. Clinical Trials 2004; 1:9–20.
* Xie H, Heitjan DF. Sensitivity analysis of causal inference in a clinical trial subject to crossover. Clinical Trials 2004; 1:21–30.
* Scott RB, Farmer E, Smiton A, Tovey C, Clarke M, Carpenter K. Methodology of neuropsychological research in multicentre randomized clinical trials: a model derived from the International Subarachnoid Aneurysm Trial. Clinical Trials 2004; 1:31–39.
* Kiri A, Tonascia S, Meinert CL. Treatment effects monitoring committees and early stopping in large clinical trials. Clinical Trials 2004; 1:40–47.
* Sydes MR, Altman DG, Babiker AB, Parmar MKB, Spiegelhalter DJ, the DAMOCLES group. Reported use of data monitoring committees in the main published reports of randomized controlled trials: a cross-sectional study. Clinical Trials 2004; 1:48–59.
* Sydes MR, Spiegelhalter DJ, Altman DG, Babiker AB, Parmar MKB, the DAMOCLES group. Systematic qualitative review of the literature on data monitoring committees for randomized controlled trials. Clinical Trials 2004; 1:60–79.
* Eldridge SM, Ashby D, Feder GS, Rudnicka AR, Ukoumunne OC. Lessons for cluster randomized trials in the twenty-first century: a systematic review of trials in primary care. Clinical Trials 2004; 1:80–90.
* The Complications of Age-Related Macular Degeneration Prevention Trial study group. The Complications of Age-Related Macular Degeneration Prevention Trial (CAPT): rationale, design and methodology. Clinical Trials 2004; 1:91–107.
* Carino T, Sheingold S, Tunis S. Using clinical trials as a condition of coverage: lessons from the National Emphysema Treatment Trial. Clinical Trials 2004; 1:108–121.
* Dawson L. The Salk Polio Vaccine Trial of 1954: risks, randomization and public involvement in research. Clinical Trials 2004; 1:122–130.
* Marks HM. A conversation with Paul Meier. Clinical Trials 2004; 1:131–138.

The themes for Statistical Methods in Medical Research were:

* Part 1: Nonparametric longitudinal data analysis (pp 1–82).
* Part 2: Outcome measures in clinical trials (pp 87–176).

Of particular interest from the latter issue are the following:

* Fairclough DL. Patient reported outcomes as endpoints in medical research. Statistical Methods in Medical Research 2004; 13:115–138.
* Cook J, Drummond M, Heyse JF. Economic endpoints in clinical trials. Statistical Methods in Medical Research 2004; 13:157–176.

One special issue of Statistics in Medicine has appeared:

* Kryscio RJ, Schmitt FA (eds). Statistical methodology in Alzheimer's disease research II. Statistics in Medicine 2004; 23:163–367.

Phase I/II

We regularly review 'continual reassessment' methods in this column – and here is another, but on a slightly different theme. Here the objective is not to find the optimum dose but rather the optimum dose duration (TITE-CRM stands for the 'time-to-event' continual reassessment method). In principle, the problems are the same as those for dose finding, but different design considerations are necessary to ensure the duration of the study does not become excessive.

* Braun TM, Levine JE, Ferrara JLM. Determining a maximum tolerated cumulative dose: dose reassignment within the TITE-CRM. Controlled Clinical Trials 2003; 24:669–681.

At a slightly later stage in development, two-stage designs are common in cancer: recruit a small number of patients and then either stop (essentially for futility), or stop (and go straight to phase III), or continue recruiting to obtain a little more information. Jung et al. look at a variety of designs with this underlying theme and develop a strategy to search for the best design in any particular situation (a toy illustration of how such two-stage designs behave follows the reference below):

* Jung S, Lee T, Kim K, George SL. Admissible two-stage designs for phase II cancer clinical trials. Statistics in Medicine 2004; 23:561–569.
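
As a rough illustration of how such single-arm two-stage designs behave (this is not the admissibility criterion of Jung et al.; the design parameters and response rates below are entirely hypothetical), the following sketch computes the probability of stopping early and the overall probability of declaring the drug active from binomial probabilities:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k responses among n patients."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_stage_oc(n1, r1, n, r, p):
    """Operating characteristics of a single-arm two-stage design.

    Stage 1: treat n1 patients; stop for futility if <= r1 responses.
    Stage 2: otherwise treat a further n - n1; declare the drug active
    if the total number of responses exceeds r.
    Returns (P(early stop), P(declare active)) at true response rate p.
    """
    pet = sum(binom_pmf(k, n1, p) for k in range(r1 + 1))
    active = sum(
        binom_pmf(k, n1, p) * binom_pmf(m, n - n1, p)
        for k in range(r1 + 1, n1 + 1)
        for m in range(n - n1 + 1)
        if k + m > r
    )
    return pet, active

# Hypothetical design and response rates, for illustration only.
for p, label in [(0.20, "uninteresting drug"), (0.35, "promising drug")]:
    pet, active = two_stage_oc(n1=19, r1=4, n=54, r=15, p=p)
    print(f"{label}: P(early stop) = {pet:.3f}, P(declare active) = {active:.3f}")
```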

General study design

This paper just has to be admired for courage: a 2⁶ factorial design of single or combined antiemetic treatments. No doubt statisticians designing agricultural field trials would not consider this anything special – but in medicine it might be unique. (A short sketch of the resulting treatment structure follows the reference below.)

* Apfel CC, Korttila K, Abdalla M, Biedler A, Kranke P, Pocock SJ, Roewer N. An international multicenter protocol to assess the single and combined benefits of antiemetic interventions in a controlled clinical trial of a 2 × 2 × 2 × 2 × 2 × 2 factorial design (IMPACT). Controlled Clinical Trials 2003; 24:736–751.
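
To make the scale of such a design concrete, here is a minimal sketch enumerating the 2⁶ = 64 treatment cells; the factor names are invented placeholders, not the interventions of the IMPACT trial:

```python
from itertools import product

# Six hypothetical antiemetic interventions, each present (1) or absent (0).
factors = ["intervention_A", "intervention_B", "intervention_C",
           "intervention_D", "intervention_E", "intervention_F"]

# Every cell of the 2^6 factorial is one combination of the six factors.
cells = list(product([0, 1], repeat=len(factors)))
print(len(cells))            # 64 treatment combinations
print(cells[0], cells[-1])   # from 'none given' to 'all six combined'
```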

Non-inferiority

The decision that the significance level for a one-sided significance test should be half that of the two-sided level for showing departure from the null hypothesis in either direction is reasonably well agreed. Certainly there can be arguments against the simple splitting of alpha in half, but if we do not take this approach then we are at risk of having inconsistent inferences – particularly if we want to switch from non-inferiority to superiority. However, the following paper does look at cases where unequal splits may be made for one-sided tests and tries to defend such an approach (a small numerical illustration of the conventional one-sided α = 0.025 approach follows the reference below):

* Neuhäuser M. The choice of α for one-sided tests. Drug Information Journal 2004; 38:57–60.
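
For orientation only (this shows the conventional α = 0.025 approach rather than the unequal splits discussed by Neuhäuser), the sketch below illustrates why a one-sided test at α = 0.025 and a two-sided 95% confidence interval give the same non-inferiority conclusion; the summary data and margin are invented:

```python
from statistics import NormalDist

# Invented summary data: treatment difference (test - control), its
# standard error, and a non-inferiority margin of delta = -2.0.
diff, se, delta = -0.5, 0.9, -2.0
z = (diff - delta) / se                      # test of H0: diff <= delta

# One-sided test at alpha = 0.025 (half of the two-sided 5% level).
p_one_sided = 1 - NormalDist().cdf(z)
reject_by_test = p_one_sided < 0.025

# Equivalent criterion: lower bound of the two-sided 95% CI above delta.
lower_95 = diff - 1.959964 * se
reject_by_ci = lower_95 > delta

print(f"z = {z:.2f}, one-sided p = {p_one_sided:.4f}")
print(f"95% CI lower bound = {lower_95:.2f}")
print(f"Same conclusion either way: {reject_by_test == reject_by_ci}")
```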

Missing data

Selecting endpoints that are easy to measure is a good strategy for helping to reduce missing values – provided, of course, that the endpoint chosen is still of clinical relevance. The following paper has the 'originally chosen endpoint' for some patients and the newly selected, simpler, measurement for most. Good overlap allowed the authors to explore the potential for differences between the patients who had missing data and those who did not. In their example (acupuncture to treat migraine and chronic tension headaches) they found good agreement, so were able to use the simpler measurement to impute missing values. Each therapeutic area – perhaps each trial – may need to be considered on its own merits.

* Vickers AJ, McCarney R. Use of a single global assessment to reduce missing data in a clinical trial with follow-up at one year. Controlled Clinical Trials 2003; 24:731–735.

Multiple imputation is an increasingly popular method to deal with missing data, not least due to its implementation in software packages. Some packages that support multiple imputation assume that the joint distribution of the variables in the data set is multivariate normal. Applying this approach to possibly missing data which are not normal but, for example, binary will lead to implausible values. Rounding the imputed normal values is one of the proposed solutions. The following article cautions against this method because it can introduce bias into the parameter estimate (a small simulation sketch of the phenomenon follows the reference below):

* Horton NJ, Lipsitz SR, Parzen M. A potential for bias when rounding in multiple imputation. American Statistician 2003; 57:229–232.
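
The sketch below is a crude single-imputation illustration of the issue (not the multiple-imputation setting analysed by Horton et al.): binary data with missing values are imputed from a fitted normal distribution, the imputations are rounded to 0/1, and the resulting estimate of the proportion can drift away from the truth. All numbers are invented.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(2004)
n, true_p, miss_rate = 2000, 0.2, 0.3

# Generate binary data and delete values completely at random.
y = [1 if random.random() < true_p else 0 for _ in range(n)]
observed = [v for v in y if random.random() >= miss_rate]
n_missing = n - len(observed)

# "Normal-model" imputation: draw from N(mean, sd) of the observed data,
# then round each draw to the nearer of 0 and 1.
fitted = NormalDist(mean(observed), stdev(observed))
imputed = [1 if d >= 0.5 else 0 for d in fitted.samples(n_missing, seed=1)]

completed = observed + imputed
print(f"true proportion:          {true_p:.3f}")
print(f"observed-data proportion: {mean(observed):.3f}")
print(f"after rounded imputation: {mean(completed):.3f}")
```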

Surrogate endpoints

Validating surrogate endpoints seems a controversial topic. If done well, it is probably valuable and reliable; if done badly – as is often the case – it can be useless and potentially misleading. The following paper is very thorough, looking at the problem of using an ordinal or binary endpoint (tumour progression) as a surrogate for survival in studies for advanced colorectal cancer:

* Burzykowski T, Molenberghs G, Buyse M. The validation of surrogate end points by using data from randomized clinical trials: a case-study in advanced colorectal cancer. Journal of the Royal Statistical Society, Series A 2004; 167:103–124.

The literature on validation of surrogate endpoints tends to focus on univariate responses. The paper by Alonso et al. introduces a method for the situation where both the surrogate and the true endpoint are measured repeatedly over time. The new methodology is illustrated with meta-analysis data from schizophrenia trials.

* Alonso A, Geys H, Molenberghs G, Kenward MG, Vangeneugden T. Validation of surrogate markers in multiple randomized clinical trials with repeated measurements. Biometrical Journal 2003; 45:931–945.

Sample size calculation and recalculation

The Pearl Index is a widely used measure to describe the reliability of contraceptive methods. Guidelines on the clinical investigation of steroid contraceptives in women require that, for key clinical studies in this field, the width of the 95% confidence interval for the Pearl Index should not exceed a given margin. The following paper develops the related sample size calculation methodology (a rough back-of-the-envelope sketch follows the reference below):

* Benda N, Gerlinger C, van der Meulen EA, Endrikat J. Sample size calculation for clinical studies on the efficacy of a new contraceptive method. Biometrical Journal 2004; 46:141–150.
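
As a rough back-of-the-envelope version of this kind of calculation (a simple Poisson/normal approximation, not necessarily the method of Benda et al.), the sketch below finds the exposure, in woman-years, needed so that the approximate 95% confidence interval for the Pearl Index is no wider than a chosen margin; the anticipated Pearl Index and margin are invented planning values:

```python
from math import ceil

def woman_years_needed(anticipated_pi, max_ci_width, z=1.959964):
    """Exposure (woman-years) such that the normal-approximation 95% CI
    for the Pearl Index (pregnancies per 100 woman-years, with pregnancy
    counts treated as Poisson) has width at most max_ci_width."""
    # CI width ~ 2 * z * sqrt(100 * PI / W); solve for W.
    return ceil(4 * z**2 * 100 * anticipated_pi / max_ci_width**2)

# Invented planning values: anticipated Pearl Index 0.8, CI width <= 1.0.
w = woman_years_needed(anticipated_pi=0.8, max_ci_width=1.0)
print(f"approximately {w} woman-years of exposure required")
```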

Papers on the internal pilot study design have almost a regular place in this review. The currently available procedures base the sample size recalculation only on data from those patients who have already completed the study. However, as recruitment is usually ongoing when the re-estimation of sample size is carried out, a number of patients will not have completed the whole treatment phase but will only have reached some intermediate point of the follow-up time. The following paper shows how inclusion of these data leads to an improvement in the precision of the sample size estimate (a generic blinded recalculation sketch follows the reference below):

* Wüst K, Kieser M. Blinded sample size recalculation for normally distributed outcomes using long- and short-term data. Biometrical Journal 2003; 45:915–930.
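
For context, here is a minimal sketch of the basic blinded internal pilot idea for a two-group comparison of normally distributed outcomes: the variance is re-estimated from the pooled (blinded) interim data and the sample size is recomputed. It uses only completers' data, so it does not implement the short-term-data refinement of Wüst and Kieser; all numbers are invented.

```python
from math import ceil
from statistics import NormalDist, variance

def n_per_group(sigma2, delta, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sided test of a mean
    difference delta, assuming common variance sigma2 (normal approximation)."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2)

# Planning stage: assumed variance 4.0, clinically relevant difference 1.0.
delta = 1.0
n_planned = n_per_group(sigma2=4.0, delta=delta)

# Internal pilot: pooled (blinded) outcomes of the first completers.
blinded_pilot = [3.1, 5.4, 2.8, 6.0, 4.7, 3.9, 5.2, 6.3, 2.5, 4.1,
                 5.8, 3.3, 4.9, 6.6, 3.7, 5.0, 2.9, 4.4, 5.5, 3.6]

# The one-sample variance of the pooled data overstates the within-group
# variance by roughly delta^2 / 4 under equal allocation, so that amount
# is subtracted off (a commonly used adjusted blinded estimator).
s2_adjusted = max(variance(blinded_pilot) - delta ** 2 / 4, 0.0)

n_recalculated = n_per_group(sigma2=s2_adjusted, delta=delta)
print(f"planned n per group:      {n_planned}")
print(f"recalculated n per group: {n_recalculated}")
```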

Interim analyses

The following paper looks at flexible approaches to stopping rules in group sequential trials. This is not a new topic, but the authors show how their approaches compare (in some cases exactly and in others not) with other existing methods:

* Burington BE, Emerson SS. Flexible implementations of group sequential stopping rules using constrained boundaries. Biometrics 2003; 59:770–777.

Regulatory issues

Although it is available on the EMEA website, the recently adopted (May 2003) guidance on adjustment for baseline covariates has also been published in Statistics in Medicine:

* Committee for Proprietary Medicinal Products (CPMP). Points to consider on adjustment for baseline covariates. Statistics in Medicine 2004; 23:701–709.

The following is a paper that matches perfectly with this topic: conventional adjustment techniques treat the covariate as binary, nominal or continuous, and this may be problematic when applied to ordinal covariates. Berger et al. propose new methods to adjust for ordinal covariates that require neither dichotomizing an ordinal variable nor imposing a continuous scale where none exists.

* Berger VW, Zhou YY, Ivanova A, Tremmel L. Adjusting for ordinal covariates by inducing a partial ordering. Biometrical Journal 2004; 46:48–55.

Miscellaneous

'Large, simple trials' seems to be a commonly used phrase, and no doubt behind such a description lies a multitude of complexities and simplifications. Although traditionally the domain of academic clinical trialists, companies do sometimes get involved, and regulatory authorities sometimes consider such trials and their results. Collins et al. describe one 'large, simple trial', the DIG trial. A special issue of Controlled Clinical Trials was devoted to various aspects of it, this being an overview:

* Collins JF, Egan D, Yusuf S, Garg R, Williford WO, Geller N on behalf of the DIG Investigators. Overview of the DIG trial. Controlled Clinical Trials 2003; 24:269S–276S.

When a patient is treated with an active drug in clinical practice or in a clinical trial, an improvement may be attributed to the active chemical component of the drug, to non-specific aspects of the treatment (the 'placebo effect'), or to a combination of both. Tarpey et al. claim to provide a method to identify and separate placebo responders from true drug responders among drug-treated patients. The idea is to estimate outcome profiles for each subject, to determine representative profiles for each category of response, and to classify the patients accordingly.

* Tarpey T, Petkova E, Ogden RT. Profiling placebo responders by self-consistent partitioning of functional data. Journal of the American Statistical Association 2003; 98:850–858.

The sign test is a suitable and frequently used non-parametric procedure to assess the median difference of paired samples when the distribution is not symmetrical. In the case of 'zero' differences (ties), the sign test may not perform well: the proposed solutions either inflate the alpha level (when the ties are excluded or are assigned equally as positive and negative) or are too conservative. The following paper proposes a simple modification that performs better than the conventional methods (a sketch of the conventional tie-dropping approach follows the reference below):

* Fong DYT, Kwan CW, Lam KF, Lam KSL. Use of the sign test for the median in the presence of ties. American Statistician 2003; 57:237–240.
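
For reference, the conventional approach that the paper improves upon simply discards the zero differences and applies an exact binomial test to the remainder; a minimal sketch (with invented paired data) is given below, and it is this handling of ties that Fong et al. modify:

```python
from math import comb

def sign_test_drop_ties(differences):
    """Two-sided exact sign test that simply discards zero differences
    (the conventional approach; Fong et al. propose a modification)."""
    pos = sum(d > 0 for d in differences)
    neg = sum(d < 0 for d in differences)
    n = pos + neg                      # ties (zeros) are dropped
    k = min(pos, neg)
    # Two-sided p-value from Binomial(n, 1/2), doubling the smaller tail.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Invented paired differences, including several zeros (ties).
diffs = [1.2, -0.4, 0.0, 2.1, 0.0, 0.7, -1.5, 0.0, 0.9, 1.8, -0.2, 0.0]
print(f"two-sided sign-test p-value (ties dropped): {sign_test_drop_ties(diffs):.3f}")
```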

Pharmacoepidemiology ought to begin way back in the development phase of a drug, but, perhaps inevitably, most interest arises late in the day. Gutterman gives an overview of safety assessment of newly approved products, acknowledging the gaps that may have arisen during the clinical trials for registration. She uses healthcare claims databases to look at the impact of drugs on a broad population and discusses the pros and cons of such databases:

* Gutterman EM. Pharmacoepidemiology in safety evaluations of newly approved medications. Drug Information Journal 2004; 38:61–67.

We rarely (if ever, yet) highlight papers concerned with production, so we are pleased to close this review with such an example. This paper uses quality control of a pill production process to investigate the strategy of 100% sampling, rejecting defective items, and then sampling again. The resampling is done because it is recognized that the inspection process is not perfect and defective items may be inadvertently passed. Of course, such a procedure is not confined to the production end of drug development, so there may be lessons for many of us to learn (a small simulation sketch of the idea follows the reference below).

* Gasparini M, Nusser H, Eisele J. Repeated screening with inspection error and no false positive results with application to pharmaceutical pill production. Applied Statistics 2004; 53:51–62.
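
As a simple illustration of why repeated inspection helps when the inspection itself is imperfect (a toy simulation, not the model of Gasparini et al.; the defect and detection rates are invented):

```python
import random

random.seed(53)
n_items, p_defect, p_detect, n_passes = 100_000, 0.02, 0.80, 3

# Each item is defective with probability p_defect; each inspection pass
# catches a defective item independently with probability p_detect, and
# good items are never falsely rejected.
remaining_defectives = sum(random.random() < p_defect for _ in range(n_items))
print(f"defectives before inspection: {remaining_defectives}")

for k in range(1, n_passes + 1):
    caught = sum(random.random() < p_detect for _ in range(remaining_defectives))
    remaining_defectives -= caught
    print(f"after pass {k}: {remaining_defectives} defectives slip through")
# Expected fraction of items still defective after k passes: p_defect * (1 - p_detect)^k.
```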

Copyright © 2004 John Wiley & Sons, Ltd. Pharmaceut. Statist. 2004; 3: 143–146