PHARMACEUTICAL STATISTICS

Pharmaceut. Statist. 2005; 4: 77–79

Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/pst.159

Literature Review September–December 2004

Simon Day1,* and Meinhard Kieser2

1 Medicines and Healthcare products Regulatory Agency, Room 13-205, Market Towers, 1 Nine Elms Lane, London SW8 5NQ, UK
2 Department of Biometry, Dr Willmar Schwabe Pharmaceuticals, Karlsruhe, Germany

*Correspondence to: Simon Day, Medicines and Healthcare products Regulatory Agency, Room 13-205, Market Towers, 1 Nine Elms Lane, London SW8 5NQ, UK. E-mail: [email protected]

INTRODUCTION

This review covers the following journals received during the period from the middle of September 2004 to the end of December 2004:

* Biometrical Journal, volume 46, part 5.
* Biometrika, volume 91, part 3.
* Biostatistics, volume 5, part 4.
* Clinical Trials, volume 1, part 5.
* Communications in Statistics – Simulation and Computation, volume 33, part 3.
* Communications in Statistics – Theory and Methods, volume 33, parts 7–10.
* Computational Statistics & Data Analysis, volume 47, parts 1–4.
* Drug Information Journal, volume 38, part 4.
* Journal of Biopharmaceutical Statistics, volume 14, parts 3, 4.
* Journal of the American Statistical Association, volume 99, part 3.
* Journal of the Royal Statistical Society, Series A, volume 167, part 4.
* Statistics in Medicine, volume 23, parts 20–24.
* Statistical Methods in Medical Research, volume 13, parts 5, 6.
* The American Statistician, volume 58, part 4.

SELECTED HIGHLIGHTS FROM THE LITERATURE

The themes of Statistical Methods in Medical Research were:

* Part 5: Cluster analysis (pages 343–408).
* Part 6: Developing and comparing population models for early detection of cancer (pages 419–538).

Part 24 of Statistics in Medicine contains papers from the joint meeting of the International Society for Clinical Biostatistics and the Society for Clinical Trials, held in London in July 2003. Two papers are worthy of mention here. The first is by Garcia et al., who have reviewed the literature on published cross-over trials and consider both the relative efficiency of a cross-over compared to a parallel group design and the reporting standards of cross-over trials compared to the CONSORT statement. The second paper is by Stephen Senn – although nothing to do with cross-over studies. This was the President's invited keynote lecture. It is full of insight, history and general interest:

* Garcia R, Benet M, Arnau C, Cobo E. Efficiency of the cross-over design: an empirical investigation. Statistics in Medicine 2004; 23:3773–3780.
* Senn S. Added values. Controversies concerning randomization and additivity in clinical trials. Statistics in Medicine 2004; 23:3729–3753.

Phase I

The paper by Whitehead et al. evaluates Bayesian procedures for dose escalation studies with two binary responses. In a comprehensive simulation study, various designs are considered, differing in prior opinion, criterion for optimal allocation of doses, policy for early stopping, and dose schedule. The results lead to general recommendations for the design of such studies. A much-simplified sketch of the Bayesian updating that drives such designs follows the reference.

* Whitehead J, Zhou Y, Stevens J, Blakey G. An evaluation of a Bayesian method of dose escalation based on bivariate binary responses. Journal of Biopharmaceutical Statistics 2004; 14:969–983.
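
To make the flavour of such Bayesian dose-escalation designs concrete, here is a minimal sketch of a grid-based posterior update for a one-parameter dose–toxicity model. It is deliberately simplified to a single binary toxicity response (Whitehead et al. model two binary responses jointly), and all numbers – doses, prior, target toxicity level – are hypothetical.

```python
import numpy as np

# Toy Bayesian dose-escalation sketch (hypothetical numbers throughout).
# Model: P(toxicity at dose d) = 1 / (1 + exp(-(a + log d))), with a single
# unknown intercept a whose posterior is tracked on a discrete grid.

doses = np.array([1.0, 2.0, 4.0, 8.0])   # hypothetical dose levels
target = 0.20                             # target toxicity probability

grid = np.linspace(-6.0, 3.0, 901)        # grid of values for a
prior = np.exp(-0.5 * (grid / 2.0) ** 2)  # vague N(0, 2^2) prior, unnormalized
prior /= prior.sum()

def tox_prob(a, d):
    return 1.0 / (1.0 + np.exp(-(a + np.log(d))))

def update(posterior, dose, toxic):
    """Multiply the posterior by the Bernoulli likelihood of one observation."""
    lik = tox_prob(grid, dose)
    lik = lik if toxic else 1.0 - lik
    posterior = posterior * lik
    return posterior / posterior.sum()

def recommend(posterior):
    """Pick the dose whose posterior mean toxicity is closest to target."""
    est = np.array([(posterior * tox_prob(grid, d)).sum() for d in doses])
    return doses[np.argmin(np.abs(est - target))]

post = prior.copy()
for dose, toxic in [(1.0, 0), (2.0, 0), (4.0, 1)]:  # hypothetical cohort data
    post = update(post, dose, toxic)
    print(f"after dose {dose}, toxic={toxic}: recommend {recommend(post)}")
```

Real designs of this kind add, as the paper evaluates, the choice of prior, the allocation criterion and a formal stopping policy on top of this basic update.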

Multiplicity

The following two papers consider active control trials where the goal is to show non-inferiority or equivalence for multiple endpoints. Kong et al. derive a test to demonstrate non-inferiority or equivalence on each component of the endpoint vector and conduct simulation studies to investigate its performance. Tamhane and Logan address a slightly different test problem, namely to show that the experimental treatment is not inferior on any of the endpoints and superior on at least one (or some specified number of) endpoint(s). They propose a new test and compare it with existing methods with respect to Type I error control and power. The new test is especially convenient if, after rejection of the global null hypothesis, follow-up decisions are to be made on the individual endpoint hypotheses, which is usually the case. A generic sketch of the 'non-inferior on every endpoint' testing problem follows the references.

* Kong L, Kohberger RC, Koch GG. Type I error and power in noninferiority/equivalence trials with correlated multiple endpoints: an example from vaccine development trials. Journal of Biopharmaceutical Statistics 2004; 14:893–907.
* Tamhane AC, Logan BR. A superiority-equivalence approach to one-sided tests on multiple endpoints in clinical trials. Biometrika 2004; 91:715–727.
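
As a point of reference for these papers, the classical intersection–union approach tests non-inferiority separately on each endpoint and declares overall non-inferiority only if every per-endpoint test rejects; each test can then be run at level alpha without multiplicity adjustment. The sketch below is a generic illustration of that principle, not the specific tests of Kong et al. or Tamhane and Logan; the margins and data are hypothetical.

```python
import numpy as np
from scipy import stats

def noninferiority_iut(x, y, margins, alpha=0.025):
    """Intersection-union test: the experimental arm (x) is declared
    non-inferior to control (y) overall only if a one-sided shifted
    t-test rejects on every endpoint. x, y: (n, k) arrays; margins: (k,).
    Higher values are assumed better on every endpoint."""
    k = x.shape[1]
    pvals = []
    for j in range(k):
        # H0_j: mean_x - mean_y <= -margin_j  vs  H1_j: > -margin_j
        t, p = stats.ttest_ind(x[:, j] + margins[j], y[:, j],
                               alternative='greater')
        pvals.append(p)
    reject_all = all(p < alpha for p in pvals)
    return reject_all, pvals

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(100, 3))   # hypothetical experimental arm
y = rng.normal(0.0, 1.0, size=(100, 3))   # hypothetical control arm
print(noninferiority_iut(x, y, margins=np.array([0.4, 0.4, 0.4])))
```

Tamhane and Logan's problem then layers superiority claims on top of this, which is where the multiplicity questions they study arise.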

Sample size calculation and recalculation

There is often debate on whether or not 'responder analyses' are an adequate method to address the question of clinical relevance; Stephen Senn and John Lewis exchanged the arguments for and against in recent issues of this journal. However one answers this question, such additional analyses are often required. To guard against any unpleasant surprises in the analysis, it is essential that the study has sufficient power not only for the significance test based on the primary variable but also for fulfilment of the responder criterion defined for the dichotomized variable. The following paper presents sample size calculation methods for this situation, covering one or multiple endpoints that may or may not be normally distributed. A toy version of the underlying power calculation is sketched after the reference.

* Kieser M, Röhmel J, Friede T. Power and sample size determination when assessing the clinical relevance of trial results by 'responder analysis'. Statistics in Medicine 2004; 23:3287–3305.
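
The core arithmetic can be illustrated directly: if the endpoint is normal, a responder cutoff turns each arm's mean and standard deviation into a response probability, and power for the responder comparison follows from a standard two-proportion calculation. This is a minimal sketch under hypothetical planning values, not the (more general) methods of Kieser et al.

```python
import numpy as np
from scipy import stats

# Hypothetical planning values for a normally distributed endpoint.
mu_t, mu_c, sd = 1.0, 0.5, 2.0   # treatment mean, control mean, common SD
cutoff = 0.0                     # 'responder' means endpoint > cutoff
n = 200                          # per-group sample size
alpha = 0.05                     # two-sided

# Dichotomization: response probabilities implied by the normal model.
p_t = 1 - stats.norm.cdf(cutoff, loc=mu_t, scale=sd)
p_c = 1 - stats.norm.cdf(cutoff, loc=mu_c, scale=sd)

# Approximate power of the two-sample z-test for proportions.
z_a = stats.norm.ppf(1 - alpha / 2)
se = np.sqrt(p_t * (1 - p_t) / n + p_c * (1 - p_c) / n)
power_resp = 1 - stats.norm.cdf(z_a - abs(p_t - p_c) / se)

# Power of the two-sample z-test on the continuous endpoint itself.
power_cont = 1 - stats.norm.cdf(z_a - abs(mu_t - mu_c) / (sd * np.sqrt(2 / n)))

print(f"responder rates {p_t:.2f} vs {p_c:.2f}; "
      f"power: responder {power_resp:.2f}, continuous {power_cont:.2f}")
```

Dichotomization discards information, so the responder test is typically the less powerful of the two; powering the study only on the continuous test is exactly the 'unpleasant surprise' the paper guards against.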

Ruvuna addresses the potential loss in power that may occur in multicentre trials due to unequal centre sizes. For the unweighted (Type III) analysis, a measure of imbalance is derived; employing this coefficient in the sample size calculation prevents underestimation of the required number of subjects in the case of extreme centre size imbalances. A small illustration of why imbalance inflates the variance of the unweighted analysis follows the reference.

* Ruvuna F. Unequal center sizes, sample size, and power in multicenter clinical trials. Drug Information Journal 2004; 38:387–394.
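
To see the mechanism, note that the unweighted (Type III) estimator averages the centre-specific treatment differences with equal weight, so small centres inflate its variance. The sketch below computes that variance inflation relative to a balanced design with the same total sample size; it is a generic illustration, not Ruvuna's specific imbalance coefficient, and the centre sizes are hypothetical.

```python
import numpy as np

def variance_inflation(centre_sizes):
    """Variance of the equal-centre-weights treatment-difference estimator,
    relative to a balanced design with the same total sample size.
    centre_sizes: per-arm sample size in each centre (1:1 within centre).
    The ratio equals arithmetic mean / harmonic mean of the centre sizes."""
    n = np.asarray(centre_sizes, dtype=float)
    k = len(n)
    return (n.sum() / k**2) * np.sum(1.0 / n)

print(variance_inflation([20, 20, 20, 20]))  # balanced: 1.0
print(variance_inflation([50, 20, 5, 5]))    # hypothetical imbalance: 2.35
```

An inflation of 2.35 means the unweighted analysis needs roughly 2.35 times the balanced sample size for the same power, which is the kind of correction an imbalance coefficient can feed into the sample size calculation.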

The following paper could mark the beginning of a wonderful friendship between statisticians and data managers. Based on statistical considerations, sample size formulae are derived that determine the number of items to be checked during a database audit to ensure that the original data are converted with acceptable quality. Simple random sampling (item selection without any restriction) and cluster sampling (all items from selected case record form books) are discussed, and sample size tables and SAS programs, respectively, are given. Implementation will obviate the need for data managers to carry out a 100% audit of the database and will provide high-quality data for the statisticians' analyses – a classic win–win situation! A generic version of the simple-random-sampling calculation is sketched after the reference.

* Zhang P. Statistical issues in clinical trial data audit. Drug Information Journal 2004; 38:371–386.
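
One standard way to frame the simple-random-sampling case is as an acceptance-sampling problem: how many items must be checked so that, if the true error rate exceeded some threshold, at least one error would be found with high probability? The sketch below does that calculation; it is a generic textbook formulation, not necessarily Zhang's exact formulae, and the thresholds are hypothetical.

```python
import math

def audit_sample_size(p_threshold, confidence):
    """Smallest number of randomly sampled items such that, if the true
    error rate is at least p_threshold, at least one error is found with
    the stated probability. Assumes a large database, so the binomial
    approximation to sampling without replacement holds."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_threshold))

# Hypothetical quality requirement: detect an error rate of 1% or worse
# with 95% probability.
print(audit_sample_size(0.01, 0.95))  # 299 items
```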

Study design

Raab et al. consider the timing (and frequency) of follow-up observations for estimating median and mean survival times under interval censoring. The focus is on equally spaced (in time) observation points. If one of the observation times is planned around the anticipated median, this may be optimal. If the total follow-up time is kept constant but the number of sampling times is changed, with the new sampling times remaining equally spaced, then the point near the anticipated median will move and the efficiency of the estimated median may be reduced. This can happen even if the number of sampling times increases.

* Raab GM, Davies JA, Salter AB. Designing follow-up intervals. Statistics in Medicine 2004; 23:3125–3137.

Data analysis issues

Fedorov et al. investigate the performance of three fixed effect estimators of the treatment difference in multicentre trials under random enrolment. In their simulation study, five different enrolment schemes are considered that mimic a wide spectrum of situations encountered in practice. A comforting result of their work is that the estimator derived from the simplest model works well (and even better than the more complex ones) in many situations. The two extremes – pooling across centres versus weighting centres equally – are sketched after the reference.

* Fedorov V, Jones B, Jones M, Zhigljavsky A. Estimation of the treatment difference in multicenter trials. Journal of Biopharmaceutical Statistics 2004; 14:1037–1063.
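
For intuition, the simplest fixed effect estimator pools all patients and takes the difference in arm means, while a centre-stratified alternative averages the within-centre differences with equal weight. The sketch below computes both on simulated data; it is a generic illustration of the two extremes, not Fedorov et al.'s exact estimators or enrolment schemes, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_trial(centre_sizes, delta=0.5, centre_sd=1.0, noise_sd=1.0):
    """One multicentre trial: each centre has its own baseline effect;
    patients are split 1:1 between arms within each centre."""
    rows = []
    for c, n in enumerate(centre_sizes):
        base = rng.normal(0.0, centre_sd)
        for arm in (0, 1):
            y = base + delta * arm + rng.normal(0.0, noise_sd, size=n)
            rows += [(c, arm, v) for v in y]
    return np.array(rows)  # columns: centre, arm, response

def pooled_estimate(data):
    """Ignore centres: difference of overall arm means."""
    return data[data[:, 1] == 1, 2].mean() - data[data[:, 1] == 0, 2].mean()

def stratified_estimate(data):
    """Equal-weight average of within-centre treatment differences."""
    diffs = []
    for c in np.unique(data[:, 0]):
        d = data[data[:, 0] == c]
        diffs.append(d[d[:, 1] == 1, 2].mean() - d[d[:, 1] == 0, 2].mean())
    return np.mean(diffs)

data = simulate_trial([30, 10, 5, 5])  # hypothetical unequal centre sizes
print(pooled_estimate(data), stratified_estimate(data))
```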

Darken reviews methods to evaluate the validity of least squares analysis with the general linear model. Robustness with respect to these assumptions is investigated in a simulation study, and regulatory issues are discussed. A minimal residual-diagnostics example follows the reference.

* Darken PF. Evaluating assumptions for least squares analysis using the general linear model: a guide for the pharmaceutical industry statistician. Journal of Biopharmaceutical Statistics 2004; 14:803–816.
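
In practice, the first-line checks of least squares assumptions are on the residuals: normality and (roughly) constant variance. Here is a minimal sketch using ordinary least squares via numpy and two standard tests from scipy; it illustrates the kind of checks at issue rather than Darken's specific recommendations, and the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical dataset: one covariate plus a two-level treatment indicator.
n = 120
x = rng.uniform(0, 10, n)
treat = rng.integers(0, 2, n)
y = 2.0 + 0.5 * x + 1.0 * treat + rng.normal(0, 1, n)

# Ordinary least squares fit of the general linear model y = Xb + e.
X = np.column_stack([np.ones(n), x, treat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Normality of residuals: Shapiro-Wilk test.
print("Shapiro-Wilk p =", stats.shapiro(resid).pvalue)

# Homoscedasticity across treatment groups: Levene's test on residuals.
print("Levene p =", stats.levene(resid[treat == 0], resid[treat == 1]).pvalue)
```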

The combination of p-values from independent tests is an issue in, among other areas, meta-analysis, multiple testing and group sequential adaptive designs. Loughin compares six such methods with respect to their rejection regions and power. The results are summarized in recommendations concerning which method to use in which situation. Two classical combination rules are sketched after the reference.

* Loughin TM. A systematic comparison of methods for combining p-values from independent tests. Computational Statistics & Data Analysis 2004; 47:467–485.
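
Two of the best-known combination rules are Fisher's (the sum of -2 log p referred to a chi-squared distribution) and Stouffer's (a sum of normal quantiles). A minimal sketch of both, offered as standard examples of the family of methods being compared rather than as Loughin's specific selection:

```python
import numpy as np
from scipy import stats

def fisher_combined(pvals):
    """Fisher: -2 * sum(log p) ~ chi-squared with 2k df under the global null."""
    p = np.asarray(pvals)
    x = -2.0 * np.sum(np.log(p))
    return stats.chi2.sf(x, df=2 * len(p))

def stouffer_combined(pvals):
    """Stouffer: sum of z-scores divided by sqrt(k) is standard normal."""
    p = np.asarray(pvals)
    z = stats.norm.isf(p)  # upper-tail normal quantiles
    return stats.norm.sf(z.sum() / np.sqrt(len(p)))

pvals = [0.04, 0.10, 0.03]  # hypothetical one-sided p-values
print(fisher_combined(pvals), stouffer_combined(pvals))
```

For routine use, scipy.stats.combine_pvalues implements several such rules directly.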

When using random effects in modelling, the distribution of the effects is often chosen for computational convenience. Agresti et al. show three cases (random effects models for proportions and for the log-odds ratio, and a frailty model for survival) in which misspecification of the random effects distribution can reduce efficiency considerably. Proposals that describe how to address misspecification are reviewed.

* Agresti A, Caffo B, Ohman-Strickland P. Examples in which misspecification of a random effects distribution reduces efficiency, and possible remedies. Computational Statistics & Data Analysis 2004; 47:639–653.

Paul Meier et al. examine the performance of the Kaplan–Meier survival estimator relative to parametric models. As reflected in the article's title, the results indicate that the price to be paid when using the nonparametric approach is generally small, if any. Both kinds of estimator are sketched after the reference.

* Meier P, Karrison T, Chappell R, Xie H. The price of Kaplan–Meier. Journal of the American Statistical Association 2004; 99:890–896.
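
The comparison at issue can be reproduced in a few lines: the Kaplan–Meier product-limit estimator against the survival curve from a fitted exponential model. The sketch below hand-codes both for right-censored data; the data are hypothetical and the exponential model stands in for the parametric alternatives considered in the paper.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimator for untied times: S(t) steps down by the
    factor (1 - 1/n_i) at each observed event, with n_i still at risk."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    n = len(t)
    surv, s = [], 1.0
    for i in range(n):
        if e[i]:                       # step down only at observed events
            s *= 1.0 - 1.0 / (n - i)   # n - i subjects still at risk
        surv.append((t[i], s))
    return surv

def exponential_survival(times, events, grid):
    """MLE of the exponential rate: events / total follow-up time."""
    lam = np.sum(events) / np.sum(times)
    return np.exp(-lam * np.asarray(grid))

# Hypothetical right-censored data (event=1, censored=0).
times  = [2.0, 3.5, 4.0, 5.0, 7.0, 8.0, 11.0, 12.0]
events = [1,   1,   0,   1,   0,   1,   1,    0]
print(kaplan_meier(times, events))
print(exponential_survival(times, events, grid=[2, 5, 8, 12]))
```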

Lin and Wang propose a new test for the comparison of two survival curves. This test has greater power than the commonly used log-rank and Wilcoxon tests when the two groups do not have proportional hazards and the curves are close at the beginning before separating later.

* Lin X, Wang H. A new testing approach for comparing the overall homogeneity of survival curves. Biometrical Journal 2004; 46:489–496.

In contrast to asymptotic tests, unconditional exact tests comparing two independent proportions never exceed the preassigned nominal level. Furthermore, these tests are generally more powerful than conditional exact tests. Skipka et al. compare various unconditional exact tests for proving non-inferiority or superiority with respect to power, size and computational time. The SAS code for all tests can be obtained from the authors upon request, which may further stimulate the application of these tests in two-armed trials with binary outcomes. A brief usage example of one readily available unconditional exact test follows the reference.

* Skipka G, Munk A, Freitag G. Unconditional exact tests for the difference of binomial probabilities – contrasted and compared. Computational Statistics & Data Analysis 2004; 47:757–773.
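
For readers who want to try an unconditional exact test directly, scipy ships Barnard's test for 2x2 tables (scipy.stats.barnard_exact, available from scipy 1.7 onwards). Note that it covers the superiority (equality null) case only; the shifted null hypotheses needed for non-inferiority, as studied by Skipka et al., require specialised code such as the authors' SAS programs. The table below is hypothetical.

```python
from scipy.stats import barnard_exact

# Hypothetical 2x2 table: rows = treatment/control, columns = success/failure.
table = [[24, 6],
         [16, 14]]

# Barnard's unconditional exact test; two-sided by default,
# with 'less' and 'greater' also available via the alternative argument.
res = barnard_exact(table)
print(res.statistic, res.pvalue)
```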

Should the full analysis set contain all randomized subjects? Eriwan Radio would say: 'In principle, yes'. However, as we know, decisions are often difficult in actual trials. Stewart gives an overview of the problems surrounding intention-to-treat and per protocol analyses and proposes a general approach for selecting the full analysis set.

* Stewart WH. Basing intention-to-treat on cause and effect criteria. Drug Information Journal 2004; 38:361–369.

Miscellaneous

The American Statistician contains some interesting statistical computing software reviews covering WinBUGS, R, and software for analysing correlated survival data.

* Cowles MK. Review of WinBUGS 1.4. The American Statistician 2004; 58:330–336.
* Kelly PJ. A review of software packages for analyzing correlated survival data. The American Statistician 2004; 58:337–342.
* Horton NJ, Brown ER, Qian L. Use of R as a toolbox for mathematical statistics exploration. The American Statistician 2004; 58:343–357.

The genomics revolution is radiating ever further into the world of biostatisticians. The Journal of Biopharmaceutical Statistics contains 10 papers dealing with the analysis of microarray data. Authors from a regulatory agency, universities and industry highlight the areas of normalization, gene identification, and data integration from their perspectives.

* Chen JJ. Guest editorial: Microarrays in pharmacogenomics. Journal of Biopharmaceutical Statistics 2004; 14:535–537 (10 articles on this topic on pages 539–721).

A personal perspective on trials in the genomic era is given by Richard Simon. The title is a bit of a pun, ostensibly being an agenda for the journal Clinical Trials, but it is obviously as much an agenda for clinical trials – do companies, regulators and patients want wide inclusion criteria and broad applicability, or narrow inclusion criteria and a narrow market? It is not a simple choice.

* Simon RM. An agenda for Clinical Trials: clinical trials in the genomic era. Clinical Trials 2004; 1:468–470.
