
PHARMACEUTICAL STATISTICS

Pharmaceut. Statist. 2008; 7: 155–157

Published online 31 March 2008 in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/pst.325

Statisticians and evidence – mote and beam

Stephen Senn*

Department of Statistics, University of Glasgow, Glasgow, UK

*Correspondence to: Stephen Senn, Department of Statistics, University of Glasgow, Glasgow, UK. E-mail: [email protected]

A recent exchange of letters in Statistics in Medicine reminded me of a way that we pharmaceutical statisticians are failing. The correspondence was on the relationship (if any) between the size of random effect variances and treatment effects in meta-analysis [1, 2]. I argued that it was plausible to believe that there should be some sort of correlation between the two: large random effect variances were implausible if treatments were hardly effective. Others were not convinced. However, a dram of data is worth a pint of pontification and it occurred to me that an empirical investigation of the relationship between the two might answer the question.
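To make the proposal concrete before asking where the data might come from, here is a minimal sketch of the calculation in Python. The effect–variance pairs are invented placeholders: a real investigation would extract a pooled effect estimate and a random-effects variance from each of a large collection of published meta-analyses and ask whether the two move together.

```python
# Minimal sketch: is the random-effects variance (tau^2) associated with
# the size of the treatment effect across meta-analyses? The numbers below
# are purely illustrative stand-ins for estimates extracted from real reviews.
from scipy.stats import spearmanr

# Each tuple: (pooled treatment effect, estimated tau^2) -- invented data.
meta_analyses = [
    (0.05, 0.004), (0.20, 0.010), (0.45, 0.030),
    (-0.10, 0.001), (0.60, 0.090), (-0.30, 0.055),
]

effects = [abs(d) for d, _ in meta_analyses]   # magnitude of effect
tau2s = [t for _, t in meta_analyses]

# Spearman's rank correlation avoids assuming a linear relationship.
rho, p_value = spearmanr(effects, tau2s)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```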

Where would you get such data? From the ‘Cochrane Collaboration’ is the answer, and this sets me thinking about something that Archie Cochrane wrote about the medical profession [3]:

It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomized controlled trials.

Well, thanks to the Cochrane Collaboration and the evidence-based medicine movement that criticism is well on the way to being answered, but in his 1996 presidential address to the Royal Statistical Society, Adrian Smith wrote [4]:

...but what is so special about medicine... Obvious topics include education – what does work in the classroom? – and penal policy – what is effective in preventing re-offending?

These are excellent suggestions. We all agree, I am sure, that educators, politicians, the police and even the judiciary should pay attention to evidence when carrying out their duties. However, one profession is missing from the list – statisticians. How should we, as statisticians, be using evidence? Here, I mean not just helping amass evidence to enable others to do their job better but obtaining evidence to help us do our jobs better.

I often say that statisticians will apply statistics to anything and everything except what they do themselves. How else can one explain the frequently amateurish way that we design our simulations and report the results without even quoting standard errors? Or how else can one account for the fact that the two-stage approach to analysing cross-over trials survived so long? It involved screening for a ‘disease’, carry-over, whose prevalence was unknown (but probably rare) using a test of poor sensitivity and moderate specificity backed up by a very poor treatment [5]. What medic’s screening procedure would survive interrogation in the local statistics clinic with such appalling properties?
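The point can be made concrete with a few lines of simulation, something the procedure’s advocates could have run at any time. The sketch below, with invented variance components, implements the two-stage logic under a null of no treatment effect and no carry-over; the overall type I error rate should come out well above the nominal 5%.

```python
# Minimal sketch of the two-stage AB/BA cross-over procedure under the
# null: no treatment effect, no carry-over. Variance components are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims = 12, 20_000
sigma_b, sigma_w = 1.0, 0.5      # between-/within-patient SDs (assumed)

def simulate_sequence():
    """Two period measurements per patient; no treatment or carry-over effect."""
    subject = rng.normal(0.0, sigma_b, n)
    return (subject + rng.normal(0.0, sigma_w, n),
            subject + rng.normal(0.0, sigma_w, n))

rejections = 0
for _ in range(n_sims):
    ab1, ab2 = simulate_sequence()   # sequence AB
    ba1, ba2 = simulate_sequence()   # sequence BA

    # Stage 1: pre-test for carry-over on the subject totals (alpha = 0.10).
    _, p_carry = stats.ttest_ind(ab1 + ab2, ba1 + ba2)
    if p_carry < 0.10:
        # Carry-over 'detected': retreat to a first-period, parallel-group test.
        _, p_treat = stats.ttest_ind(ab1, ba1)
    else:
        # Otherwise: the usual within-patient test on period differences.
        _, p_treat = stats.ttest_ind(ab1 - ab2, ba1 - ba2)
    rejections += p_treat < 0.05

print(f"Overall type I error: {rejections / n_sims:.3f} (nominal 0.05)")
```

The inflation arises because the carry-over pre-test and the first-period fall-back test are positively correlated, so conditioning on a ‘significant’ pre-test biases the second stage.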

I recently chaired a working party on first-in-man studies for the Royal Statistical Society [6], with official representation from PSI. We identified as a problem that data on side-effects in first-in-man studies were not routinely collected and suggested that drug regulatory agencies take a lead. I wonder, however, how many working in the pharmaceutical industry would be able to answer ‘yes’ if the regulator turned round and asked, ‘well, are you collecting, storing and managing your data sensibly?’.

You might think that it is obvious that the answer is ‘yes’. How would one get drugs registered except by running lots of trials, collecting the data and submitting massive dossiers? However, consider the following. In planning a new trial, how easy is it for the statistician to interrogate the company’s database and to get standard deviations on key outcomes for previous trials, perhaps with different treatments but in the same indication? How easy is it to obtain the predictive value of various covariates? Where is the information stored as to what analyses work well in this indication with these data? How easy is it for you to demonstrate empirically the value of retaining original measurements rather than dichotomizing? Can you work out how important centre effects are? What do you know about recruitment times and can you get the data easily?

I consult for a number of pharmaceutical companies and I am often asked what I think they could be doing better. My first suggestion is always, ‘set up a planning database and start storing results from trials you have run’.
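To give the flavour of what I mean, here is a minimal sketch in Python using SQLite. Everything here – the schema, the table, the numbers – is invented for illustration; the point is simply that, with such a database in place, ‘what standard deviation should I assume for this outcome in this indication?’ becomes a one-line query rather than an exercise in archaeology.

```python
# Minimal sketch of a 'planning database'. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect("planning.db")
conn.execute("""CREATE TABLE IF NOT EXISTS trials
                (trial_id TEXT, indication TEXT, outcome TEXT,
                 sd REAL, n INTEGER)""")
conn.executemany("INSERT INTO trials VALUES (?, ?, ?, ?, ?)",
                 [("T001", "asthma", "FEV1", 0.45, 120),   # invented rows
                  ("T002", "asthma", "FEV1", 0.50, 96)])

def pooled_sd(indication, outcome):
    """Degrees-of-freedom-weighted pooled SD across past trials."""
    rows = conn.execute(
        "SELECT sd, n FROM trials WHERE indication = ? AND outcome = ?",
        (indication, outcome),
    ).fetchall()
    if not rows:
        return None
    df_total = sum(n - 1 for _, n in rows)
    return (sum((n - 1) * sd ** 2 for sd, n in rows) / df_total) ** 0.5

# Planning a new trial in the same indication? One query suffices.
print(pooled_sd("asthma", "FEV1"))
```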

Of course, I am not claiming that statisticians are never involved in these forms of investigation. For example, to return to the subject of cross-over trials, some years ago some statisticians working in bioequivalence had a radical approach to investigating carry-over [7]. They didn’t just rely on theory and they certainly didn’t use simulation: they used data. They calculated the P-values for carry-over in a series of 324 AB/BA cross-over trials and found a distribution that was close to uniform, a situation consistent with the position that carry-over doesn’t occur. (Subsequently I was fortunate enough to be able to collaborate with two of the authors in a paper in this journal [8], but I had no part in the original brilliant idea of using data.)
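Their check is easy to describe and, given the data, easy to perform. The sketch below uses simulated placeholders for the 324 real P-values but shows the form of the uniformity check; a Kolmogorov–Smirnov test is one simple choice (not necessarily the one the authors used).

```python
# Minimal sketch: under 'no carry-over', carry-over P-values across many
# AB/BA trials should look like draws from Uniform(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for the 324 real carry-over P-values.
carryover_pvalues = rng.uniform(size=324)

# Kolmogorov-Smirnov test against the Uniform(0, 1) distribution.
stat, p = stats.kstest(carryover_pvalues, "uniform")
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A large KS p-value is consistent with the position that carry-over
# does not occur.
```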

Or, to return to where this note started, Statistics in Medicine has published at least three papers providing useful empirical investigations of meta-analysis [9–11]. I am also well aware that statisticians working in the pharmaceutical industry will use standard deviations from previous trials to help plan current ones. However, despite this, I still think we could do better. For example, we have become increasingly aware that standard sample-size calculations rely on assuming that the true population standard deviation is known. The fact that it is subject to random variation, because it is estimated using limited degrees of freedom, can be allowed for [12], but how easy is it for the pharmaceutical statistician to check whether the variation from trial to trial shows ‘more than random’ heterogeneity?
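If the trial-by-trial standard deviations and sample sizes were stored, the check would be straightforward. As a minimal sketch, with invented numbers and assuming approximate normality within trials, Bartlett’s test can be computed from summary data alone: under homogeneity, the variation in observed variances should be no more than sampling error allows.

```python
# Minimal sketch: Bartlett's test for homogeneity of variances computed
# from per-trial summary data (SD, n). The trial figures are invented.
from math import log
from scipy.stats import chi2

trials = [(1.10, 40), (0.95, 36), (1.40, 50), (1.05, 44)]  # (sd, n) per trial

k = len(trials)
dfs = [n - 1 for _, n in trials]
N = sum(dfs)
s2_pooled = sum(df * sd ** 2 for (sd, _), df in zip(trials, dfs)) / N

# Bartlett's statistic; ~ chi^2 with k-1 df under homogeneity.
T = N * log(s2_pooled) - sum(df * log(sd ** 2) for (sd, _), df in zip(trials, dfs))
C = 1 + (sum(1 / df for df in dfs) - 1 / N) / (3 * (k - 1))
T /= C

p_value = chi2.sf(T, k - 1)
print(f"Bartlett T = {T:.2f}, p = {p_value:.3f}")
# A small p-value suggests 'more than random' trial-to-trial heterogeneity.
```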

We are supposed to be entering the era of Bayesian statistics. One of the often-repeated advantages of the Bayesian approach is that it can take account of all sources of information. It seems to me that it is about time that we as a profession started doing something about collecting this information. If we don’t, a dreadful fate may await us and the future may be with the data-miners. Hence, I would like to pose a question to our profession, but specifically to those in managerial positions: ‘What are you doing to make sure that data are being collected to help statisticians do their jobs better?’

ACKNOWLEDGEMENTS

I thank the referees for their helpful comments. I bear full responsibility for the views expressed.

REFERENCES

1. Lambert PC, Sutton AJ, Burton PR, Abrams KR, Jones DR. Comments on ‘Trying to be precise about vagueness’ by Stephen Senn. Statistics in Medicine 2007; 26:1417–1430.

2. Senn S. Authors’ reply. Statistics in Medicine 2007; DOI: 10.1002/sim.3067.

3. Cochrane AL. 1931–1971: a critical review, with particular reference to the medical profession. Medicine for the Year 2000. Office of Health Economics: London, 1979; 1–11.

4. Smith AFM. Mad cows and ecstasy: chance and choice in an evidence based society. Journal of the Royal Statistical Society Series A – Statistics in Society 1996; 159:367–383.


5. Senn SJ. Cross-over trials in Statistics in Medicine: the first ‘25’ years. Statistics in Medicine 2006; 25:3430–3442.

6. Working Party on Statistical Issues in First-in-Man Studies. Statistical issues in first-in-man studies. Journal of the Royal Statistical Society, Series A 2007; 170:517–579.

7. D’Angelo G, Potvin D, Turgeon J. Carryover effects in bioequivalence studies. Journal of Biopharmaceutical Statistics 2001; 11:27–36.

8. Senn SJ, D’Angelo G, Potvin D. Carry-over in cross-over trials in bioequivalence: theoretical concerns and empirical evidence. Pharmaceutical Statistics 2004; 3:133–142.

9. Schmid CH, Lau J, McIntosh MW, Cappelleri JC. An empirical study of the effect of the control rate as a predictor of treatment efficacy in meta-analysis of clinical trials. Statistics in Medicine 1998; 17:1923–1942.

10. Deeks JJ. Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Statistics in Medicine 2002; 21:1575–1600.

11. Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity and statistical significance in meta-analysis: an empirical study of 125 meta-analyses. Statistics in Medicine 2000; 19:1707–1728.

12. Julious SA. Designing clinical trials with uncertain estimates of variability. Ph.D. thesis, University College London: London, 2006.
