
Knowledge Translation Consensus Conference: Research Methods

Scott Compton, PhD, Eddy Lang, MD, Thomas M. Richardson, PhD, MBA, Erik Hess, MD, Jeffrey Green, MD, William Meurer, MD, Rachel Stanley, MD, MHSA, Robert Dunne, MD, Shannon Scott-Findlay, RN, PhD, Rahul K. Khare, MD, Jeremy Grimshaw, MD, PhD

Abstract

The authors facilitated a workshop session during the 2007 Academic Emergency Medicine Consensus Conference to address the specific research methodologies most suitable for studies investigating the effectiveness of knowledge translation interventions. Breakout session discussions, recommendations, and examples of findings in emergency medicine are presented.

ACADEMIC EMERGENCY MEDICINE 2007; 14:991–995. © 2007 by the Society for Academic Emergency Medicine

Keywords: knowledge translation, consensus proceedings

From the Department of Emergency Medicine, Wayne State University (SC), Detroit, MI; Emergency Department, Sir Mortimer B. Davis Jewish General Hospital, McGill University (EL), Montreal, Quebec, Canada; Department of Emergency Medicine, University of Rochester Medical Center (TMR), Rochester, NY; Department of Emergency Medicine, University of Ottawa (EH), Ottawa, Ontario, Canada; Department of Emergency Medicine, New York Hospital Queens (JG), Flushing, NY; Department of Emergency Medicine (RS) and Departments of Emergency Medicine and Neurology (WM), St. Joseph Mercy Hospital (RD), University of Michigan, Ann Arbor, MI; Department of Pediatrics & Centre for Health Promotion Studies, University of Alberta (SS-F), Edmonton, Alberta, Canada; Department of Emergency Medicine, Feinberg School of Medicine, Northwestern University (RKK), Chicago, IL; and Ottawa Health Research Institute (JG), Ottawa, Ontario, Canada.

Received June 7, 2007; revision received June 21, 2007; accepted June 21, 2007.

This is a proceeding from a workshop session of the 2007 Academic Emergency Medicine Consensus Conference, "Knowledge Translation in Emergency Medicine: Establishing a Research Agenda and Guide Map for Evidence Uptake," Chicago, IL, May 15, 2007.

Contact for correspondence and reprints: Scott Compton, PhD; e-mail: [email protected].

© 2007 by the Society for Academic Emergency Medicine. ISSN 1069-6563. doi: 10.1197/j.aem.2007.06.031

The transfer or translation of knowledge derived from well-designed, quality studies has long been a challenge. Understanding how and why knowledge is transferred is vital to getting high-level evidence to the patients in the emergency department. However, the specific study of interventions designed to facilitate knowledge translation (KT) has not been a main focal point of emergency medicine (EM) researchers in the past. Designing a study of knowledge transfer is not within the experience and training of most EM investigators and deserves special consideration.

A variety of study designs can be used to evaluate KT interventions, such as guideline implementation strategies. For this reason, the Society for Academic Emergency Medicine's peer-reviewed journal, Academic Emergency Medicine, sponsored a consensus conference to address KT in EM, specifically establishing a research agenda and guide map for evidence uptake. This occurred on May 15, 2007, in Chicago, IL. This workshop session focused on the evaluation of methodologies in KT research and, in particular, which methodologies should be used to identify the most effective strategies for bridging the research-to-practice gap in EM. Before the consensus conference, an Internet-based communication tool (Google Group) was used to discuss potential content and set the agenda for this breakout session. The authors then facilitated a workshop session during this conference to address the scientific methods available to investigate KT. Breakout session discussions, recommendations, and examples of findings in EM are discussed in the following sections.

OBSERVATIONAL (DESCRIPTIVE) STUDIES

The major study designs used in observational studies are case-control, cross-sectional, and cohort.1 Observational studies have a variety of strengths. They are typically less costly than experimental study designs, can generate hypotheses for further testing, and provide insight into the feasibility and design of randomized controlled trials (RCTs).2 Observational studies are useful when logistical and ethical constraints make conducting more rigorous investigations difficult or infeasible. They are especially useful when RCTs are not available and oftentimes can help to demonstrate the "real world" effectiveness of therapies outside the highly controlled setting of an RCT.3 In the context of KT, observational studies may be useful in providing additional insight into the process of behavioral change.2 Moreover, observational data are often needed to establish baseline rates of adherence to evidence-based practices and the need for KT to narrow the research-to-practice gap.

Although associations measured in observational studies may reflect causal relationships, their magnitude and direction may be influenced by bias, confounding, and chance.1 Selection bias is one of the most important limitations. When patients and providers self-select treatments, the treatment and control groups may differ in ways other than the interventions being compared, leading to biased estimates of treatment effects.2 Statistical techniques such as analysis of covariance and propensity score methods are often necessary to create an analysis that resembles the one that would have occurred had the patients been randomized.3,4 Despite these methods, residual confounding of the treatment effect often remains.5
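As a toy illustration of why such adjustment matters, the sketch below simulates a hypothetical observational KT dataset in which sicker patients are more likely to receive the guideline-based treatment; the scenario, variable names, and all numbers are invented for illustration. A naive group comparison overstates the treatment effect, while inverse-probability weighting by the propensity score (here estimated by simple stratification on the single binary confounder) approximately recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical confounder (e.g., high-acuity patient) influences both
# receipt of the guideline-based treatment and the outcome.
severe = rng.binomial(1, 0.4, n)
treated = rng.binomial(1, np.where(severe == 1, 0.7, 0.3))

# True treatment effect on outcome probability is +0.10 in both strata.
p_outcome = 0.2 + 0.3 * severe + 0.10 * treated
outcome = rng.binomial(1, p_outcome)

# Naive comparison is confounded: the treated group is sicker.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity score P(treated | severe), estimated by stratification.
ps = np.where(severe == 1, treated[severe == 1].mean(),
              treated[severe == 0].mean())

# Inverse-probability weighting targets the marginal treatment effect.
w = treated / ps + (1 - treated) / (1 - ps)
ipw = (np.sum(w * treated * outcome) / np.sum(w * treated)
       - np.sum(w * (1 - treated) * outcome) / np.sum(w * (1 - treated)))
```

With a single measured confounder the weighted estimate lands near the true +0.10; in real observational data, unmeasured confounders leave the residual bias the text warns about.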

Recommendation

Observational research is foundational to establishing the research-to-practice gap and the need for KT. It is useful for hypothesis generation and establishing the need for more rigorous evaluations. Selection bias and confounding associated with observational study designs are substantial methodological limitations. Investigators conducting observational research for KT should be cautious when conducting statistical analyses and ensure that data on the most important patient characteristics are collected. When feasible, more rigorous study designs should be used.

QUASI-EXPERIMENTAL STUDY DESIGNS

Opportunities often arise within individual organizations to evaluate evidence-based improvement strategies before and after implementation. The evaluation of these strategies is tempting for at least two reasons: 1) to assess whether the strategy is having the intended effect locally and 2) to assess factors that are associated with the intended effect that may generalize externally to a broad variety of locations and environments. The development of a methodological approach, therefore, should serve both purposes equally well. Unfortunately, though, this pretest/posttest intervention style does not necessarily include random selection or assignment of participants and has, therefore, historically been referred to as "quasi-experimental."6 Although there are several logistical features that make this design attractive, there are significant limitations to consider when interpreting results of this methodology.

This study design does not control for important factors unrelated to the intervention itself, such as events that take place simultaneously with the intervention, or at least between premeasure and postmeasure, or for the maturation of the participants between measures. In addition, this design is particularly susceptible to the Hawthorne effect, which inflates the apparent effect of an intervention because individuals change their behavior when they know they are being observed. For example, efforts to improve care of a given disease entity, through a specific intervention, are often presented in a pre/post format. However, most such programs are designed to implement guidelines that are being promulgated by a variety of sources. The results of these studies are inherently difficult to interpret, which calls into question their generalizability to other settings.7

There are several approaches possible to improve upon the single-group pretest/posttest study design. First, when it is impossible to randomize an intervention or to identify an appropriate comparison group, one way to improve the interpretability of the effect of the intervention is to obtain several preintervention measurements and an equivalent number of postintervention measurements. This study design is referred to as an interrupted time series design. Essentially, the value added by these additional measurements is the ability to identify trends in the outcome measure that would go unnoticed in a single pretest/posttest design. For example, there may be an apparent increase in the outcome measure from the pre to post measurements (Figure 1); however, unbeknownst to the researcher, the change is merely part of a trend, represented by the dotted line. A time series study would allow for detection of this trend.
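The hazard can be made concrete with a small simulation; the adherence numbers below are invented, and the scenario assumes a steady secular trend with no true intervention effect at all. A naive pre/post comparison finds a large "improvement," while a segmented (interrupted time series) regression correctly attributes the change to the underlying trend:

```python
import numpy as np

# Hypothetical monthly guideline-adherence rates (%) with a steady secular
# trend and NO true effect of the intervention introduced at month 12.
rng = np.random.default_rng(1)
months = np.arange(24)
adherence = 50 + 0.8 * months + rng.normal(0, 0.5, 24)
post = (months >= 12).astype(float)

# A single pre/post comparison misreads the trend as an intervention effect.
pre_post_diff = adherence[post == 1].mean() - adherence[post == 0].mean()

# Segmented regression: intercept, baseline slope, level change at the
# intervention, and change in slope after the intervention.
time_since = np.where(post == 1, months - 12, 0.0)
X = np.column_stack([np.ones(24), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, adherence, rcond=None)
level_change, slope_change = coef[2], coef[3]
```

Here `pre_post_diff` is large and misleading, while the estimated level and slope changes from the segmented model are near zero, matching the dotted-line scenario of Figure 1.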

A second approach to improving the interpretability of the pretest/posttest study design for evidence of KT is to compare the results of a group that received the intervention with those of a group that did not. This design is referred to as a "nonequivalent pre-post design" and can aid interpretation of the intervention's effect. It has considerable problems of its own, however. Obtaining a control group adequately similar to the intervention group is often problematic; dissimilarities between the groups are often undetectable, and these differences could affect the apparent effectiveness of the intervention. An important component of both time series and untreated control group designs, to increase validity, is to blind the outcome assessment to the period of study.
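One common way to analyze such an untreated-control-group design, not named explicitly in the text, is a difference-in-differences contrast, which assumes the two groups would have followed parallel trends absent the intervention. The adherence figures below are purely hypothetical:

```python
# Hypothetical pre/post guideline-adherence rates (%) at an intervention ED
# and a nonequivalent control ED that shares the background secular trend.
intervention_pre, intervention_post = 55.0, 72.0
control_pre, control_post = 50.0, 58.0

# The naive pre/post change at the intervention site mixes the KT effect
# with the secular trend that the control site also experienced.
naive_change = intervention_post - intervention_pre   # 17.0
secular_change = control_post - control_pre           # 8.0

# Difference-in-differences: subtract the control group's change.
did_estimate = naive_change - secular_change          # 9.0
```

The estimate is only as credible as the parallel-trends assumption, which is exactly the group-similarity concern raised above.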

Figure 1. Illustration of interpretation hazards for single pretest/posttest study designs. The dotted line represents an unmeasured secular trend that could be detected in a time series design.

Recommendation

Single pretest/posttest study designs have many limitations and should be avoided. To improve the interpretation of results obtained from single-group studies of KT interventions, it is preferable to include several pre and post measures of the outcome in a time series design fashion. Studies with nonrandomized comparison groups are inherently limited by the lack of confidence that one can place in baseline comparisons of the groups; however, they are an improvement over single pretest/posttest study designs. Efforts should be made to determine if competing sources of knowledge were similar in all groups.

Examples and Opportunities in EM

Due to the increasing quantity of guideline adherence initiatives and other KT issues, there will undoubtedly be many opportunities to evaluate the effectiveness of KT interventions in a pre/post fashion. For example, a pre/post study was conducted at two hospitals to determine whether implementing the Agency for Health Care Policy and Research's unstable angina practice guideline improves emergency physicians' decision-making in patients with symptoms of possible acute coronary syndrome, including those for whom the diagnosis of unstable angina is uncertain.8 In this study, the investigators blinded the physician reviewers who assigned the cardiac outcomes and also performed serial measurements for three months before the intervention and six months after the intervention.

RANDOMIZED TRIALS

Patient RCTs are appropriately considered the ideal experimental design for evaluating the effect of medical interventions. They are able to control for both known and unknown bias inherent in nonrandomized studies.9 However, patient randomized trials present certain problems when applied to KT research. Providers required by a study protocol to use different resources for patient care, based on which group the patient is randomized to, are likely to use some or all of the experimental protocol for care of the control group. This bias, called contamination, may dilute the observed effect of the experimental intervention.2

One solution to the problem of contamination is to use a cluster randomized strategy.10,11 This approach involves randomizing groups or clusters of providers to differing KT interventions. Randomization of the intervention occurs at the group level, while patients within groups receive the same or similar care. If randomization is performed at the hospital level, the concern for contamination can be minimized. All the patients at a given hospital in a study receive the same KT intervention, minimizing the chance for contamination from providers using the experimental intervention on the control group.

There are several areas of concern with cluster randomized trials. First, there is a significant impact on the statistical power of the study. Depending on the variance among clusters, the sample size needed to detect a significant effect can be more than double that of a patient randomized study.12 In cluster randomized trials, sample size needs to be corrected for the design effect relating to the number of clusters, the average number of patients per cluster, and the statistical extent of clustering (intraclass correlation). In general, increasing the number of clusters has greater impact on power than increasing the number of patients per cluster.13,14 In addition, factorial-designed cluster randomized studies allow for assigning different clusters to more than one intervention, so that clusters receive either no intervention, intervention A, intervention B, or interventions A and B. This allows for comparison of more than one intervention at a time without significantly increasing the sample size.9 Analyses of cluster randomized studies also need to account for the fact that patients within clusters can no longer be considered independent of one another, because patient management is similar within clusters and dissimilar from patients in other clusters.15,16
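The sample size penalty described above follows from the standard design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intraclass correlation. A minimal sketch, with hypothetical numbers:

```python
# Sample-size correction for a hypothetical cluster randomized KT trial,
# using the standard design effect DEFF = 1 + (m - 1) * ICC.

def design_effect(m: float, icc: float) -> float:
    """m: average patients per cluster; icc: intraclass correlation."""
    return 1 + (m - 1) * icc

def inflated_n(n_individual: int, m: float, icc: float) -> float:
    """Patients needed under cluster randomization (round up in practice)."""
    return n_individual * design_effect(m, icc)

# 800 patients suffice under individual randomization. With 200 patients
# per hospital and ICC = 0.01, the requirement nearly triples.
n_big = inflated_n(800, 200, 0.01)    # ~2392 patients (DEFF = 2.99)

# More, smaller clusters (50 per hospital) cut the penalty sharply,
# illustrating why adding clusters beats adding patients per cluster.
n_small = inflated_n(800, 50, 0.01)   # ~1192 patients (DEFF = 1.49)
```

Even a seemingly negligible ICC inflates the requirement substantially once clusters are large, which is why the text stresses the number of clusters over the number of patients per cluster.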

Correcting for the Hawthorne effect (the nonspecific beneficial effect of taking part in research) can be difficult in KT research. The balanced incomplete block design can correct for this phenomenon.9 In this design, clusters receive one of two interventions and then act as the control for the other intervention, so that all clusters are taking part in research.17 However, data analysis for this strategy can become more complicated.
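A minimal sketch of such a reciprocal-control allocation, with hypothetical hospital identifiers (a real trial would randomize the assignment rather than alternate as done here):

```python
# Reciprocal-control allocation as described above: every cluster receives
# exactly one of two KT interventions and serves as the concurrent control
# for the other, so all clusters experience taking part in research.

hospitals = ["ED-1", "ED-2", "ED-3", "ED-4", "ED-5", "ED-6"]

allocation = {
    h: ({"receives": "A", "control_for": "B"} if i % 2 == 0
        else {"receives": "B", "control_for": "A"})
    for i, h in enumerate(hospitals)
}

# Each intervention still has a full set of concurrent control clusters,
# while no cluster sits idle as a pure "no research" control.
controls_for_A = [h for h, arm in allocation.items() if arm["control_for"] == "A"]
controls_for_B = [h for h, arm in allocation.items() if arm["control_for"] == "B"]
```

The analytic complication the text mentions arises because each cluster contributes to both an intervention arm and a control arm, so the two comparisons are not independent.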

Cluster randomized trials may also, but not necessarily, require a large capital investment and may not be feasible in some settings. Research networks may be one solution to this problem. With networks, institutions can combine resources, use standardized protocols, and share large amounts of data. Reporting of randomized trials has been variable and flawed in the past; randomization strategies and enrollment criteria are often underreported in the literature. Researchers conducting KT randomized trials should consider using the Consolidated Standards of Reporting Trials (CONSORT) guidelines where appropriate to assist in quality reporting of results.18 Despite their limitations and the challenges of completing them, cluster randomized designs are generally considered the ideal approach to investigating the effect of a KT intervention.

Recommendation

Randomized trials are a robust study design for KT interventional research, with the ability to determine a causal relationship between a KT intervention and a quality-of-care outcome. Cluster randomized trials, in particular, represent an ideal means to evaluate the effect of KT interventions. Methodological issues of ensuring adequate sample size must be addressed, but there remains no more valid way to evaluate interventions on health care behavior.

Examples and Opportunities in EM

There are numerous areas for potential KT research in EM using cluster randomized trials, particularly in the area of guideline implementation research. Yealy et al.19 conducted a cluster randomized trial to evaluate three different guideline implementation strategies to improve compliance with community-acquired pneumonia practice guidelines. Their multicenter study was randomized at the hospital level to minimize the risk of contamination from differing guideline implementation strategies. They randomized hospitals to one of three guideline implementation strategies designed to measure the effects of interventions of varying intensity. They were able to demonstrate a significant increase in appropriate outpatient treatment of low-risk patients with community-acquired pneumonia at hospitals randomized to receive a high-intensity guideline implementation intervention versus a low- or moderate-intensity intervention.


QUALITATIVE METHODS

Qualitative research is one of the two major approaches to research methodology in the social sciences. Qualitative research involves collecting, analyzing, and interpreting data by observing what people do and say. This approach to research focuses on the meanings, concepts, definitions, characteristics, metaphors, symbols, and descriptions of things. In general, qualitative research generates rich, detailed descriptions that contribute to an in-depth understanding of the phenomena of interest. While quantitative research techniques are more suited for testing specific hypotheses and defining causal relationships between specific interventions and outcomes, quantitative and qualitative research methods can complement each other because they generate different kinds of knowledge that are useful to clinical practice. Questions that arise when trying to understand why gaps exist between what we know from best research evidence and what we do in clinical practice, as well as questions that may be relevant when trying to understand the successes and failures of an evidence implementation strategy, may benefit from alternative approaches, such as qualitative methods. A debate of the differences between quantitative and qualitative research methods is beyond the scope of this article; however, the main feature that distinguishes qualitative from quantitative research lies in the nature of the data derived and the analytical process associated with it.

The importance of qualitative research is reflected in the fact that the practice of medicine is far more complex than purely scientific and mechanistic approaches to pathologic disease states might suggest. Thus, the strength of qualitative data is that they are rich and holistic, with strong potential for revealing complexity nested in a real context. There are important questions about the human experience and social interactions that influence much of medical care and that require interpretive or qualitative research techniques to understand well. Qualitative research enables us to make sense of the world around us, to describe and explain the social world, and to develop explanatory models and theories.

Qualitative research can answer questions of what, why, and how around poorly understood or complex behaviors in medicine. Qualitative research can lead to the development of theory, whereas quantitative research tests extant theory. As it pertains specifically to KT in EM, qualitative research can advance our understanding of the barriers and facilitators to KT, as well as help to disentangle confounding effects in intervention studies. Furthermore, qualitative research can help us to understand why specific evidence-based KT interventions were not successful. It is just as important to understand why an intervention did not work as it is to understand why an intervention was successful.

An important distinction is worth making between survey-based research and qualitative research. While both can yield information on perceptions about barriers and facilitators to evidence uptake, for instance, the contribution of qualitative research comes in the rich description of these factors. Approaches for the critical appraisal of qualitative research have started to be published in the literature; consequently, compelling debates are emerging in the literature as to the contribution of qualitative research to systematic reviews, for instance.20

Recommendation

Qualitative research is an important technique that should be used for advancing KT in EM. Participant observation, focus groups, and interview techniques are ideal for data collection, while validated qualitative data analysis methods also need to be utilized. Specifically, qualitative research can enrich our understanding of the KT process and potentially lead to the development of KT interventions for EM contexts.

THEORETICALLY INFORMED KT RESEARCH

High-quality clinical and health services research is frequently informed by and conducted using structural frameworks or theoretical models. A theory is "a coherent and non-contradictory set of statements, concepts or ideas that organizes, predicts and explains phenomena, events, behavior, etc."21 Theoretical models are created and used in exploring and testing hypotheses relating to complex scientific and social problems. KT research can benefit from framing research based on theory derived from a variety of disciplines.21

Closely related to KT research are numerous theoretical models developed in an attempt to explain human behavior at the individual level. For example, the theory of planned behavior22 has been used to explain behavioral change, as has Rogers' diffusion of innovations model20,23 to examine the process of dissemination, implementation, and uptake of innovative models of care. A theoretical model may be implicit or explicit; explicit theories were mentioned previously. Implicit theories are personal constructions about particular phenomena, and while they may be valid, they are more difficult to generalize and replicate. Explicit theories have the advantage of transparency, reproducibility, and generalizability.21

Knowledge translation research evaluates different levels in the process, such as individual-level behavior (patient and provider behavior) and system-level behavior (community-based and health care systems). It also examines KT at the micro level (person/family level or provider practice level) and at the macro level (health and social policy).21 Because KT research encompasses a broad range of observational studies and interventions across these various levels, theoretical models from various disciplines may be used to frame the research. Thus, theoretical models developed and used in other medical and nonmedical disciplines might be used in KT research, depending on the type of study or intervention tested. Use of theoretically informed interventions can allow for a more thorough examination of the multifaceted components of a problem or intervention. When used appropriately, each component of the model can be examined and evaluated, leading to a greater understanding of the individual components of a problem or intervention.

Recommendation

This panel recommends that KT intervention research should, when possible, explore, use, or test theory-driven models of behavior and behavioral change to maximize the internal validity of the study and the reproducibility and generalizability of research findings. We further suggest that KT research include a multidisciplinary approach to explore and test theoretical models from various disciplines, to better understand the complex process of KT at various intervention levels and among different target populations.

CONCLUSIONS

This article provides important recommendations to consider when developing studies to evaluate the effect of KT interventions. Careful consideration of these recommendations may help bridge the research-to-practice gap that currently exists in EM.

References

1. Jepsen P, Johnsen SP, Gillman MW, Sorensen HT. Interpretation of observational studies. Heart. 2004;90:956–60.
2. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(Suppl 1):S11–6.
3. D'Agostino RB Jr, D'Agostino RB Sr. Estimating treatment effects using observational data. JAMA. 2007;297:314–6.
4. Novikov I, Kalter-Leibovici O. Analytic approaches to observational studies with treatment selection bias [letter]. JAMA. 2007;297:2077.
5. Becher H. The concept of residual confounding in regression models and some applications. Stat Med. 1992;11:1747–58.
6. Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin, 1979.
7. Lichtman JH, Roumanis SA, Radford MJ, Riedinger MS, Weingarten S, Krumholz HM. Can practice guidelines be transported effectively to different settings? Results from a multicenter interventional study. Jt Comm J Qual Improv. 2001;27:42–53.
8. Katz DA, Aufderheide TP, Bogner M, et al. The impact of unstable angina guidelines in the triage of emergency department patients with possible acute coronary syndrome. Med Decis Making. 2006;26:606–16.
9. Cochran WG, Cox GM. Experimental Designs. 2nd ed. New York, NY: Wiley, 1979.
10. Donner A, Birkett N, Buck C. Randomization by cluster: sample size requirements and analysis. Am J Epidemiol. 1981;114:906–14.
11. Murray DM. The Design and Analysis of Group Randomized Trials. Oxford, England: Oxford University Press, 1998.
12. Donner A. Some aspects of the design and analysis of cluster randomization trials. Appl Stat. 1998;47:95–113.
13. Campbell MK, Fayers PM, Grimshaw JM. Determinants of the intracluster correlation coefficient in cluster randomized trials: the case of implementation research. Clin Trials. 2005;2:99–107.
14. Campbell MK, Thomson S, Ramsay CR, MacLennan GS, Grimshaw JM. Sample size calculator for cluster randomized trials. Comput Biol Med. 2004;34:113–25.
15. Campbell M, Grimshaw J, Steen N. Sample size calculations for cluster randomised trials. Changing Professional Practice in Europe Group (EU BIOMED II Concerted Action). J Health Serv Res Policy. 2000;5:12–6.
16. Rice N, Leyland A. Multilevel models: applications to health data. J Health Serv Res Policy. 1996;1:154–64.
17. Eccles M, Grimshaw J, Steen N, et al. The design and analysis of a randomized controlled trial to evaluate computerized decision support in primary care: the COGENT study. Fam Pract. 2000;17:180–6.
18. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. Br Med J. 2004;328:702–8.
19. Yealy DM, Auble TE, Stone RA, et al. Effect of increasing the intensity of implementing pneumonia guidelines: a randomized, controlled trial. Ann Intern Med. 2005;143:881–94.
20. Giacomini MK, Cook DJ. Users' guides to the medical literature: XXIII. Qualitative research in health care B. What are the results and how do they help me care for my patients? Evidence-Based Medicine Working Group. JAMA. 2000;284:478–82.
21. Designing theoretically-informed implementation interventions. Implement Sci. 2006;1:4.
22. Ajzen I. The theory of planned behavior. Organ Behav Hum Decision Process. 1991;50:179–211.
23. Rogers EM. Diffusion of Innovations. 3rd ed. New York, NY: Free Press, 1983.