
Page 1

Introduction to Systematic Review Part 2

David Samson, MS

Director, Comparative Effectiveness Research

Technology Evaluation Center

Blue Cross and Blue Shield Association

February 18, 2010

Copyright 2010 Blue Cross Blue Shield Association

Page 2

Resources for Learning Systematic Review

• Cochrane Handbook for Systematic Reviews of Interventions, 2008.

• AHRQ Guide for Conducting Comparative Effectiveness Reviews (CERs), 2007

– Journal of Clinical Epidemiology articles

• Overview: Slutsky et al. 2008, Helfand & Balshem 2009

• Topic development: Whitlock et al. 2009

• Harms: Chou et al. 2009

• Strength of Evidence: Owens et al. 2009

• AHRQ Systematic Review Learning Modules

Page 3

Overview

• Review of training session 1

• Topic refinement

• Search results screening exercise

• Data abstraction exercise

• Engauge Digitizer demonstration

• Questions

Page 4

AHRQ Systematic Review Methods Learning Modules

• SR Module 1: Refining Key Questions

• SR Module 2: Analytic Frameworks

• SR Module 3: Study Eligibility Criteria

• SR Module 4: Searching for Relevant Studies

• SR Module 5: Selecting Evidence – Observational Studies

• SR Module 6: Data Abstraction

• SR Module 7: Rating the Quality of Individual Studies

Page 5

Illustration of AHRQ Processes

Develop topic: Identify and triage topics based on appropriateness, importance, desirability of new research/duplication, feasibility, and potential impact.

Refine topic: Identify patient, intervention, comparator, outcomes, timing, and setting for each topic.

Review topic: Prepare topic, search for and select studies, abstract data, analyze and synthesize data, present findings.

Engage stakeholders in clarifying areas for future research: Identify future research needs to inform real-world healthcare decisions.

Page 6

PICOTS/Analytic Framework/Key Questions

• Population—Who is being evaluated?

• Intervention—What intervention is being evaluated?

• Comparator—What is the intervention being compared with?

• Outcomes—What are the benefits and harms being evaluated?

• Timing—What is the follow-up time?

• Setting—What are the settings of interest?

• Analytic Framework: clinical logic, intermediate→health outcomes

• Key Question: for population of interest, what are the comparative effects of intervention and comparator in terms of specific outcomes (in given setting, over specified period of time)?

Page 7

Project Management/Task Order Requirements

Project phases:

• I: Start/Work Plan

• II: Search/Screen/Select Evidence

• III: Data Abstraction/Quality Assessment/Evidence Tables

• IV: Evidence Synthesis/Drafting Report

• V: Peer Review/Final Report

• VI: Follow-up

Page 8

Phase II: Search/Screen/Select Evidence

• Search databases

• Develop search screening form

• Screen search results

• Retrieve/select articles

Page 9

Phase III: Data Abstraction/Quality Assessment/Evidence Tables

• Develop data abstraction forms

• Abstract data into tables

• Record reasons for excluding studies

• Rater 1 assesses study quality

• Fact-checking of data abstraction

• Rater 2 assesses study quality/reconciles with Rater 1

• Prepare summary evidence tables

• 2nd TEP call (optional)

• Submit TEP summary

Page 10

SR Modules 1, 2: Refining KQs, AFs

• Outputs:

– PICOTS

– Analytic framework

– Key Questions

• Objectives:

– Relevance (importance/potential value) to stakeholders/decision makers

• Do the KQs target stakeholder concerns?

– Feasibility of scope

• Will the KQs fit the time/effort to be devoted to the systematic review (too much, too little)?

Whitlock et al. 2009

Page 11

SR Modules 1, 2: Refining KQs, AFs

• Topic refinement starting point:

– Effective Health Care Topic Triage Cover Sheet

• Topic refinement destination:

– Topic Refinement public posting document

• Example:

– Hematopoietic Stem-cell Transplantation (HSCT) in Pediatric Patients (originally, “Bone Marrow Transplants (BMT) in Children”)

Page 12

SR Modules 3, 5: Selecting Evidence

Prepare topic:
· Refine key questions
· Develop analytic frameworks

Search for and select studies:
· Identify eligibility criteria
· Search for relevant studies
· Select evidence for inclusion

Abstract data:
· Extract evidence from studies
· Construct evidence tables

Analyze and synthesize data:
· Assess quality of studies
· Assess applicability of studies
· Apply qualitative methods
· Apply quantitative methods (meta-analyses)
· Rate the strength of a body of evidence

Present findings

Page 13

Selecting Evidence: Study Eligibility Criteria

• Study inclusion/exclusion criteria determined by:

– PICOTS

– Decisions about study design (RCTs sought routinely, other designs sought based on weighing of pros and cons)

• Study quality filter vs including all quality levels?

• Foreign language articles?

• Published articles vs gray literature? (conference proceedings, registries, manufacturer data, FDA data)

Page 14

Selecting Evidence: Screening Search Results

• Create search review guide for screening titles and abstracts

• Enter codes for: KQ, study design, disease/condition, interventions, sample size, outcomes, retrieval decision, selection decision (see the sketch at the end of this list)

• Groups of database records may be divided among individual screeners

• Uncertain records reconciled with second screener

• Step can consume large amounts of project time

• Retrieval decision should be conservative (err on being inclusive, especially when info in title/abstract insufficient to exclude with confidence)
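As a purely illustrative sketch (not an actual AHRQ or TEC form), the coded fields listed above could be captured in a structure like the following Python record; every field and function name here is hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningRecord:
    # One title/abstract record from the literature search (illustrative only).
    record_id: str                  # database accession number
    key_questions: list[str]        # e.g., ["KQ1", "KQ2"]
    study_design: str               # e.g., "RCT", "nonrandomized comparative", "single-arm"
    condition: str                  # disease/condition addressed
    interventions: list[str]        # interventions mentioned in the title/abstract
    sample_size: Optional[int]      # None when the abstract does not report it
    outcomes: list[str]             # outcomes mentioned in the title/abstract
    retrieve: bool                  # retrieval decision; err toward inclusion
    include: Optional[bool] = None  # selection decision, made after full-text review

def needs_second_screener(record: ScreeningRecord) -> bool:
    # Flag records where the title/abstract gives too little information to
    # exclude with confidence, so a second screener can reconcile the decision.
    return record.sample_size is None or not record.outcomes

A record flagged by needs_second_screener would simply be routed to the reconciliation step described above rather than being excluded outright.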

Page 15

Selecting Evidence: Screening Articles

• Does article meet selection criteria?

• Groups of articles may be divided among individual screeners

• Primary focus on methods section

• Uncertain articles reconciled with second screener

• Reasons for exclusion must be recorded

Page 16

Radiotherapy Treatments for Head and Neck Cancer

• KQ1. What is the comparative effectiveness of IMRT, 3DCRT, 2DRT, and proton beam therapy regarding adverse events and quality of life?

• KQ2. What is the comparative effectiveness of IMRT, 3DCRT, 2DRT, and proton beam therapy regarding tumor control and patient survival?

Page 17

RT HNC Study Selection Criteria

Types of Studies: Studies were included for Key Question 1 and Key Question 2 if they were:

• Randomized trials, nonrandomized comparative studies, or single-arm intervention studies that:

– reported on an outcome of interest specifically among patients with head and neck cancer;

– involved an intervention of interest, excluding noncomparative studies describing use of 2DRT (defined below) only;

– reported results separately in individual patient groups according to radiation therapy modality received, except for proton beam therapy, where the results of photon and proton therapy may be combined;

– reported tumor control data compiled separately according to tumor site, or included a multivariable analysis that controlled for anatomic location and evaluated the impact of type of radiotherapy on tumor control outcomes.

Page 18

RT HNC Study Selection Criteria

Types of Studies: Studies were included for Key Question 1 and Key Question 2 if they were:

• Single-arm studies with 25 or more evaluable patients that adhere to all aforementioned criteria and provide descriptive information on tumor characteristics, particularly location and histology. Single-arm (noncomparative) studies of 2DRT were excluded because this radiotherapy technique is currently little practiced. Studies had to use the same type of radiotherapy for boost as for the planning treatment volume; 2DRT or electrons could be used in the lower neck. (A few of these rules are encoded in the sketch below.)

• Dose planning studies that did not report any outcome of interest were not included.
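A few of these rules are concrete enough to encode as a simple consistency check. The sketch below is illustrative Python only; the function name and its parameters are hypothetical and not part of the review protocol.

def meets_selection_criteria(design: str,
                             modality: str,
                             evaluable_patients: int,
                             reports_outcome_of_interest: bool) -> bool:
    # Dose planning studies (or any study) reporting no outcome of interest are excluded.
    if not reports_outcome_of_interest:
        return False
    # Noncomparative (single-arm) studies of 2DRT are excluded outright.
    if design == "single-arm" and modality == "2DRT":
        return False
    # Single-arm studies must include 25 or more evaluable patients.
    if design == "single-arm" and evaluable_patients < 25:
        return False
    return True

For example, meets_selection_criteria("single-arm", "IMRT", 30, True) returns True, while the same study with only 20 evaluable patients would be excluded.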

Page 19

RT HNC Study Selection Criteria

Types of Participants:

• Head and neck cancers included:

– larynx;

– pharynx (hypopharynx, oropharynx, and nasopharynx);

– lip and oral cavity;

– paranasal sinus and nasal cavity;

– salivary gland; and

– occult primary of the head and neck

Page 20

RT HNC Study Selection Criteria

Types of Participants:

• Other cancers excluded:

– brain tumors;

– skull base tumors;

– uveal/choroidal melanoma, other ocular and eyelid tumors;

– otologic tumors;

– cutaneous tumors of the head and neck (including melanoma);

– thyroid cancer;

– parathyroid cancer;

– esophageal cancer; and

– tracheal tumors.

Page 21

RT HNC Study Selection Criteria

Treatment Setting:

– Primary (definitive): radiotherapy only (no surgery, with or without chemotherapy)

– Preoperative radiotherapy: radiotherapy before surgery (with or without chemotherapy)

– Postoperative (adjuvant): radiotherapy after surgery (with or without chemotherapy)

– Reirradiation: radiotherapy after earlier radiotherapy (other treatments irrelevant)

Page 22

RT HNC Study Selection Criteria

Treatment Setting:

– Primary (definitive): radiotherapy only (no surgery, with or without chemotherapy)

– Preoperative radiotherapy: radiotherapy before surgery (with or without chemotherapy)

– Postoperative (adjuvant): radiotherapy after surgery (with or without chemotherapy)

– Reirradiation: radiotherapy after earlier radiotherapy (other treatments irrelevant)

– Concurrent chemoradiotherapy: radiotherapy and chemotherapy at the same time (with or without surgery)

– Post-radiotherapy (adjuvant) chemoradiotherapy: chemotherapy given after radiotherapy (with or without surgery)

– Pre-radiotherapy (neoadjuvant) chemoradiotherapy: chemotherapy given before radiotherapy (with or without surgery)

– Split chemoradiotherapy: chemotherapy given both before and after radiotherapy (with or without surgery)

Page 23

RT HNC Study Selection Criteria

Types of Interventions:

– intensity-modulated radiotherapy (IMRT), defined as any treatment plan where intensity-modulated radiation beams and computerized inverse treatment planning are used;

– three-dimensional conformal radiotherapy (3DCRT), defined as any treatment plan where CT-based treatment planning is used to delineate radiation beams and target volumes in three dimensions;

– proton beam therapy (PBT), defined as any treatment plan where proton beam radiation is used; and

– conventional two-dimensional radiotherapy (2DRT), defined as treatment planning where only 2D projection radiographs are used to delineate radiation beams and target volumes.

Page 24

RT HNC Study Selection Criteria

Types of Outcomes: Primary (health) outcomes included:

– radiation-induced toxicities;

– adverse events, both acute and chronic normal tissue toxicity, such as xerostomia, dysphagia, mucositis, skin toxicity, and osteoradionecrosis or bone toxicity;

– effect on quality of life;

– clinical effectiveness, including local and locoregional control, time to any recurrence (disease-free survival), and patient (disease-specific and overall) survival.

Secondary (intermediate) outcomes included:

– salivary flow; and

– probability of completing treatment according to protocol.

Page 25

SR Module 6: Data Abstraction

• What kind of data to collect?

– Guided by KQs/PICOTS

• How much data to collect?

– Goldilocks standard

• How to collect data accurately and efficiently?

• What are some challenges in data abstraction?

Page 26

Data Abstraction: Head and Neck Cancer Example

• Critical features of the study identity/design:

– authors/institutions/dates
– patient inclusion/exclusion criteria
– number of participants and flow of participants through steps of study
– treatment allocation methods (including concealment)
– use of blinding

• Patient characteristics, including:

– age
– sex
– race/ethnicity
– disease and stage
– tumor histology
– tumor size
– disease duration
– other prognostic characteristics (history of tobacco use, etc.)

Page 27

Data Abstraction: Head and Neck Cancer

• Treatment characteristics, including:

– localization and staging methods
– computerized treatment planning
– radiation delivery source
– regimen, schedule, dose, duration of treatment, fractionation, boosts
– beam characteristics
– immobilization and repositioning procedures
– co-intervention details

• Outcome assessment details:

– identified primary outcome
– secondary outcomes
– response criteria
– use of independent outcome assessor (important for subjective outcomes)
– follow-up frequency and duration

Page 28

Data Abstraction: Head and Neck Cancer

• Data analysis details (see the pooling sketch at the end of this slide):

– statistical analyses (statistical test/estimation results)

– test used

– summary measures

– sample variability measures

• 6 individual data abstractors:

– Original entry by 1 abstractor, fact-checked by a 2nd individual

• Form: MS Word tables

• Original data abstraction tables provide material for summary evidence tables
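The abstracted summary measures and variability measures are what feed any later quantitative synthesis. As an illustration only (this is not a reproduction of the report's actual analyses), a standard inverse-variance fixed-effect pooling of study-level estimates looks like this:

import math

def fixed_effect_pool(estimates, standard_errors):
    # Inverse-variance fixed-effect pooling of abstracted summary measures
    # (e.g., log hazard ratios) with their matching standard errors.
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

Whether fixed-effect or random-effects pooling is appropriate is a separate methodological decision made during evidence synthesis.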

Page 29

Data Abstraction Challenges

• Non-uniform outcomes (e.g., different pain measurements in different studies)

• Nonnumeric format: graphs requiring judgment to quantify outcomes

– Drawing tools

– Engauge Digitizer

• Incomplete data (frequent problem: no standard error or confidence interval; see the sketch at the end of this list)

• Discrepant data (different parts of the same report gave different numbers)

• Confusing data (cannot figure out what the authors reported)

• Missing data (only the conclusion is reported)

• Multiple (overlapping) publications of the same study with or without discrepant data
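Two of these challenges have standard, scriptable workarounds. The sketch below is illustrative Python: the first helper back-calculates a standard error from a reported confidence interval (assuming a symmetric, approximately normal interval), and the second maps a digitized pixel coordinate to a data value using two axis calibration points, the same linear-axis idea a tool like Engauge Digitizer applies.

from statistics import NormalDist

def se_from_ci(lower: float, upper: float, level: float = 0.95) -> float:
    # Recover a standard error from a reported confidence interval,
    # assuming the interval is symmetric and approximately normal.
    z = NormalDist().inv_cdf(0.5 + level / 2)   # 1.96 for a 95% interval
    return (upper - lower) / (2 * z)

def pixel_to_value(px: float, axis_min_px: float, axis_max_px: float,
                   axis_min_val: float, axis_max_val: float) -> float:
    # Convert a digitized pixel coordinate to a data value on a linear axis,
    # given the pixel positions of two known axis tick values.
    frac = (px - axis_min_px) / (axis_max_px - axis_min_px)
    return axis_min_val + frac * (axis_max_val - axis_min_val)

Digitized values remain approximations and should be flagged as such in the evidence tables.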

Page 30

SR Module 7: Rating Quality of Individual Studies

• Quality can be defined as “the extent to which all aspects of a study’s design and conduct can be shown to protect against systematic bias, nonsystematic bias, and inferential error” (Lohr, 2004)

• Considered synonymous with internal validity

• Relevant for individual studies, distinct from assessment of risk of bias for a body of evidence

• Studies should be rated by predefined criteria

• AHRQ Guide recommends global ratings of good, fair, poor

Page 31

Quality Rating: RCTs and NRCSs

• US Preventive Services Task Force (USPSTF) system (Harris et al. 2001)

– Initial assembly of comparable groups: adequate randomization, including concealment, and whether potential confounders (e.g., other concomitant care) were distributed equally among groups

– Maintenance of comparable groups (includes attrition, crossovers, adherence, contamination)

– Important differential loss to follow-up or overall high loss to follow-up

– Measurements: equal, reliable, and valid (includes masking of outcome assessment)

– Clear definition of interventions

– All important outcomes considered

– Analysis: adjustment for potential confounders, intention-to-treat analysis

Page 32

USPSTF Rating System

• Good: Meets all criteria; comparable groups are assembled initially and maintained throughout the study (follow-up at least 80 percent); reliable and valid measurement instruments are used and applied equally to the groups; interventions are spelled out clearly; all important outcomes are considered; and appropriate attention is given to confounders in analysis. In addition, for randomized, controlled trials, intention to treat analysis is used.

• Fair: Studies are graded “fair” if any or all of the following problems occur, without the fatal flaws noted in the “poor” category below: In general, comparable groups are assembled initially but some question remains whether some (although not major) differences occurred with follow-up; measurement instruments are acceptable (although not the best) and generally applied equally; some but not all important outcomes are considered; and some but not all potential confounders are accounted for. Intention-to-treat analysis is done for randomized, controlled trials.

• Poor: Studies are graded “poor” if any of the following fatal flaws exists: Groups assembled initially are not close to being comparable or maintained throughout the study; unreliable or invalid measurement instruments are used or not applied at all equally among groups (including not masking outcome assessment); and key confounders are given little or no attention. For randomized, controlled trials, intention-to-treat analysis is lacking. (A rough illustrative encoding of these criteria follows.)
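For illustration only, the criteria above can be caricatured as a decision rule. In practice the USPSTF rating is a global reviewer judgment, so the flags and thresholds in this hypothetical Python sketch are a simplification, not an official algorithm.

def uspstf_rating(comparable_groups: bool,
                  groups_maintained: bool,
                  followup_at_least_80_pct: bool,
                  valid_equal_measurement: bool,
                  confounders_addressed: bool,
                  itt_analysis: bool,
                  is_rct: bool = True) -> str:
    # Any "fatal flaw" drops the study to poor.
    fatal_flaw = (not comparable_groups
                  or not valid_equal_measurement
                  or not confounders_addressed
                  or (is_rct and not itt_analysis))
    if fatal_flaw:
        return "poor"
    # Good requires meeting all criteria, including maintained groups
    # and follow-up of at least 80 percent.
    if groups_maintained and followup_at_least_80_pct:
        return "good"
    # Otherwise the study has nonfatal problems and is rated fair.
    return "fair"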

Page 33

Questions?
