Critically Appraise Evidence-Based Findings

Upload: barrycrna

Post on 07-May-2015


TRANSCRIPT

Page 1: Critically appraise evidence based findings

Critically Appraise Evidence-Based Findings

Page 2: Critically appraise evidence based findings
Page 3: Critically appraise evidence based findings

Objectives

1) Develop an understanding of the meaning of critical appraisal of evidence

2) Describe how levels of evidence are used in appraisal of evidence

Page 4: Critically appraise evidence based findings

Evidence-Based Practice

•Uses the highest quality of knowledge in providing care to produce the greatest impact on health status and health care

Page 5: Critically appraise evidence based findings

Critical Appraisal of Evidence

•Key characteristic of evidence-based practice

•Core skill needed to use evidence to support nursing practice decisions

Page 6: Critically appraise evidence based findings

Critical Appraisal of Evidence

•Ensures relevance and transferability of evidence from the search to the specific population for whom the care will be provided

Page 7: Critically appraise evidence based findings

Critical Appraisal of Evidence Defined

•1) Assessing the strength of the scientific evidence

•2) Evaluating the research for its quality and applicability to health care decision making

Page 8: Critically appraise evidence based findings

1) Strength of Evidence

•Grading of the strength of evidence should incorporate:

•Quality
▫The extent to which bias was minimized (internal validity)

•Quantity
▫The extent of the magnitude of effect, number of studies, and sample size or power

•Consistency
▫The extent to which similar and different study designs report similar findings

Page 9: Critically appraise evidence based findings

1) Strength of Evidence

•Evidence exists on a continuum of rigor

•The amount of research attention, or maturity of the science, varies; therefore evidence varies

•Type of research design reflects the strength of the evidence – known as levels of evidence

Page 10: Critically appraise evidence based findings

Levels of Evidence

•Ranking as to how well the evidence informs clinical interventions

•The stronger the level of evidence, the greater the confidence that applying the evidence in practice will be effective

•Levels of evidence are based on research design

Page 11: Critically appraise evidence based findings

Levels of Evidence

•Experts have developed a number of taxonomies to rate strength of evidence

•Most are organized around research designs

Page 12: Critically appraise evidence based findings
Page 13: Critically appraise evidence based findings

Levels of Evidence

• National Guidelines Clearinghouse

• Ia: Evidence obtained from meta-analysis or systematic review of randomized controlled trials

• Ib: Evidence obtained from at least one randomized controlled trial

• IIa: Evidence obtained from at least one well-designed controlled study without randomization

• IIb: Evidence obtained from at least one other type of well-designed quasi-experimental study, without randomization

• III: Evidence obtained from well-designed non-experimental descriptive studies, such as comparative studies, correlation studies, and case studies

• IV: Evidence obtained from expert committee reports or opinions and/or clinical experiences of respected authorities

Page 14: Critically appraise evidence based findings

Levels of Evidence

• “Rating System for the Hierarchy of Evidence”

• Level I: Evidence from a systematic review or meta-analysis of all relevant randomized controlled trials (RCTs), or evidence-based clinical practice guidelines based on systematic reviews of RCTs

• Level II: Evidence obtained from at least one well-designed RCT

• Level III: Evidence obtained from well-designed controlled trials without randomization (quasi-experimental)

• Level IV: Evidence from well-designed case-control and cohort studies (studies of prognosis)

• Level V: Evidence from systematic reviews of descriptive and qualitative studies

• Level VI: Evidence from a single descriptive or qualitative study

• Level VII: Evidence from the opinion of authorities and/or reports of expert committees

(Melnyk & Fineout-Overholt, 2005)
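As an illustrative sketch (not part of the source material), the hierarchy above can be encoded as a simple lookup table; the design labels and helper function here are hypothetical simplifications:

```python
# Illustrative sketch: the Melnyk & Fineout-Overholt hierarchy encoded
# as a lookup table. The design labels and the helper function are
# hypothetical, chosen only to mirror the seven levels listed above.

EVIDENCE_LEVELS = {
    "systematic review or meta-analysis of rcts": "I",
    "randomized controlled trial": "II",
    "controlled trial without randomization": "III",
    "case-control or cohort study": "IV",
    "systematic review of descriptive/qualitative studies": "V",
    "single descriptive or qualitative study": "VI",
    "expert opinion or committee report": "VII",
}

def level_of_evidence(design: str) -> str:
    """Return the hierarchy level (I = strongest) for a study design."""
    return EVIDENCE_LEVELS[design.lower()]

print(level_of_evidence("Randomized controlled trial"))  # II
```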

Page 15: Critically appraise evidence based findings

Levels of Evidence

• Rating System for Levels of Evidence

• Type of evidence
• I. Meta-analysis or comprehensive systematic review of multiple experimental research studies (Cochrane, National Guidelines Clearinghouse (AHRQ), The Joanna Briggs Institute, other groups)
• II. Well-designed experimental study
• III. Well-designed quasi-experimental study (non-randomized controlled, single-group pre-post design, cohort, time series (one group of subjects over time), matched case-controlled studies (two or more groups matched on certain variables))
• IV. Well-designed non-experimental study (correlational or comparative descriptive studies, case study design, qualitative studies)
• V. Clinical examples and expert opinion (textbooks, non-research journal articles, verbal report, non-research-based professional standards/guidelines/group articles)

• Strength of evidence
• A. Type I evidence or consistent findings from multiple studies from levels II, III, or IV
• B. Multiple studies with evidence types II, III, or IV that are generally consistent
• C. Multiple studies with evidence types II, III, or IV that are inconsistent
• D. Limited research evidence or one type II study only
• E. Type IV or V evidence only

Adapted from Joanna Briggs Institute and AHCPR; Eilers & Heerman, 2005

Page 16: Critically appraise evidence based findings

Level of Certainty

Description

High: The available evidence usually includes consistent results from well-designed, well-conducted studies in representative primary care populations. These studies assess the effects of the preventive service on health outcomes. This conclusion is therefore unlikely to be strongly affected by the results of future studies.

Moderate: The available evidence is sufficient to determine the effects of the preventive service on health outcomes, but confidence in the estimate is constrained by such factors as:

• The number, size, or quality of individual studies
• Inconsistency of findings across individual studies
• Limited generalizability of findings to routine primary care practice
• Lack of coherence in the chain of evidence

As more information becomes available, the magnitude or direction of the observed effect could change, and this change may be large enough to alter the conclusion.

Low: The available evidence is insufficient to assess effects on health outcomes. Evidence is insufficient because of:

• The limited number or size of studies
• Important flaws in study design or methods
• Inconsistency of findings across individual studies
• Gaps in the chain of evidence
• Findings not generalizable to routine primary care practices
• Lack of information on important health outcomes

More information may allow estimation of effects on health outcomes

Page 17: Critically appraise evidence based findings

Systematic Reviews

▫Provides state-of-the-science conclusions about evidence supporting benefits and risks of a given healthcare practice (Stevens, 2001)

▫Most powerful and useful evidence available

▫Refers to a summary that uses a rigorous scientific approach to combine results from a body of original research studies into a clinically meaningful whole

Systematic Reviews & Meta Analysis

Page 18: Critically appraise evidence based findings

Meta-Analysis

•Statistical approach to synthesizing the results of a number of studies – summarizes results of all studies included in the review

•Produces a larger sample size and thus greater power to determine the true magnitude of an effect; yields a summary statistic
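A minimal sketch of how pooling might produce that summary statistic, assuming a fixed-effect (inverse-variance) model; all study numbers below are invented for illustration:

```python
import math

# Minimal sketch of a fixed-effect (inverse-variance) meta-analysis:
# each study's effect estimate is weighted by the inverse of its
# variance, yielding one pooled summary statistic with a smaller
# standard error than any single study. Numbers are hypothetical.

def pooled_effect(effects, variances):
    """Weight each study's effect by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se

# Three hypothetical studies: standardized mean differences and variances.
effects = [0.42, 0.31, 0.55]
variances = [0.04, 0.09, 0.02]
summary, se = pooled_effect(effects, variances)
print(f"pooled effect = {summary:.3f}, SE = {se:.3f}")
```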

Page 19: Critically appraise evidence based findings

Randomized Controlled Trial

▫Experimental studies are the gold standard of research design (randomization of participants to treatment and control, rigorous methods used to minimize bias)

▫Provides the most valid, dependable research conclusions about the clinical effectiveness of an intervention and establishing cause and effect

▫Allows us to say with a high degree of certainty that the intervention we used was the cause of the outcome


Page 20: Critically appraise evidence based findings

Quasi-Experimental

▫Differs from RCTs only in that participants are NOT randomized to treatment and control groups


Page 21: Critically appraise evidence based findings

Non-Experimental

▫Cohort – participants are studied over time; the study population shares common characteristics

▫Case-Control – studies that address questions about harm or causation; investigates why some people develop a disease or behave the way they do vs. others who do not

▫Descriptive – main objective is to describe some phenomenon

▫Qualitative – "any kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification" (Strauss and Corbin, 1990, p. 17)


Page 22: Critically appraise evidence based findings

Clinical Examples & Expert Opinion

▫Expert Opinion – arriving at a value judgement that incorporates the main information available on the subject as well as previous experiences

▫Clinical examples – the “5 rights”


Page 23: Critically appraise evidence based findings

2) Evaluating Quality & Applicability

•What are the results?

•Are the results valid?

•Can the results be applied to the targeted population and/or public health practice and intervention?

Page 24: Critically appraise evidence based findings

What are the results?

•Were the results similar from study to study (if systematic review or meta-analysis)?

•What are the overall results?

•How precise were the results?

•Can a causal relationship be inferred from the data?

Page 25: Critically appraise evidence based findings

Are the Results Valid?

•Does this article explicitly address our public health question?

•Was the search for our article detailed and exhaustive? Is it likely that important, relevant studies were missed?

•Does the study selected appear to be of high methodological quality?

•Do you feel the study selected is reproducible?

Page 26: Critically appraise evidence based findings

Is the Evidence Applicable?

•How can the results be interpreted and applied to public health practice and intervention?

•Are study subjects similar to clients to whom care is to be delivered?

•Were all important outcomes considered?

•Are the benefits worth the costs and potential risks?

Page 27: Critically appraise evidence based findings

Other Methods Used to Appraise Evidence

• Statistical evaluation, for example calculating effect size

• Effect size measures the magnitude or strength of the treatment or intervention effect (how well the intervention worked in the group that received the intervention vs. the group that did not)

• Small, medium, and large effects are designated as 0.2, 0.5, and 0.8, respectively

• Several formulas are available depending on the statistical analysis used (e.g., t-tests)
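One common effect-size formula is Cohen's d: the difference between group means divided by the pooled standard deviation. This sketch uses hypothetical sample values, not data from the source:

```python
import math

# Hedged sketch of Cohen's d, one common effect-size formula: the
# difference between treatment and control means divided by the
# pooled standard deviation. All sample values are hypothetical.

def cohens_d(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Standardized mean difference between treatment and control groups."""
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
        / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd

d = cohens_d(mean_tx=78.0, mean_ctrl=72.0, sd_tx=10.0, sd_ctrl=12.0,
             n_tx=30, n_ctrl=30)
print(f"d = {d:.2f}")  # roughly 0.5, a medium effect by the benchmarks above
```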

• Thalheimer, W., & Cook, S. (2002, August). How to calculate effect sizes from published research articles: A simplified methodology. Retrieved April 29, 2009 from http://www.work-learning.com/white_papers/effect_sizes/Effect_Sizes_pdf5.pdf