
Meta-Analysis: Concepts and Applications

Michael Borenstein and Hannah Rothstein

Table of Contents

• Introduction
• Goals
• Effect Sizes
• Fixed Effect Computations
• Fixed Effects vs. Random Effects

Acknowledgments

Development funded by NIH

• National Institute of Mental Health
• National Institute on Aging
• National Institute on Drug Abuse

What is the goal of a meta-analysis?

When the effect is consistent

• Provide a precise estimate of the effect
• Report whether it is robust across a range of populations

When the effect varies

• May be used to qualify the mean effect
• May make the mean effect irrelevant
• May be of more interest than the combined effect
• What factors may explain the variation

Why perform a meta-analysis?

Streptokinase

• A meta-analysis in 1977 could have been definitive
• An additional 40,000 patients were randomized
• An additional ???? patients were not treated
• Even in 1992, the narrative review was not definitive
• Without meta-analysis, studies could have continued indefinitely

Forest plot

• Transparent
• A mechanism for understanding the statistics
• A mechanism for communicating the statistics

Goals of meta-analysis

Assigning weights

• Compute combined effect
• Assess heterogeneity
• Use heterogeneity to qualify the combined effect
• Focus on heterogeneity

Steps

• Show the sample
• Show the effects
• Show the weights
• Show the combined effect
• How to compute the effects
• How to compute the weights
• How to compute the combined effect

Steps

• Show heterogeneity
• How to compute heterogeneity
• Statistical implications
• Practical implications


Computing effect size and variance

Effect size

Effect size AND precision

Reporting precision

• Standard error
• Confidence interval
• Variance

Precision

• In a primary study, precision qualifies the effect size
• In a meta-analysis, precision is used to assign weight to the effect size

Continuous data

• Start with means and SDs
• Raw mean difference
• Standardized mean difference (d)
• Bias-corrected standardized difference (G)

Means and SDs

Group     Mean   SD   N
Treated   110    20   50
Control   100    20   50

Raw mean difference

• Natural scale
• Well-known scale
• All studies on same scale

Raw mean difference

$$\text{MeanDifference} = \text{Mean}_1 - \text{Mean}_2$$

$$SD_{Pooled} = \sqrt{\frac{(N_1 - 1)\,SD_1^2 + (N_2 - 1)\,SD_2^2}{N_1 + N_2 - 2}}$$

$$SE_{MeanDifference} = SD_{Pooled}\,\sqrt{\frac{1}{N_1} + \frac{1}{N_2}}$$

Raw mean difference

$$\text{MeanDifference} = 110 - 100 = 10$$

$$SD_{Pooled} = \sqrt{\frac{(50 - 1)\,20^2 + (50 - 1)\,20^2}{50 + 50 - 2}} = 20$$

$$SE_{MeanDifference} = 20\,\sqrt{\frac{1}{50} + \frac{1}{50}} = 4.0$$
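As a quick check of the formulas above, here is a minimal Python sketch (the function and variable names are ours, not from the slides):

```python
import math

def raw_mean_difference(mean1, sd1, n1, mean2, sd2, n2):
    """Raw mean difference and its standard error for two independent groups."""
    diff = mean1 - mean2
    # Pooled within-group standard deviation
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    # Standard error of the raw mean difference
    se = sd_pooled * math.sqrt(1 / n1 + 1 / n2)
    return diff, se

# Worked example from the slides: Treated 110 (SD 20, N 50) vs. Control 100 (SD 20, N 50)
diff, se = raw_mean_difference(110, 20, 50, 100, 20, 50)
print(diff, round(se, 1))  # 10 4.0
```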

Raw mean difference

• Effect size – difference in means
• Precision – SD within groups, N


Standardized mean difference

• Proprietary scales
• Multiple scales

Standardized mean difference (d)

$$d = \frac{\text{MeanDifference}}{SD_{Within}}$$

$$SE_d = \sqrt{\frac{1}{N_1} + \frac{1}{N_2} + \frac{d^2}{2\,(N_1 + N_2)}}$$

Standardized mean difference (d)

$$d = \frac{10}{20} = 0.50$$

$$SE_d = \sqrt{\frac{1}{50} + \frac{1}{50} + \frac{0.5^2}{2\,(50 + 50)}} = 0.203$$
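The same computation for d, sketched in Python (names are ours):

```python
import math

def standardized_mean_difference(mean_diff, sd_within, n1, n2):
    """Standardized mean difference d and its standard error."""
    d = mean_diff / sd_within
    se_d = math.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))
    return d, se_d

d, se_d = standardized_mean_difference(10, 20, 50, 50)
print(round(d, 2), round(se_d, 3))  # 0.5 0.203
```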

Standardized mean difference

• Effect size – Difference in means relative to SD within groups

• Precision – Sample size


Bias-corrected d (Hedges g)

$$J = 1 - \frac{3}{4\,df - 1}$$

$$G = J \times d$$

$$SE_G = J \times SE_d$$

Bias-corrected d (Hedges g)

$$J = 1 - \frac{3}{4 \times 98 - 1} = 0.992$$

$$G = 0.992 \times 0.500 = 0.496$$

$$SE_G = 0.992 \times 0.203 = 0.202$$
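A minimal sketch of the correction in Python (names are ours); it reproduces 0.202 because it carries the unrounded SE of d through:

```python
import math

def hedges_g(d, se_d, df):
    """Apply the small-sample correction factor J to d and its standard error."""
    j = 1 - 3 / (4 * df - 1)
    return j * d, j * se_d

# Continuing the worked example: d = 0.50 with N1 = N2 = 50, so df = 98
se_d = math.sqrt(1 / 50 + 1 / 50 + 0.5**2 / (2 * 100))  # unrounded SE of d
g, se_g = hedges_g(0.500, se_d, df=98)
print(round(g, 3), round(se_g, 3))  # 0.496 0.202
```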


Multiple indices

Other data types

• Correlation
• Survival
• Events by person-years
• One-armed studies
• Generic indices

Study design and precision

• Independent groups vs. matched designs
• Effect size is the same
• Precision is different
• Can combine in analysis

Data format

• Back-compute the effect size and variance from:
  – Test statistics or p-values
  – Confidence limits

Compute d from p-value


Compute SE of odds ratio from CI

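The two back-computations above appear on the slides only as screenshots. Here is a sketch of one standard approach, assuming a two-sided p-value from an independent-groups t test and a 95% confidence interval reported on the odds-ratio scale (function names and the example inputs are ours):

```python
import math
from scipy import stats  # inverse t distribution

def d_from_p(p_two_sided, n1, n2):
    """Recover |d| from the two-sided p-value of an independent-groups t test."""
    df = n1 + n2 - 2
    t = stats.t.isf(p_two_sided / 2, df)   # |t| corresponding to the p-value
    return t * math.sqrt(1 / n1 + 1 / n2)

def se_log_odds_ratio_from_ci(lower, upper, z=1.96):
    """Recover the SE of the log odds ratio from a reported 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

print(round(d_from_p(0.014, 50, 50), 2))                  # about 0.50
print(round(se_log_odds_ratio_from_ci(0.154, 0.783), 2))  # about 0.41
```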

Multiple data formats

Caveat

• These slides are meant as a general introduction.

• They do not deal with special cases such as empty cells.

• They do not address variations in computational formulas.

Binary data

• Start with a 2×2 table
• Odds ratio
• Risk ratio
• Risk difference

2 x 2 Table

Group     Events   Non-Events   N
Treated   A        B            N1
Control   C        D            N2

2 x 2 Table

Group     Events   Non-Events   N
Treated   8        92           100
Control   12       88           100

Log odds ratio

$$\log(\text{OddsRatio}) = \log\!\left(\frac{A\,D}{B\,C}\right)$$

$$SE_{\log(\text{OddsRatio})} = \sqrt{\frac{1}{A} + \frac{1}{B} + \frac{1}{C} + \frac{1}{D}}$$

Log odds ratio

$$\log(\text{OddsRatio}) = \log\!\left(\frac{8 \times 88}{92 \times 12}\right) = -0.450$$

$$SE_{\log(\text{OddsRatio})} = \sqrt{\frac{1}{8} + \frac{1}{92} + \frac{1}{12} + \frac{1}{88}} = 0.480$$
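A matching Python sketch (names are ours), using the 2×2 table above:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its SE from a 2x2 table (events / non-events per group)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

log_or, se = log_odds_ratio(8, 92, 12, 88)
print(round(log_or, 3), round(se, 3))  # -0.45 0.48
```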


Log risk ratio

$$\log(\text{RiskRatio}) = \log\!\left(\frac{A / N_1}{C / N_2}\right)$$

$$SE_{\log(\text{RiskRatio})} = \sqrt{\frac{B/A}{A + B} + \frac{D/C}{C + D}}$$

Log risk ratio

$$\log(\text{RiskRatio}) = \log\!\left(\frac{8/100}{12/100}\right) = -0.405$$

$$SE_{\log(\text{RiskRatio})} = \sqrt{\frac{92/8}{92 + 8} + \frac{88/12}{88 + 12}} = 0.434$$
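And a matching sketch for the risk ratio (names are ours):

```python
import math

def log_risk_ratio(a, b, c, d):
    """Log risk ratio and its SE from a 2x2 table."""
    n1, n2 = a + b, c + d
    log_rr = math.log((a / n1) / (c / n2))
    se = math.sqrt((b / a) / n1 + (d / c) / n2)
    return log_rr, se

log_rr, se = log_risk_ratio(8, 92, 12, 88)
print(round(log_rr, 3), round(se, 3))  # -0.405 0.434
```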


Risk Difference

$$RD = \frac{A}{A + B} - \frac{C}{C + D}$$

$$SE_{RD} = \sqrt{\frac{P_1\,Q_1}{A + B} + \frac{P_2\,Q_2}{C + D}}$$

where $P_i = \text{Events}_i / N_i$ and $Q_i = \text{NonEvents}_i / N_i$.

Risk Difference

$$RD = \frac{8}{8 + 92} - \frac{12}{12 + 88} = -0.040$$

$$SE_{RD} = \sqrt{\frac{0.08 \times 0.92}{8 + 92} + \frac{0.12 \times 0.88}{12 + 88}} = 0.042$$
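A matching sketch for the risk difference (names are ours):

```python
import math

def risk_difference(a, b, c, d):
    """Risk difference and its SE from a 2x2 table."""
    n1, n2 = a + b, c + d
    p1, p2 = a / n1, c / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, se

rd, se = risk_difference(8, 92, 12, 88)
print(round(rd, 3), round(se, 3))  # -0.04 0.042
```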


Multiple indices

Fixed effect computations

Assigning weights

• To get the most precise estimate of the combined effect
• To give more weight to the more precise studies

Assign weight to each study

• Weight by 1/variance, or the “Inverse variance”

$$w_i = \frac{1}{v_i}$$

where $w_i$ = study weight and $v_i$ = study variance.

Combined mean

$$\hat{\theta} = \frac{\sum_{i=1}^{k} w_i\,y_i}{\sum_{i=1}^{k} w_i}$$

where $w_i$ = study weight and $y_i$ = study mean.

Variance of combined mean

$$\text{Var}(\hat{\theta}) = \frac{1}{\sum_{i=1}^{k} w_i}$$

where $w_i$ = study weight and $\text{Var}(\hat{\theta})$ = variance of the combined mean.

Test of the null

$$Z = \frac{\hat{\theta} - \theta_0}{\sqrt{\text{Var}(\hat{\theta})}}$$
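Putting the last three formulas together, here is a minimal Python sketch of the inverse-variance fixed-effect combination (names are ours; the data are studies A, B and C from Example 01 later in the deck):

```python
import math

def fixed_effect(effects, standard_errors):
    """Inverse-variance fixed-effect combined estimate, its variance, and the Z test."""
    weights = [1 / se**2 for se in standard_errors]        # w_i = 1 / v_i
    combined = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    variance = 1 / sum(weights)                            # Var(theta_hat)
    z = combined / math.sqrt(variance)                     # test of theta_0 = 0
    return combined, variance, z

combined, variance, z = fixed_effect([0.400, 0.250, 0.800], [0.202, 0.201, 0.208])
print(round(combined, 3), round(math.sqrt(variance), 3), round(z, 2))  # 0.476 0.118 4.05
```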

Example using Excel

Enter the summary data

Compute effect size and variance for each study

Assign weight to each study

Compute combined effect

Variance of combined effect

Same example in CMA

Enter summary data

Compute effect size

Display formula

Combined effect and variance

Weights


More information leads to greater precision

Increase the N within studies

N=50 per group

N=100 per group

Increase the number of studies

Number of studies = 3

Number of studies = 6

More precise studies are given more weight

Same N for each study


N varies by study


Effect size pulled by larger study

d moved from 0.48 to 0.66

Relative weights in forest plot

Example 01 – Std diff in means and 95% CI (forest plot)

Study      Std diff in means   Standard error
A          0.400               0.202
B          0.250               0.201
C          0.800               0.208
Combined   0.476               0.117

Example 02 – Std diff in means and 95% CI (forest plot)

Study      Std diff in means   Standard error
A          0.400               0.202
B          0.250               0.201
C          0.800               0.093
Combined   0.658               0.078

References

• Hedges and Olkin
• Lipsey and Wilson

Files available by e-mail

• Standardized difference.xls
• Standardized difference.cma

Fixed effect vs. Random effects

Fixed vs. Random

• Concept
• Definition
• How weights affect
  – Combined value
  – Confidence interval width
• Which should we use?

Concept

• Fixed effect model
  – Common population
  – Effect size varies only because of random error
• Random effects model
  – Multiple populations
  – Effect size will vary because of random error
  – Effect size will vary because of true variation

Definition of combined effect

• Fixed effect model
  – There is one true effect.
  – The combined effect is an estimate of this value.
• Random effects model
  – There is a series of true effects.
  – The combined effect is the average of this series of values.

Fixed vs. Random

Fixed effect model:

$$T_i = \mu + \varepsilon_i$$

Random effects model:

$$T_i = \mu + \xi_i + \varepsilon_i$$

Factors affecting Tau-squared

When Tau² is zero

• Random effects model reduces to the fixed effect model.
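The slides do not spell out how Tau-squared is estimated; a common choice is the DerSimonian-Laird (method-of-moments) estimator. Here is a minimal sketch under that assumption (names are ours; the data are again the three studies from Example 01), showing that the random-effects weights 1/(v_i + tau²) reduce to the fixed-effect weights 1/v_i when Tau-squared is zero:

```python
def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments (DerSimonian-Laird) estimate of between-study variance."""
    w = [1 / v for v in variances]                         # fixed-effect weights
    mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

def random_effects_weights(variances, tau2):
    """Random-effects weight: 1 / (within-study variance + tau^2)."""
    return [1 / (v + tau2) for v in variances]

effects = [0.400, 0.250, 0.800]
variances = [0.202**2, 0.201**2, 0.208**2]
tau2 = dersimonian_laird_tau2(effects, variances)
print(round(tau2, 3))                                                 # about 0.04
print([round(w, 1) for w in random_effects_weights(variances, tau2)])
print([round(w, 1) for w in random_effects_weights(variances, 0.0)])  # equals 1/v_i
```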

Weights

• Fixed effect
  – One true effect
  – All variation is random error
  – Largely ignores the smaller studies
• Random effects
  – A range of true effects
  – Each study provides information about a different population
  – Cannot ignore small studies, nor give too much weight to large studies

Fixed effect model

Within-study error = Total variance

Random effects model

Within-study error + Between-study variance = Total variance

Extreme effect in large study

Extreme effect in small study

Magnesium – Random effects (odds ratio and 95% CI)

Study          Odds ratio   Lower limit   Upper limit
Morton         0.436        0.038         5.022
Rasmussen      0.348        0.154         0.783
Smith          0.278        0.057         1.357
Abraham        0.957        0.058         15.773
Feldstedt      1.250        0.479         3.261
Shechter-89    0.090        0.011         0.736
Ceremuzynski   0.278        0.027         2.883
Berschat       0.304        0.012         7.880
Singh          0.499        0.174         1.426
Pereira        0.110        0.012         0.967
Golf           0.427        0.127         1.436
Thogersen      0.452        0.133         1.543
LIMIT-2        0.741        0.556         0.988
Shechter-95    0.208        0.067         0.640
ISIS-4         1.059        0.996         1.127
MAGIC          1.003        0.873         1.152
Combined       0.712        0.564         0.900

Magnesium – Fixed effect (odds ratio and 95% CI)

Study          Odds ratio   Lower limit   Upper limit
(same 16 studies as above)
Combined       1.016        0.961         1.073

Key idea

• Relative weights assigned under random effects will be more balanced than those assigned under fixed effects.

• As we move from fixed effect to random effects, extreme studies will lose influence if they are large, and will gain influence if they are small.

Confidence interval width

• Both models include within-study variance.
• The random effects model also includes between-study variance.
• Therefore, the confidence interval for the random effects model will always be as wide as, or wider than, that for the fixed effect model.

Fixed effect model with huge N – Std diff in means and 95% CI

Study      Std diff in means   Standard error
A          0.400               0.001
B          0.400               0.001
C          0.400               0.001
D          0.400               0.001
E          0.400               0.001
Combined   0.400               0.000

Random effects model with huge N – Std diff in means and 95% CI

Study      Std diff in means   Standard error
A          0.400               0.001
B          0.450               0.001
C          0.350               0.001
D          0.450               0.001
E          0.350               0.001
Combined   0.400               0.022

Which model should we use?

Fixed effect

• If there is reason to believe that all the studies are functionally identical

• Our goal is to compute the common effect size, which would then be generalized to other examples of this same population.

• Example: a drug company has run five studies to assess the effect of a drug.

Random effects

• When it is not likely that all the studies are functionally equivalent.

• When the goal of the analysis is to generalize to a range of populations.

Choice of model should not be based on significance test

• Practical issue
  – Type II error
• Fundamental issue
  – The difference between fixed and random effects is really conceptual

Common (incorrect) wisdom about significance tests

• “The significance test for the effect size will always be more significant under the fixed effect model than under the random effects model.”
• This is not true.
• In any event, it should never be a factor in selecting a computational model.

Criticisms of meta-analysis

Meta-Analysis – Concepts and Applications

SCT Orlando, May 21, 2006
Michael Borenstein and Julian Higgins

Meta-Analysis: Concepts and Applications

Michael Borenstein and Julian Higgins
Workshops Chairman: Domenic Reda
SCT Orlando, May 21, 2006
Additional materials available at www.Meta-Analysis.com
Questions to MichaelB@Meta-Analysis.com
