
Advanced Quantitative Research Methodology, Lecture Notes: Matching Methods for Causal Inference

Gary King

GaryKing.org

March 31, 2013

© Copyright 2013 Gary King, All Rights Reserved.

Gary King (Harvard, IQSS) 1 / 66

Overview

Problem: Model dependence (review)

Solution: Matching to preprocess data (review)

Problem: Many matching methods & specifications

Solution: The Space Graph helps us choose

Problem: The most commonly used method can increase imbalance!

Solution: Other methods do not share this problem

(Coarsened Exact Matching is simple, easy, and powerful)

Lots of insights revealed in the process

Gary King (Harvard, IQSS) 2 / 66

Model Dependence Example: Replication of Doyle and Sambanis, APSR 2000

Data: 124 Post-World War II civil wars

Dependent variable: peacebuilding success

Treatment variable: multilateral UN peacekeeping intervention (0/1)

Control vars: war type, severity, duration; development status; etc.

Counterfactual question: UN intervention switched for each war

Data analysis: Logit model

The question: How model dependent are the results?

Gary King (Harvard, IQSS) 3 / 66

Two Logit Models, Apparently Similar Results

Variables        Original “Interactive” Model     Modified Model
                 Coeff      SE      P-val         Coeff      SE      P-val
Wartype          −1.742    .609     .004          −1.666    .606     .006
Logdead           −.445    .126     .000           −.437    .125     .000
Wardur             .006    .006     .258            .006    .006     .342
Factnum          −1.259    .703     .073          −1.045    .899     .245
Factnum2           .062    .065     .346            .032    .104     .756
Trnsfcap           .004    .002     .010            .004    .002     .017
Develop            .001    .000     .065            .001    .000     .068
Exp              −6.016   3.071     .050          −6.215   3.065     .043
Decade            −.299    .169     .077          −0.284    .169     .093
Treaty            2.124    .821     .010           2.126    .802     .008
UNOP4             3.135   1.091     .004            .262   1.392     .851
Wardur*UNOP4        —        —        —             .037    .011     .001
Constant          8.609   2.157    0.000           7.978   2.350     .000
N                   122                              122
Log-likelihood  -45.649                          -44.902
Pseudo R2          .423                             .433

Gary King (Harvard, IQSS) 4 / 66

Doyle and Sambanis: Model Dependence

Gary King (Harvard, IQSS) 5 / 66

Overview of Matching for Causal Inference

Goal: reduce model dependence

A nonparametric, non-model-based approach

Makes parametric models work better rather than substituting for them (i.e., matching is not an estimator; it's a preprocessing method)

Should have been called pruning (no bias is introduced if pruning is a function of T and X, but not Y)

Apply model to preprocessed (pruned) rather than raw data

Violates the “more data is better” principle, but that only applies when you know the DGP

Overall idea:

If each treated unit exactly matches a control unit w.r.t. X, then: (1) the treated and control groups are identical, (2) X is no longer a confounder, and (3) there is no need to worry about the functional form (the difference in means, X̄_T − X̄_C, is good enough).

If the treated and control groups are better balanced than when you started, due to pruning, model dependence is reduced.

Gary King (Harvard, IQSS) 6 / 66
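Because matching is preprocessing rather than estimation, the workflow can be sketched in two stages: prune using only T and X (never Y), then run the same parametric model you would have run anyway on the pruned data. Below is a minimal Python sketch of that idea using an exact-match pruning rule on toy data; the data, the coarse covariates, and the OLS model are illustrative assumptions, not the lecture's example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data: outcome y, binary treatment t, two discrete pre-treatment covariates.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "x1": rng.integers(0, 4, n),      # e.g., an education category
    "x2": rng.integers(0, 3, n),      # e.g., an age group
    "t":  rng.integers(0, 2, n),
})
df["y"] = 1.5 * df["t"] + df["x1"] - 0.5 * df["x2"] + rng.normal(size=n)

# Preprocess: prune as a function of T and X only (Y is never touched).
# Keep a stratum of X only if it contains both treated and control units.
pruned = df.groupby(["x1", "x2"]).filter(lambda g: g["t"].nunique() == 2)

# Estimation: apply the parametric model to the preprocessed (pruned) data.
X = sm.add_constant(pruned[["t", "x1", "x2"]])
print(sm.OLS(pruned["y"], X).fit().params)
```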

Model Dependence: A Simpler Example (King and Zeng, 2006, fig. 4, Political Analysis)

What to do?

Preprocess I: Eliminate extrapolation region

Preprocess II: Match (prune bad matches) within interpolation region

Model remaining imbalance

Gary King (Harvard, IQSS) 7 / 66

Remove Extrapolation Region, then Match

Must remove data (selecting on X) to avoid extrapolation.

Options to find the “common support” of p(X | T = 1) and p(X | T = 0):

1. Exact match, so support is defined only at data points
2. Less, but still conservative: convex hull approach
   - let T* and X* denote subsets of T and X s.t. {1 − T*, X*} falls within the convex hull of {T, X}
   - use X* as the estimate of common support (deleting the remaining observations)
3. Other approaches, based on distance metrics, propensity scores, etc.
4. Easiest: Coarsened Exact Matching, no separate step needed

Gary King (Harvard, IQSS) 8 / 66
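A concrete way to approximate the convex-hull idea in option 2 is to test whether each unit's covariate vector lies inside the convex hull of the other group's covariates and drop units that do not, so that the remaining comparisons require only interpolation. The sketch below uses scipy's Delaunay triangulation for the hull-membership test; it is a simplification of the T*, X* construction on the slide, and the data are made up.

```python
import numpy as np
from scipy.spatial import Delaunay

def in_hull(points, hull_points):
    """True for each row of `points` that lies inside the convex hull of `hull_points`."""
    return Delaunay(hull_points).find_simplex(points) >= 0   # -1 means outside

rng = np.random.default_rng(1)
X_treated = rng.normal(0.5, 1.0, size=(40, 2))    # covariates of treated units
X_control = rng.normal(0.0, 1.0, size=(200, 2))   # covariates of control units

# Keep treated units whose counterfactuals can be interpolated from controls,
# then keep controls inside the hull of the retained treated units.
keep_t = in_hull(X_treated, X_control)
keep_c = in_hull(X_control, X_treated[keep_t])
print(f"treated kept: {keep_t.sum()}/{len(X_treated)}, "
      f"controls kept: {keep_c.sum()}/{len(X_control)}")
```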

Matching within the Interpolation Region (Ho, Imai, King, Stuart, 2007, fig. 1, Political Analysis)

[Figure, built up over slides 9–16: scatter plot of Outcome (y-axis, 0–12) against Education in years (x-axis, 12–28), with treated units plotted as T and control units as C. Successive slides add the treated units, add the control units, and then prune the controls that are outside the interpolation region or are poor matches, leaving only well-matched units.]

Gary King (Harvard, IQSS) 9–16 / 66

Matching within the Interpolation Region (Ho, Imai, King, Stuart, 2007, fig. 1, Political Analysis)

Matching reduces model dependence, bias, and variance

Gary King (Harvard, IQSS) 17 / 66

Empirical Illustration: Carpenter, AJPS, 2002

Hypothesis: Democratic Senate majorities slow FDA drug approval time

n = 408 new drugs (262 approved, 146 pending)

Lognormal survival model

Seven oversight variables (median adjusted ADA scores for House and Senate committees as well as for House and Senate floors, Democratic majority in House and Senate, and Democratic presidency)

18 control variables (clinical factors, firm characteristics, media variables, etc.)

Gary King (Harvard, IQSS) 18 / 66

Evaluating Reduction in Model Dependence

Focus on the causal effect of a Democratic majority in the Senate (identified by Carpenter as not robust).

Omit post-treatment variables.

Use one-to-one nearest-neighbor propensity score matching.

Discard 49 units (2 treated and 17 control units).

Run 262,143 possible specifications and calculate the ATE for each.

Look at the variability of the ATE estimate across specifications.

(Normal applications would do only one or a small number of specifications.)

Gary King (Harvard, IQSS) 19 / 66

Reducing Model Dependence

[Figure: density histograms of the estimated ATT under the raw data and under the matched data, with the point estimate from Carpenter's specification using the raw data marked.]

Figure: Histogram of the estimated in-sample average treatment effect on the treated (ATT) of a Democratic Senate majority on FDA drug approval time across 262,143 specifications.

Gary King (Harvard, IQSS) 20 / 66

Another Example: Jeffrey Koch, AJPS, 2002

[Figure: density histograms of the estimated average treatment effect under the raw data and under the matched data, with the point estimate from the raw data marked.]

Figure: Estimated effects of being a highly visible female Republican candidate across 63 possible specifications with the Koch data.

Gary King (Harvard, IQSS) 21 / 66

How Matching Works

Notation:

  Y_i: dependent variable
  T_i: treatment variable (0/1)
  X_i: pre-treatment covariates

Treatment effect for treated (T_i = 1) observation i:

  TE_i = Y_i(T_i = 1) − Y_i(T_i = 0) = observed − unobserved

Estimate Y_i(0) with Y_j from matched (X_i ≈ X_j) controls:

  Ŷ_i(0) = Y_j(0), or a model Ŷ_i(0) = ĝ_0(X_j)

Prune unmatched units to improve balance (so X is unimportant)

QoI: Sample Average Treatment effect on the Treated,

$$\mathrm{SATT} = \frac{1}{n_T} \sum_{i \in \{T_i = 1\}} \mathrm{TE}_i,$$

or the Feasible Sample Average Treatment effect on the Treated (FSATT)

Gary King (Harvard, IQSS) 22 / 66
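Once each treated unit has a matched control, TE_i is estimated as the treated unit's outcome minus its matched control's outcome, and SATT is the average over treated units. Here is a minimal numpy sketch under those definitions; the outcome values and the one-to-one match mapping are hypothetical inputs from whatever matching step was run.

```python
import numpy as np

# Hypothetical matched data: outcomes y, treatment t, and for each treated unit
# the row index of its matched control (produced by some matching method).
y = np.array([9.0, 7.5, 8.2, 6.1, 5.9, 6.4])
t = np.array([1,   1,   1,   0,   0,   0])
match = {0: 3, 1: 4, 2: 5}                 # treated index -> matched control index

treated = np.flatnonzero(t == 1)
te = np.array([y[i] - y[match[i]] for i in treated])   # TE_i = observed - estimated Y_i(0)
satt = te.mean()                                       # (1 / n_T) * sum of TE_i over treated
print(f"SATT estimate: {satt:.3f}")
```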

Method 1: Mahalanobis Distance Matching

1. Preprocess (matching)

   $$\mathrm{Distance}(X_i, X_j) = \sqrt{(X_i - X_j)' \, S^{-1} (X_i - X_j)}$$

   Match each treated unit to the nearest control unit
   Control units: not reused; pruned if unused
   Prune matches if Distance > caliper

2. Estimation: difference in means or a model

Gary King (Harvard, IQSS) 23 / 66
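A compact sketch of step 1 of Mahalanobis distance matching: compute the distance using the sample covariance matrix S, greedily give each treated unit its nearest not-yet-used control, and prune pairs whose distance exceeds a caliper. This is an illustrative implementation on made-up education and age data, not the lecture's software, and the caliper value is arbitrary.

```python
import numpy as np

def mahalanobis_match(X, t, caliper=np.inf):
    """Greedy one-to-one Mahalanobis matching without replacement.

    Returns (treated_index, control_index) pairs."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))       # inverse sample covariance
    controls = list(np.flatnonzero(t == 0))
    pairs = []
    for i in np.flatnonzero(t == 1):
        if not controls:
            break
        d = X[i] - X[controls]
        dist = np.sqrt(np.einsum("ij,jk,ik->i", d, S_inv, d))
        j = int(np.argmin(dist))
        if dist[j] <= caliper:                   # prune matches beyond the caliper
            pairs.append((i, controls.pop(j)))   # controls are not reused
    return pairs

rng = np.random.default_rng(2)
X = np.column_stack([rng.integers(12, 29, 100),                    # education (years)
                     rng.integers(20, 81, 100)]).astype(float)     # age
t = rng.integers(0, 2, 100)
print(mahalanobis_match(X, t, caliper=1.5)[:5])
```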

Mahalanobis Distance Matching

[Figure, built up over slides 24–30: scatter plot of Age (20–80) against Education in years (12–28), with treated units plotted as T and control units as C. Successive slides add the treated and control units, pair each treated unit with its nearest control by Mahalanobis distance, and prune the unused and poorly matched controls.]

Gary King (Harvard, IQSS) 24–30 / 66

Method 2: Propensity Score Matching

1. Preprocess (matching)

   Reduce the k elements of X to a scalar,
   $$\pi_i \equiv \Pr(T_i = 1 \mid X) = \frac{1}{1 + e^{-X_i \beta}}$$
   $$\mathrm{Distance}(X_i, X_j) = |\pi_i - \pi_j|$$

   Match each treated unit to the nearest control unit
   Control units: not reused; pruned if unused
   Prune matches if Distance > caliper

2. Estimation: difference in means or a model

Gary King (Harvard, IQSS) 31 / 66
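The analogous sketch for step 1 of propensity score matching: fit a logistic regression of T on X to get each unit's π_i, then greedily match each treated unit to the control with the nearest propensity score, without replacement and subject to a caliper. The logit fit here uses scikit-learn purely for convenience (any logit routine would do); the data and caliper are again made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pscore_match(X, t, caliper=np.inf):
    """Greedy one-to-one propensity score matching without replacement."""
    pi = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]   # pi_i = Pr(T_i = 1 | X)
    controls = list(np.flatnonzero(t == 0))
    pairs = []
    for i in np.flatnonzero(t == 1):
        if not controls:
            break
        dist = np.abs(pi[i] - pi[controls])      # |pi_i - pi_j|
        j = int(np.argmin(dist))
        if dist[j] <= caliper:                   # prune matches beyond the caliper
            pairs.append((i, controls.pop(j)))   # controls are not reused
    return pairs

rng = np.random.default_rng(3)
X = np.column_stack([rng.integers(12, 29, 100),                    # education (years)
                     rng.integers(20, 81, 100)]).astype(float)     # age
t = rng.integers(0, 2, 100)
print(len(pscore_match(X, t, caliper=0.05)), "matched pairs")
```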

Propensity Score Matching

[Figure, built up over slides 32–39: scatter plot of Age (20–80) against Education in years (12–28) with treated units plotted as T and control units as C, alongside a propensity-score scale from 0 to 1. Successive slides project the units onto the propensity score, match each treated unit to the nearest control on that scalar, and prune the unused controls.]

Gary King (Harvard, IQSS) 32–39 / 66

Method 3: Coarsened Exact Matching

1. Preprocess (matching)

   Temporarily coarsen X as much as you're willing
     e.g., Education (grade school, high school, college, graduate)
     Easy to understand, or can be automated as for a histogram
   Apply exact matching to the coarsened X, C(X)
     Sort observations into strata, each with unique values of C(X)
     Prune any stratum with 0 treated or 0 control units
   Pass on the original (uncoarsened) units, except those pruned

2. Estimation: difference in means or a model

   Need to weight controls in each stratum to equal the number of treateds
   Can apply other matching methods within CEM strata (they inherit CEM's properties)

Gary King (Harvard, IQSS) 40 / 66
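A minimal pandas sketch of CEM: coarsen each covariate into bins (only to define strata), keep the strata that contain both treated and control units, weight each stratum's controls to count as much as its treateds, and then estimate with a weighted difference in means. The coarsening cutpoints, column names, and data are illustrative; the cem software for R and Stata linked at the end of the lecture is the real implementation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "educ": rng.integers(12, 29, 300),   # years of education
    "age":  rng.integers(20, 81, 300),
    "t":    rng.integers(0, 2, 300),
})
df["y"] = 2.0 * df["t"] + 0.3 * df["educ"] - 0.05 * df["age"] + rng.normal(size=300)

# 1) Temporarily coarsen X; the original (uncoarsened) values are kept for analysis.
df["educ_c"] = pd.cut(df["educ"], bins=[11, 12, 16, 18, 22, 28])
df["age_c"] = pd.cut(df["age"], bins=[19, 30, 40, 55, 65, 80])

# 2) Exact match on the coarsened values: prune strata with 0 treated or 0 controls.
matched = df.groupby(["educ_c", "age_c"], observed=True) \
            .filter(lambda g: g["t"].nunique() == 2).copy()

# 3) Weight controls so each stratum's controls count as much as its treateds.
grp = matched.groupby(["educ_c", "age_c"], observed=True)["t"]
n_treated = grp.transform("sum")
n_control = grp.transform("count") - n_treated
matched["w"] = np.where(matched["t"] == 1, 1.0, n_treated / n_control)

# 4) Estimation: weighted difference in means (or pass the weights to a model).
tr, co = matched[matched["t"] == 1], matched[matched["t"] == 0]
att = np.average(tr["y"], weights=tr["w"]) - np.average(co["y"], weights=co["w"])
print(f"matched n = {len(matched)}, weighted difference in means = {att:.2f}")
```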

Coarsened Exact Matching

[Figure, built up over slides 41–47: the same Age-by-Education scatter plot of T and C units. Education is coarsened into bins (HS, BA, MA, PhD, 2nd PhD) and Age into bins labeled "Drinking age", "Don't trust anyone over 30", "The Big 40", "Senior Discounts", "Retirement", and "Old"; exact matching on the coarsened grid prunes every stratum lacking both a treated and a control unit, and the remaining uncoarsened units are passed on.]

Gary King (Harvard, IQSS) 41–47 / 66

The Bias-Variance Trade Off in Matching

Bias (and model dependence) = f(imbalance, importance, estimator); we measure imbalance instead

Variance = f(matched sample size, estimator); we measure matched sample size instead

Bias-variance trade off ⇒ imbalance-n trade off

Measuring Imbalance

Classic measure: difference of means (for each variable)

Better measure: difference of multivariate histograms,

$$L_1(f, g; H) = \frac{1}{2} \sum_{\ell_1 \cdots \ell_k \in H(X)} \left| f_{\ell_1 \cdots \ell_k} - g_{\ell_1 \cdots \ell_k} \right|$$

Gary King (Harvard, IQSS) 48 / 66
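The L1 statistic can be computed directly: bin the treated and control units on the same multivariate histogram bins fixed ex ante, normalize each cross-tabulation to relative frequencies f and g, and take half the sum of the absolute cell-by-cell differences. A small numpy sketch of that calculation, with made-up data and bin edges:

```python
import numpy as np

def l1_imbalance(X_treated, X_control, bin_edges):
    """L1(f, g; H) = (1/2) * sum over multivariate bins of |f - g|."""
    f, _ = np.histogramdd(X_treated, bins=bin_edges)   # same fixed bins for both groups
    g, _ = np.histogramdd(X_control, bins=bin_edges)
    return 0.5 * np.abs(f / f.sum() - g / g.sum()).sum()

rng = np.random.default_rng(5)
X_t = rng.normal(0.3, 1.0, size=(150, 2))
X_c = rng.normal(0.0, 1.0, size=(400, 2))
edges = [np.linspace(-4, 4, 9), np.linspace(-4, 4, 9)]   # bins fixed ex ante
print(f"L1 = {l1_imbalance(X_t, X_c, edges):.3f}")
```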

Comparing Matching Methods

Standard approach

MDM & PSM: choose the matched n, match, check imbalance
CEM: choose the imbalance, match, check the matched n
Best practice: iterate (ugh!)
Choose the matched solution, and the matching method becomes irrelevant

An alternative approach

Compute lots of matching solutions,
Identify the frontier of lowest imbalance for each given n, and
Choose a matching solution from among those on the frontier

Gary King (Harvard, IQSS) 49 / 66
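The alternative approach amounts to generating many candidate matching solutions, recording each one's matched n and imbalance, and keeping only the frontier: solutions that no other solution beats on both dimensions. A small sketch of that frontier computation over hypothetical (matched n, L1) pairs; the numbers are invented for illustration.

```python
# Hypothetical matching solutions (matched n, L1 imbalance), e.g. from many
# CEM coarsenings and many PSM / MDM calipers applied to the same data.
solutions = [(2000, 0.81), (1500, 0.62), (1400, 0.70), (1000, 0.45),
             (900, 0.52), (600, 0.31), (300, 0.33), (250, 0.18)]

def frontier(solutions):
    """Keep solutions not dominated by one with both larger n and lower imbalance."""
    kept, best_l1 = [], float("inf")
    for n, l1 in sorted(solutions, key=lambda s: -s[0]):   # scan from largest n down
        if l1 < best_l1:          # lowest imbalance seen so far at this n or larger
            kept.append((n, l1))
            best_l1 = l1
    return kept

print(frontier(solutions))   # the lower-left envelope of the space graph
```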

A Space Graph: Real Data (King, Nielsen, Coberley, Pope, and Wells, 2011)

[Figure: space graph for the Healthways data, plotting L1 imbalance ("bias") against the N of the matched sample ("variance") for the raw data and for random pruning, PSM, MDM, and CEM solutions.]

Gary King (Harvard, IQSS) 50 / 66

A Space Graph: Real Data

[Figure: the same space graph for the Called/Not Called data (L1 imbalance against N of the matched sample).]

Gary King (Harvard, IQSS) 51 / 66

A Space Graph: Real Data

[Figure: the same space graph for the FDA data (L1 imbalance against N of the matched sample).]

Gary King (Harvard, IQSS) 52 / 66

A Space Graph: Real Data

[Figure: the same space graph for a subset of the Lalonde data (L1 imbalance against N of the matched sample).]

Gary King (Harvard, IQSS) 53 / 66

Space Graphs: Different Imbalance Metrics

[Figure: three space graphs for the Aid Shocks data, plotting imbalance against the N of the matched sample under three different metrics (L1, average difference in means, and average Mahalanobis discrepancy) for the raw data and for random pruning, CEM, MDM, and PSM solutions; the published PSM solution and the published PSM with a 1/4 standard deviation caliper are marked in each panel.]

Gary King (Harvard, IQSS) 54 / 66

PSM Approximates Random Matching in Balanced Data

[Figure: units plotted on Covariate 1 against Covariate 2 in well-balanced data; one panel shows PSM matches and the other shows CEM and MDM matches, illustrating that PSM's pairings approximate random matching while CEM and MDM pair nearby units.]

Gary King (Harvard, IQSS) 55 / 66

Destroying CEM with PSM’s Two Step Approach

[Figure: units plotted on Covariate 1 against Covariate 2; one panel shows CEM matches and the other shows CEM-generated PSM matches (the pairings produced by applying PSM's two-step approach to the CEM solution), illustrating how the propensity-score first stage degrades the matches.]

Gary King (Harvard, IQSS) 56 / 66

Pause for Conclusions

Propensity score matching:

The problem:
  Imbalance can be worse than in the original data
  Can increase imbalance when removing the worst matches
  Approximates random matching in well-balanced data (and random matching increases imbalance)

The cause: unnecessary first-stage dimension reduction

Implications:
  Balance checking is required
  Adjusting for potentially irrelevant covariates with PSM is a mistake
  Adjusting experimental data with PSM is a mistake
  Reestimating the propensity score after eliminating non-common support may be a mistake

CEM and Mahalanobis do not have PSM's problems
CEM > Mahalanobis > propensity score (in many data sets and simulations; your performance may vary)
You can easily check with the Space Graph
CEM is the easiest and most powerful; let's look more deeply...

Gary King (Harvard, IQSS) 57 / 66

Problems With Matching Methods (other than CEM)

Don’t eliminate extrapolation region

Don’t work with multiply imputed data

Most violate the congruence principle

Largest class of matching methods (EPBR, e.g., propensity scores, Mahalanobis distance): requires normal data (or DMPES); all X's must have the same effect on Y; Y must be a linear function of X; aims only for expected (not in-sample) imbalance; in practice, we're lucky if mean imbalance is reduced

Not well designed for observational data:
  Least important (variance): matched n chosen ex ante
  Most important (bias): imbalance reduction checked ex post

Hard to use: improving balance on one variable can reduce it on others
  Best practice: choose n, match, check; tweak, match, check; tweak, match, check; ...
  Actual practice: choose n, match, publish, STOP. (Is balance even improved?)

Gary King (Harvard, IQSS) 58 / 66

CEM as an MIB Method

Coarsening determines the level of imbalance

Convenient monotonicity property: reducing the maximum imbalance on one X has no effect on the others

We prove: setting ε bounds the treated-control group difference, within strata and globally, for means, variances, skewness, covariances, comoments, coskewness, co-kurtosis, quantiles, and the full multivariate histogram. ⇒ Setting ε controls all multivariate treatment-control differences, interactions, and nonlinearities, up to the chosen level (the matched n is determined ex post)

What if coarsening is set...
  too coarse? You're left modeling the remaining imbalances
  not coarse enough? n may be too small
  as large as you're comfortable with, but n is still too small? No magic method of matching can save you; you're stuck modeling or collecting better data

Gary King (Harvard, IQSS) 59 / 66

End of planned slides for today; others follow

Gary King (Harvard, IQSS) 60 / 66

Other CEM properties we prove

Automatically eliminates extrapolation region (no separate step)

Bounds model dependence

Bounds causal effect estimation error

Meets the congruence principle

The principle: data space = analysis space
Estimators that violate it are nonrobust and counterintuitive
CEM: ε_j is set using each variable's units
E.g., calipers (strata centered on each unit) would bin a college dropout with a 1st-year grad student, and would not bin Bill Gates & Warren Buffett

Approximate invariance to measurement error:

                   CEM    pscore   Mahalanobis   Genetic
  % Common Units   96.5   70.2     80.9          80.0

Fast and memory-efficient even for large n; can be fully automated

Simple to teach: coarsen, then exact match

Gary King (Harvard, IQSS) 61 / 66

Imbalance Measures

Variable-by-Variable Difference in Global Means

$$I_1^{(j)} = \left| \bar{X}^{(j)}_{m_T} - \bar{X}^{(j)}_{m_C} \right|, \qquad j = 1, \dots, k$$

Multivariate Imbalance: difference in histograms (bins fixed ex ante)

$$L_1(f, g) = \frac{1}{2} \sum_{\ell_1 \cdots \ell_k} \left| f_{\ell_1 \cdots \ell_k} - g_{\ell_1 \cdots \ell_k} \right|$$

Local Imbalance by Variable (given strata fixed by matching method)

$$I_2^{(j)} = \frac{1}{S} \sum_{s=1}^{S} \left| \bar{X}^{(j)}_{m_s^T} - \bar{X}^{(j)}_{m_s^C} \right|, \qquad j = 1, \dots, k$$

Gary King (Harvard, IQSS) 62 / 66
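A short sketch of the two variable-by-variable measures: I1 compares the global means of the matched treated and matched control groups for each covariate, and I2 averages the same comparison across the strata fixed by the matching method. The column names, stratum labels, and data below are illustrative.

```python
import numpy as np
import pandas as pd

def global_imbalance(df, covariates):
    """I1: |matched-treated mean - matched-control mean|, one value per covariate."""
    means = df.groupby("t")[covariates].mean()
    return (means.loc[1] - means.loc[0]).abs()

def local_imbalance(df, covariates):
    """I2: average over strata of the within-stratum |treated mean - control mean|."""
    diffs = df.groupby("stratum").apply(
        lambda g: (g.loc[g["t"] == 1, covariates].mean()
                   - g.loc[g["t"] == 0, covariates].mean()).abs())
    return diffs.mean()   # strata with only one group contribute NaN and are skipped

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "x1": rng.normal(size=120), "x2": rng.normal(size=120),
    "t": rng.integers(0, 2, 120),
    "stratum": rng.integers(0, 5, 120),   # strata fixed by the matching method
})
print(global_imbalance(df, ["x1", "x2"]))
print(local_imbalance(df, ["x1", "x2"]))
```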

CEM in Practice: EPBR-Compliant Data

Monte Carlo: X_T ~ N_5(0, Σ) and X_C ~ N_5(1, Σ); n = 2,000; 5,000 replications. Allow MAH & PSC to match with replacement; use automated CEM.

Difference in means (I1):

           X1     X2     X3     X4     X5     Seconds
initial   1.00   1.00   1.00   1.00   1.00
MAH        .20    .20    .20    .20    .20     .28
PSC        .11    .06    .03    .06    .03     .16
CEM        .04    .02    .06    .06    .04     .08

Local (I2) and multivariate L1 imbalance:

           X1     X2     X3     X4     X5     L1
initial                                       1.24
PSC       2.38   1.25    .74   1.25    .74    1.18
MAH        .56    .36    .29    .36    .29    1.13
CEM        .42    .26    .17    .22    .19     .78

CEM dominates EPBR methods in EPBR data

Gary King (Harvard, IQSS) 63 / 66

CEM in Practice: Non-EPBR Data

Monte Carlo: exact replication of Diamond and Sekhon (2005), using data from Dehejia and Wahba (1999). CEM coarsening automated.

            BIAS       SD      RMSE    Seconds     L1
initial   −423.7   1566.5    1622.6      .00     1.28
MAH        784.8    737.9    1077.2      .03     1.08
PSC        260.5   1025.8    1058.4      .02     1.23
GEN         78.3    499.5     505.6    27.38     1.12
CEM           .8    111.4     111.4      .03      .76

CEM works well in non-EPBR data too

Gary King (Harvard, IQSS) 64 / 66

CEM Extensions I

CEM and Multiple Imputation for Missing Data:
  1. Put each missing observation in the stratum where the plurality of its imputations fall (a small sketch of this rule follows below)
  2. Pass on the uncoarsened imputations to the analysis stage
  3. Use the usual MI combining rules to analyze

Multicategory Treatments: no modification necessary; keep all strata with ≥ 1 unit having each value of T (L1 is the maximum difference across treatment groups)

Continuous Treatments: coarsen the treatment and apply CEM as usual

Blocking in Randomized Experiments: no modification needed; randomly assign T within CEM strata

Automating user choices: histogram bin-size calculations, estimated SATT error bound, progressive coarsening

Detecting Extreme Counterfactuals

Gary King (Harvard, IQSS) 65 / 66
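The plurality rule for combining CEM with multiple imputation can be illustrated in a few lines: coarsen every imputed value of a missing covariate with the same cutpoints, see which stratum each imputation falls in, and place the observation in the stratum chosen by the plurality of its imputations; the uncoarsened imputations then go to the analysis stage and are combined with the usual MI rules. The cutpoints and imputed values below are made up.

```python
from collections import Counter
import numpy as np

educ_cutpoints = np.array([12, 16, 18, 22, 28])   # common coarsening for education

# Five imputations of one observation's missing education value.
imputations = [15.2, 16.4, 15.8, 17.1, 15.5]

# Coarsened stratum of each imputation.
strata = [int(np.digitize(v, educ_cutpoints)) for v in imputations]

# Assign the observation to the stratum where the plurality of imputations fall.
assigned = Counter(strata).most_common(1)[0][0]
print(strata, "-> stratum", assigned)
```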

For papers, software (for R and Stata), tutorials, etc.

http://GKing.Harvard.edu/cem

Gary King (Harvard, IQSS) 66 / 66
