
Program Evaluation

Regional Workshop on the Monitoring and Evaluation of HIV/AIDS

Programs

February 14 – 24, 2011

New Delhi, India

Objectives of the Session

By the end of this session, participants will be able to:

Understand the purpose and role of program evaluation

Distinguish between different evaluation types and approaches

Link evaluation designs to the types of decisions that need to be made

Why Evaluate HIV/AIDS Programs?

To improve the design and implementation of a program

To reach informed decisions on the allocation of existing limited resources, thereby increasing program performance and effectiveness

To identify factors that influence health and social outcomes

To generate knowledge, to know what works and what does not.

Good evaluations are public goods

Current Challenges in Evaluating HIV Prevention Programmes

HIV prevention programmes are increasingly complex, multi-component and context-specific

The underlying behavioural theories leading to multiple behaviour changes and ultimately impact are difficult to assess;

Many projects/interventions/services aim to affect HIV risk factors and/or vulnerabilities rather than avert HIV infections directly.

Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

All Programs/Projects have (implicitly or explicitly):

Objectives

Expected outcomes

Target population

Mechanism(s) to deliver services (the intervention)

Criteria for participating in the program

A conceptual framework that provides rationale for program existence (sometimes called the Development Hypothesis)

Monitoring vs. Evaluation

Objectives of Monitoring:

To provide information on the functioning of the program:

a) Is it progressing according to plan?

b) Identify problems for correction

To track key program elements over time (to assess changes)

Characteristics of Monitoring:

• Mostly tracks quantifiable indicators of key program elements: inputs, processes, outputs, and outcomes

• Often done on a routine basis

• Key issue: good measurement using relevant indicators

• No assessment of what caused the change in the indicators

Objectives of Evaluation:

- To determine whether a program achieved its objectives

- To determine the impact of the program on the outcome intended by the program

- How much of the observed change in the outcome can be attributed to the program and not to other factors?

Characteristics of Evaluation:

- Key issues: causality, quantification of program effect

- Use of evaluation designs to examine whether an observed change in outcome can be attributed to the program

Note: Monitoring tells you that a change occurred; Impact Evaluation will tell you whether it was due to the program

Monitoring vs. Evaluation

Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

Deciding Upon An Appropriate Evaluation Design

Indicators: What do you want to measure?

Provision

Utilization

Coverage

Impact

Type of inference: How sure do you want to be?

Adequacy

Plausibility

Probability

Other factors

Source: Habicht, Victora and Vaughan (1999)

Clarification of Terms

Provision: Are the services available? Are they accessible? Is their quality adequate?

Utilization: Are the services being used?

Coverage: Is the target population being reached?

Impact: Were there improvements in disease patterns or health-related behaviors?
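As a small worked illustration of the provision and coverage questions above, the Python sketch below computes the two rates from hypothetical service-statistics counts; all numbers and variable names are invented for illustration only.

```python
# Hypothetical service-statistics counts (all figures illustrative only)
sites_planned = 40           # service delivery points planned
sites_operational = 34       # sites actually offering the service (provision)
clients_served = 5100        # unique clients served in the period (utilization)
target_population = 12000    # estimated size of the target population

provision_rate = sites_operational / sites_planned
coverage_rate = clients_served / target_population   # assumes each client is counted once

print(f"Provision: {provision_rate:.0%} of planned sites are operational")
print(f"Coverage:  {coverage_rate:.0%} of the target population reached")
```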

Clarification of Terms

Adequacy assessment

Did the expected changes occur?

Are objectives being met?

Plausibility assessment

Did the program seem to have an effect above and beyond other external influences?

Probability assessment

Did the program have an effect (P < x%)?

Source: Habicht, Victora and Vaughan (1999)

Adequacy Assessment Inferences

Are objectives being met?

Compares program performance with previously established adequacy criteria, e.g. 80% ORT use rate

No control group

2+ measurements to assess adequacy of change over time

Provision, utilization, coverage: Are activities being performed as planned?

Impact: Are observed changes in health or behavior of the expected direction and magnitude?

Cross-sectional or longitudinal

Source: Habicht, Victora and Vaughan (1999)
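A minimal sketch of this kind of adequacy check, comparing repeated measurements of an indicator against a previously established criterion such as the 80% ORT use rate; the yearly values are hypothetical.

```python
# Adequacy assessment sketch: compare observed indicator values against a
# previously established adequacy criterion (no control group involved).
adequacy_criterion = 0.80                          # e.g., 80% ORT use rate
observed = {2009: 0.62, 2010: 0.71, 2011: 0.78}    # hypothetical survey rounds

for year, rate in sorted(observed.items()):
    status = "meets" if rate >= adequacy_criterion else "below"
    print(f"{year}: {rate:.0%} ({status} the {adequacy_criterion:.0%} criterion)")

# With 2+ measurements, the adequacy of change over time can also be judged.
years = sorted(observed)
change = observed[years[-1]] - observed[years[0]]
print(f"Change {years[0]}-{years[-1]}: {change:+.0%}")
```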

Plausibility Assessment Inferences (I)

Program appears to have an effect above and beyond the impact of non-program influences

Includes a control group:

Historical control group: Compares changes in the community before and after the program and attempts to rule out external factors

Internal control group: Compares 3+ groups/individuals with different intensities of exposure to the program (dose-response), or compares previous exposure to the program between individuals with and without the disease (case-control)

External control group: Compares communities/geographic areas with and without the program

Source: Habicht, Victora and Vaughan (1999)
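The internal-control (dose-response) comparison above can be illustrated with a short sketch that tabulates an outcome indicator across groups with different intensities of exposure to the program; the group sizes and counts are hypothetical.

```python
# Plausibility sketch (internal control, dose-response): outcome by exposure level.
# A consistent improvement with higher exposure makes a program effect more
# plausible, but does not prove causality.
groups = {
    "no exposure":   {"n": 400, "condom_use": 120},
    "some exposure": {"n": 380, "condom_use": 160},
    "high exposure": {"n": 350, "condom_use": 190},
}

rates = []
for name, g in groups.items():
    rate = g["condom_use"] / g["n"]
    rates.append(rate)
    print(f"{name:>13}: {rate:.0%} reporting condom use at last sex")

monotonic = all(a < b for a, b in zip(rates, rates[1:]))
print("Dose-response pattern present" if monotonic else "No clear dose-response pattern")
```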

Plausibility Assessment Inferences (II)

Provision, utilization, coverage: Intervention group appears to have better performance than the control group

Cross-sectional, longitudinal, longitudinal-control

Impact: Changes in health/behavior appear to be more beneficial in the intervention group than in the control group

Cross-sectional, longitudinal, longitudinal-control, case-control

Source: Habicht, Victora and Vaughan (1999)

Probability Assessment Inferences

There is only a small probability that the differences between program and control areas were due to chance (P < .05)

Requires control group

Requires randomization

Often not feasible for assessing program effectiveness:

Randomization needed before program starts

Political factors

Scale-up

Inability to generalize results

Known efficacy of intervention

Source: Habicht, Victora and Vaughan (1999)
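A hedged sketch of the probability-type inference: with randomized program and control areas, a simple two-sample test asks whether the observed difference could plausibly be due to chance. The area-level rates are invented, and a real analysis would have to account for clustering and the sampling design.

```python
# Probability assessment sketch: randomized program vs. control areas.
# Hypothetical area-level outcome rates; a real analysis would adjust for
# clustering, design effects, and baseline differences.
from scipy import stats

program_areas = [0.41, 0.38, 0.45, 0.43, 0.40, 0.47]   # outcome rate in each program area
control_areas = [0.33, 0.36, 0.31, 0.35, 0.30, 0.34]   # outcome rate in each control area

diff = sum(program_areas) / len(program_areas) - sum(control_areas) / len(control_areas)
t_stat, p_value = stats.ttest_ind(program_areas, control_areas)

print(f"Difference in means: {diff:.3f}")
print(f"p-value: {p_value:.3f} -> " +
      ("unlikely to be due to chance (P < .05)" if p_value < 0.05 else "consistent with chance"))
```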

Evaluation Flow from Simpler to More Complex Designs

Adequacy: Provision (1st), Utilization (2nd), Coverage (3rd), Impact (4th b)

Plausibility: Coverage (4th a), Impact (5th)

Probability: —

Source: Habicht, Victora and Vaughan (1999)

Possible Areas of Concern to Different Decision Makers

Adequacy (provision, utilization, coverage, impact): health center managers, district health managers, international agencies

Plausibility: international agencies, donor agencies, scientists

Probability: donor agencies and scientists

Source: Habicht, Victora and Vaughan (1999)

Process Evaluations

Assess whether the program was implemented as intended

May look at:

Access to services

Reach and coverage of services

Quality of services

Client satisfaction

May also provide an understanding of cultural, socio-political, legal and economic contexts that affect implementation of the programme.

Outcome/Impact Evaluations

Assess whether changes in outcome/impacts are due to the program.

May look at:

Outcomes such as HIV-related behaviors

Health impacts such as HIV status and life expectancy

Evaluating Program Impact

[Figure: an outcome plotted over time, from program start to program midpoint or end. The evaluation question: how much of this change is due to the program?]

Evaluating Program Impact

[Figure: outcome trajectories over time with the program and without the program; the gap between the two curves at program midpoint or end is the net program impact.]
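One common way to approximate the "without program" counterfactual in the figures above is a difference-in-differences comparison with areas not (yet) covered by the program; the sketch below uses invented before/after values purely to show the arithmetic of the net impact.

```python
# Difference-in-differences sketch of net program impact (hypothetical values).
# The "without program" trend is approximated by the change in a comparison group.
program_before, program_after = 0.30, 0.52        # outcome rate in program areas
comparison_before, comparison_after = 0.31, 0.40  # outcome rate in comparison areas

gross_change = program_after - program_before           # what monitoring alone shows
secular_change = comparison_after - comparison_before   # change expected without the program
net_impact = gross_change - secular_change              # change attributable to the program

print(f"Gross change in program areas: {gross_change:+.0%}")
print(f"Change expected without the program: {secular_change:+.0%}")
print(f"Estimated net program impact: {net_impact:+.0%}")
```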

Features of Different Study Designs

True experiment: partial coverage/new programs; control group; strongest design; most expensive

Quasi-experiment: partial coverage/new programs; comparison group; weaker than experimental design; less expensive

Non-experimental: full coverage programs; no control or comparison group; weakest design; least expensive

Readiness Criteria for Outcome & Impact Evaluation

The program:

is implemented with sufficient quality

has achieved adequate coverage

is of long enough duration that the expected change in the specified outcomes has had time to occur

When to use an experimental or quasi-experimental design

The program has unknown effectiveness

There is the potential for negative effects

The program is politically or otherwise risky

Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

Who should plan for Evaluation?

All programs should conduct basic monitoring

Most programs should conduct process evaluations:

Implementation assessments

Quality assessments

Coverage assessments

Some programs should conduct outcome evaluation when evidence is needed as to whether the program is effective

References

Adamchak S et al. (2000). A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs. Focus on Young Adults, Tool Series 5. Washington, D.C.: Focus on Young Adults.

Fisher A et al. (2002). Designing HIV/AIDS Intervention Studies. An Operations Research Handbook. New York: The Population Council.

Habicht JP et al. (1999) Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. International Journal of Epidemiology, 28: 10-18.

Rossi P et al. (1999). Evaluation: A Systematic Approach. Thousand Oaks: Sage Publications.

UNAIDS (2010). Strategic Guidance for Evaluating HIV Prevention Programmes.

MEASURE Evaluation is a MEASURE project funded by the U.S. Agency for International Development and implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill in partnership with Futures Group International, ICF Macro, John Snow, Inc., Management Sciences for Health, and Tulane University. Views expressed in this presentation do not necessarily reflect the views of USAID or the U.S. Government.

MEASURE Evaluation is the USAID Global Health Bureau's primary vehicle for supporting improvements in monitoring and evaluation in population, health and nutrition worldwide.