
EVAL 6000: Foundations of Evaluation

Final lecture!

(Semi) In-Depth Examination of Five Evaluation Approaches

• Utilization-focused evaluation
• Participatory evaluation
• Theory-driven/theory-based evaluation
• CIPP model for evaluation
• Consumer-oriented evaluation (Scriven’s Key Evaluation Checklist approach)

To facilitate a clearer understanding of these evaluation approaches, we will use Heifer Project International (HPI) as a case example to provide context and to discuss how these approaches might be applied in practice

Heifer Project International (HPI)

• Aims to reduce poverty, hunger, and social inequities through strategies that create self-reliance rather than provide short-term relief

• “Passing on the gift” is one of the unique attributes that set Heifer apart from other international development initiatives

[Diagram: HPI program model, shown here as lists]

Goals
• Food & Income Security
• Resource Sharing (POG)
• Environmental Protection
• Education & Empowerment
• Policy, Practice, & System Change
• Fostering Relationships

Values
• Basic Needs
• Livestock Care & Management
• Environment Care & Management
• Education
• Empowerment
• System & Policy Improvement

Cornerstones
• Passing on the Gift
• Accountability
• Sharing & Caring
• Sustainability & Self-Reliance
• Improved Animal Management
• Nutrition & Income
• Gender & Family Focus
• Genuine Need & Justice
• Improved Environment
• Full Participation
• Training & Education
• Spirituality

Indicators
• Food Security
• Improved Environment
• Income
• Gender Equity
• Organizing and Action for Social Change
• Strengthening Communities
• Policy Change

Utilization-Focused Evaluation (UFE)

• Evaluation done for and with specific intended primary users for specific, intended uses

• Premised on the assertion that evaluations should be judged by their utility and actual use

• Evaluator is charged with giving careful consideration to how everything that is done, from beginning to end, will affect use

• Is personal and situational, with strong emphasis on the “personal factor”


• Does not give primacy to any specific method, model, approach, or ideological orientation (with the exception of an emphasis on use)

• Does emphasize The Program Evaluation Standards as a basis for accountability and quality assurance


• Advance organizers

– What decisions, if any, are the evaluation findings expected to influence?

– When will decisions be made? By whom? When, then, must the evaluation findings be presented to be timely and influential?

– What is at stake in the decisions? For whom? What controversies or issues surround the decision?

– What is the history and context of the decision-making process?

– What other factors (values, politics, personalities, promises already made) will affect the decision making?


• Advance organizers, continued

– How much influence do you expect the evaluation to have, realistically?

– To what extent has the outcome of the decision already been determined?

– What data and findings are needed to support decision making?

– What needs to be done to achieve that level of influence?

– How will we know afterward if the evaluation was used as intended?


Participatory Evaluation

• An extension of the more restrictive stakeholder-based approach (with elements of UFE)

• Emphasis on increasing use through participation

• Includes aspects of organizational learning and capacity building through stakeholder participation


• Evaluator is a coordinator and responsible for technical support, training, and quality control

• Ultimately, the evaluator works collaboratively/in partnership with a select group of intended users


• Two primary forms

– Practical participatory evaluation (PPE)

• Utilization-oriented (with an emphasis on formative evaluation)

– Transformative participatory evaluation (TPE)

• Democratic, emancipatory, empowerment-oriented


• Who controls?

– Technical decision making (evaluator vs. stakeholder)

• Stakeholder selection for participation?

– Stakeholders selected for participation (diverse vs. limited)

• How deep?

– Stakeholder participation (involved in all aspects of inquiry vs. involved as a source for consultation)

[Diagram: original dimensions of PPE]

[Diagram: modified dimensions of PPE (Cullen, 2010)]
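These three dimensions can be read as continua along which any participatory evaluation sits. As a minimal illustrative sketch (not from the lecture; the class name, the 0-to-1 coding, and the HPI values are all invented), a given evaluation's position might be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class PPEDimensions:
    """Position of a participatory evaluation along the three
    PPE dimensions listed above, each coded 0.0-1.0 along its continuum."""
    control: float    # 0.0 = evaluator controls technical decisions; 1.0 = stakeholders control
    selection: float  # 0.0 = limited stakeholder selection; 1.0 = diverse selection
    depth: float      # 0.0 = consulted as a data source; 1.0 = involved in all aspects of inquiry

# Hypothetical placement of a practical participatory evaluation of an
# HPI project; the numbers are invented for illustration.
hpi_ppe = PPEDimensions(control=0.4, selection=0.7, depth=0.6)
print(hpi_ppe)
```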

Theory-Driven/Based Evaluation

• Any evaluation strategy or approach that explicitly integrates and uses stakeholder theories, social science theories, other types of theories, or some combination of these in conceptualizing, designing, conducting, interpreting, and applying an evaluation

• Sometimes referred to as program-theory evaluation, theory-based evaluation, theory-guided evaluation, theory-of-action, theory-of-change, program logic, logical frameworks, outcomes hierarchies, realist or realistic evaluation, and program theory-driven evaluation science


• All, in some form or another, aim to determine how, why, when, and for whom a program works and under what conditions (i.e., causal explanation)


[Diagram: program logic model: Inputs → Activities → Outputs → Initial Outcomes → Intermediate Outcomes → Long-Term Outcomes]

[Diagram: program process theory (action model) and program impact theory (change model). The action model comprises implementing organizations, implementers, associate organizations and community partners, the ecological context, intervention and service delivery protocols, and target populations, drawing on resources from the environment; program implementation links the action model to the change model, in which the intervention affects determinants that in turn produce outcomes.]
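To make the logic-model chain concrete, here is a minimal sketch (not part of the lecture materials; the LogicModel class and all HPI-style entries are invented for illustration) of a program theory encoded as an ordered chain of stages:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A program logic model as an ordered chain of stages:
    inputs -> activities -> outputs -> initial -> intermediate -> long-term outcomes."""
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    initial_outcomes: list[str] = field(default_factory=list)
    intermediate_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

# Invented HPI-style entries, for illustration only.
hpi = LogicModel(
    inputs=["livestock", "training materials", "field staff"],
    activities=["distribute animals", "train recipients in animal management"],
    outputs=["households receiving livestock", "trainings delivered"],
    initial_outcomes=["improved animal management"],
    intermediate_outcomes=["increased household nutrition and income"],
    long_term_outcomes=["reduced poverty", "community self-reliance"],
)

for stage, items in vars(hpi).items():
    print(f"{stage}: {items}")
```

Writing the chain out explicitly is one way to support principle 4 below: each listed construct becomes something the evaluation must measure.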

Core Principles and Subprinciples of Theory-Driven Evaluation

1. Theory-driven evaluations/evaluators should formulate a plausible program theory

a. Formulate program theory from existing theory and research (e.g., social science theory)

b. Formulate program theory from implicit theory (e.g., stakeholder theory)

c. Formulate program theory from observation of the program in operation/exploratory research (e.g., emergent theory)

d. Formulate program theory from a combination of any of the above (i.e., mixed/integrated theory)

2. Theory-driven evaluations/evaluators should formulate and prioritize evaluation questions around a program theory

a. Formulate evaluation questions around program theory

b. Prioritize evaluation questions

3. Program theory should be used to guide planning, design, and execution of the evaluation under consideration of relevant contingencies

a. Design, plan, and conduct evaluation around a plausible program theory

b. Design, plan, and conduct evaluation considering relevant contingencies (e.g., time, budget, use)

c. Determine whether evaluation is to be tailored (i.e., only part of the program theory) or comprehensive

4. Theory-driven evaluations/evaluators should measure constructs postulated in program theory

a. Measure process constructs postulated in program theory

b. Measure outcome constructs postulated in program theory

c. Measure contextual constructs postulated in program theory

5. Theory-driven evaluations/evaluators should identify breakdowns and side effects, determine program effectiveness (or efficacy), and explain cause-and-effect associations between theoretical constructs

a. Identify breakdowns, if they exist (e.g., poor implementation, unsuitable context, theory failure)

b. Identify anticipated (and unanticipated), unintended outcomes (both positive and negative) not postulated by program theory

c. Describe cause-and-effect associations between theoretical constructs (i.e., causal description)

d. Explain cause-and-effect associations between theoretical constructs (i.e., causal explanation)

i. Explain differences in direction and/or strength of relationship between program and outcomes attributable to moderating factors/variables

ii. Explain the extent to which one construct (e.g., intermediate outcome) accounts for/mediates the relationship between other constructs

CIPP Model for Evaluation

• The model’s core concepts are denoted by the acronym CIPP, which stands for evaluations of an entity’s context, inputs, processes, and products

• Generally targeted toward program managers and other decision makers

http://www.wmich.edu/evalctr/archive_checklists/cippchecklist_mar07.pdf


• Context evaluations are applied to assess needs, problems, assets, and opportunities, plus relevant contextual conditions and dynamics, to help decision makers define goals and priorities and to help the broader group of users judge goals, priorities, and outcomes

• Input evaluations serve program planning by helping identify and then assess alternative approaches, competing action plans, staffing plans, and budgets for their feasibility and potential cost-effectiveness to meet targeted needs and achieve defined goals


• Process evaluations are used to assess the implementation of plans, both to help staff carry out activities and, later, to help the broad group of users judge program implementation and expenditures and interpret outcomes

• Product evaluations are used to identify and assess costs and outcomes (intended and unintended, short-term and long-term) and may be divided into assessments of impact, effectiveness, sustainability, and transportability

The Relevance of Four Evaluation Types to Formative and Summative Evaluation Roles

Formative evaluation (prospective application of CIPP information to assist decision making and quality assurance):

– Context: Guidance for determining areas for improvement and for choosing and ranking goals (based on assessing needs, problems, assets, and opportunities, plus contextual dynamics).

– Input: Guidance for choosing a program strategy (based on identifying and assessing alternative strategies and resource allocation plans). Examination of the work plan.

– Process: Guidance for implementing the operational plan (based on monitoring and judging activities and delivering periodic evaluative feedback).

– Product: Guidance for continuing, modifying, adopting, or terminating the effort (based on assessing outcomes and side effects).

Summative evaluation (retrospective use of CIPP information to sum up the effort’s merit, worth, probity, equity, feasibility, efficiency, safety, cost, and significance):

– Context: Comparison of goals and priorities to assessed needs, problems, assets, opportunities, and relevant contextual dynamics.

– Input: Comparison of the program’s strategy, design, and budget to those of critical competitors and to goals and targeted needs of beneficiaries.

– Process: Full description of the actual process and record of costs. Comparison of the designed and actual processes and costs.

– Product: Comparison of outcomes and side effects to goals and targeted needs and, as feasible, to results of competitive programs. Interpretation of results against the effort’s assessed context, inputs, and processes.
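Purely to illustrate the two-by-four structure above (the lookup-table representation is invented, with cell text abbreviated from the table), the role-by-component pairings can be written as:

```python
# (role, component) -> evaluative focus, abbreviated from the table above.
CIPP_MATRIX = {
    ("formative", "context"): "guidance for choosing and ranking goals",
    ("formative", "input"): "guidance for choosing a program strategy",
    ("formative", "process"): "guidance for implementing the operational plan",
    ("formative", "product"): "guidance for continuing, modifying, adopting, or terminating the effort",
    ("summative", "context"): "compare goals and priorities to assessed needs and context",
    ("summative", "input"): "compare strategy, design, and budget to competitors and targeted needs",
    ("summative", "process"): "compare designed vs. actual processes and costs",
    ("summative", "product"): "compare outcomes and side effects to goals, needs, and competing programs",
}

print(CIPP_MATRIX[("summative", "product")])
```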

Consumer-Oriented Evaluation

• Predicated on “values” and “valuing”

• Values (a.k.a. criteria and standards) brought to bear are derived from multiple sources (e.g., definitional, needs of impacted population, legal, ethical, functional/logical)

• Targeted toward those affected by programs (i.e., consumers)

http://www.wmich.edu/evalctr/archive_checklists/kec_feb07.pdf


• Requires evaluators to investigate values in terms of process, outcomes, costs, comparisons, and generalizability under the “Subevaluations” checkpoints in the Key Evaluation Checklist (KEC)

• Explicit integration of empirical “facts” with values (i.e., the fact-value synthesis) as well as the integration of multiple values (i.e., the value synthesis)


• Organized around 15 checkpoints

A. Preliminaries

I. Executive summary
II. Preface
III. Methodology

B. Foundations

1. Background and context
2. Descriptions and definitions
3. Consumers (impactees)
4. Resources (a.k.a. “strengths assessment”)
5. Values

C. Subevaluations

6. Process
7. Outcomes
8. Costs
9. Comparisons
10. Generalizability

D. Conclusions & Implications

11. Synthesis
12. (possible) Recommendations & Explanations
13. (possible) Responsibility & Justification
14. Report & Support
15. Metaevaluation
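As a closing illustration (the tracker itself is invented; only the checkpoint names come from the list above), the 15 checkpoints lend themselves to a simple completion tracker for an evaluation plan:

```python
# The KEC's 15 checkpoints, named as in the list above (Foundations,
# Subevaluations, Conclusions & Implications); the tracker is illustrative.
KEC_CHECKPOINTS = [
    "Background and context", "Descriptions and definitions",
    "Consumers (impactees)", "Resources", "Values",
    "Process", "Outcomes", "Costs", "Comparisons", "Generalizability",
    "Synthesis", "Recommendations & Explanations",
    "Responsibility & Justification", "Report & Support", "Metaevaluation",
]

completed = {name: False for name in KEC_CHECKPOINTS}
completed["Background and context"] = True  # mark off as the evaluation proceeds

remaining = [name for name, done in completed.items() if not done]
print(f"{len(remaining)} of {len(KEC_CHECKPOINTS)} checkpoints remaining")
```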