
Page 1

Formative Evaluation

Prof. PGM Mujinja, PhD
SPHSS, MUHAS
October 2013

Page 2

Types of Evaluation

• Formative
• Process
• Summative

• What is the difference?
  – “When the cook tastes the soup, that’s formative; when the guests taste the soup, that’s summative.” (Scriven, 1996)

Page 3

Primary Types of Evaluation

• Formative
  – Needs assessment/diagnostic
  – Pretesting (communication programs)
• Process
  – Monitoring implementation
  – Special studies on quality, access, reach
• Summative
  – Monitoring of service utilization
  – Monitoring of behavior or health status
  – Impact assessment
  – Cost effectiveness

Page 4

Formative Evaluation: Definitions

• Starts during a project’s development stages and uses theory to develop and plan the project’s components, development and pilot testing.
• It informs the direction a project will take.
• Pre-testing is a type of formative evaluation and involves trying out some of a project’s parts before it is launched in full.
  – It assesses a project’s relevance to identified health problems, and the practicality of different intervention methods.

Page 5

Formative Evaluation: Definitions

• Formative evaluation is typically conducted during the development or improvement of a program or product (or person, and so on), and it is conducted, often more than once, for the in-house staff of the program with the intent to improve.
• The reports normally remain in-house, but serious formative evaluation may be done by an internal or an external evaluator or, preferably, a combination; of course, many program staff are, in an informal sense, constantly doing formative evaluation.

Page 6

What do different evaluation types look at?

• A formative evaluation might identify features from a literature search that support a particular approach; for example, motivational interviewing combined with reinforcement and follow-up. This information can be used to develop the best possible project for this setting.
• A pilot study with a small sample of patients could test the evaluation, and this could lead to refinements to the project.
• For example, patients from the pilot study of a nutritional project might say they prefer to have detailed discussions about their diet with a dietician rather than a physician. This could lead to changes to the project’s structure or format before it is launched in full.

Page 7

Why formative evaluation?

• Guides the design of a program
• Different types:
  – Needs assessment
  – Baseline/diagnostic (formative) research
• The purpose of formative evaluation is to validate or ensure that the goals of the intervention are being achieved and to improve the programme, if necessary, by identifying and then remediating problematic aspects.
• Formative evaluation is conducted to provide program staff with evaluative information useful in improving the program:
  – “is research-oriented vs. action-oriented”
  – “evaluations are intended - by the evaluator - as a basis for improvement”
  – “the summative vs. formative distinction is context dependent”

Page 8

Formative evaluation---

• Helps to minimize failures in:
  – Misconception
  – Project theory
  – Implementation theory

Page 9

Formative evaluation: conceptual framework

• Program theory
  – Articulates the pathways by which an intervention is expected to cause the desired outcomes
  – Provides the evaluator with specific elements to assess
  – Other names: logic model, program model, outcome line, cause map, action theory
• Often draws on:
  – One or more theories
  – Empirical evidence
  – Knowledge specific to the particular case
• Serves to:
  – Summarize and integrate knowledge
  – Provide explanations for causal linkages
  – Generate hypotheses

Page 10

Building a conceptual model

• Start with the endpoint (dependent variable, outcome, or target point for intervention)
• Identify potential correlates, based on empirical or theoretical evidence
• Show antecedent or mediating variables by proximity to dependent variables

Page 11

Conventions for drawing a conceptual model

• Only include concepts that will be operationally defined and measured
• Present left-to-right or top-to-bottom
• Use arrows to imply causality
• Label concepts succinctly
• Do not include operational definitions or values of variables in the model
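
These conventions lend themselves to a small illustration. Below is a minimal Python sketch, not from the lecture: the concept names are hypothetical (loosely echoing the family planning framework on the next slide), and the model is stored as a list of cause-to-effect edges. Because arrows imply causality, the sketch also checks that the model contains no causal loops before printing it left-to-right.

```python
# A minimal sketch of a conceptual model as a directed graph, following the
# drawing conventions above: succinctly labelled concepts, arrows implying
# causality, and no operational definitions or variable values in the model.
# The concept names below are hypothetical illustrations.

from collections import defaultdict

# Each edge reads "cause -> effect", presented left to right.
edges = [
    ("Exposure to FP messages", "Knowledge of methods"),
    ("Knowledge of methods", "Demand for family planning"),
    ("Access to services", "Demand for family planning"),
    ("Demand for family planning", "Contraceptive use"),
    ("Contraceptive use", "Fertility"),
]

def check_acyclic(edges):
    """Arrows imply causality, so a well-formed model must not loop back."""
    graph = defaultdict(list)
    for cause, effect in edges:
        graph[cause].append(effect)
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return
        if node in visiting:
            raise ValueError(f"Causal loop detected at: {node}")
        visiting.add(node)
        for nxt in graph[node]:
            visit(nxt)
        visiting.remove(node)
        done.add(node)

    for node in list(graph):
        visit(node)

check_acyclic(edges)
for cause, effect in edges:
    print(f"{cause} --> {effect}")
```

A model kept in this form can grow with the evaluation: each concept added to the edge list is, by the first convention above, a commitment to define and measure it operationally.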

Page 12

Conceptual Framework of Family Planning Demand and Program Impact on Fertility (diagram not reproduced in the transcript)

Page 13

Formative Evaluation---Steps in conducting a needs assessment

• Become familiar with the political context
• Identify users and uses
• Identify the target population (geographic, socio-demographic)
• Inventory existing services (what gaps exist?)
• Identify needs
• Prepare a document
  – Evidence, benchmarks, conclusions
• Communicate findings, implement

Page 14

Developing evaluation capacity through a participatory approach

• Adopting a participatory approach will increase the capacity of those undertaking an evaluation or potentially affected by it, and also help manage expectations about the evaluation and increase the uptake of any resulting recommendations.

• In the context of the framework, the creation of a participatory environment is not just about involving the community, local government or government agencies during the gathering of information; it is also about ensuring that the staff or organisation associated with managing or delivering the program has ownership of the process too.

• Participation by partners in an evaluation should not just involve them at the point of gathering information, but at every stage of the process. This might include:
  – scoping for the evaluation, including understanding of the project issues
  – evaluation design
  – information needs, monitoring and data management
  – evaluation of findings against evaluation questions
  – reporting or communicating evaluation findings and negotiating any program changes.

Page 15

…..Participatory evaluation

• Some things to consider when developing a participatory approach to evaluation include:
  – How much participation is relevant for each partner? This will depend on the role of the stakeholder in the program and the reason for involving them.
  – Do not force stakeholders to participate as a single group if doing so diminishes what the evaluation is trying to achieve or the motivation of each group to participate.
  – Ensure that participation is worthwhile for the partners; partners will often become and remain involved if they see some benefit from the investment of their time and resources.
  – Ensure that the participation is relevant to the evaluation.

Page 16

….Participatory evaluation

• The following questions should be considered when looking to engage stakeholders in the evaluation process:
  – When is participation important for the evaluation, and who are the most relevant stakeholders to involve at each of the stages?
  – Who is going to use the evaluation findings, and for what purpose, e.g. accountability, program improvement or high-level decision-making?
  – Will the analysis of information against evaluation questions require specialist skills or a broad range of skills?
  – Are the roles and responsibilities for evaluation clear, or is a stakeholder’s help needed in defining those roles?
  – Will there be a requirement for capacity building with stakeholders in advance of, or during, the evaluation process?

Page 17

….Participatory evaluation

• What would attract stakeholders to participate?
  – perceived benefits
  – relevance to the priorities of the partners
  – quick and relevant feedback to participants
  – capacity of the program to act on issues or recommendations that arise from the partners
  – capabilities, leadership and maturity of the group
  – willingness of groups to be open or acknowledge ethical issues as part of a two-way process of trust.

Page 18

Formative Evaluation--- Other issues to consider

• Compare current levels and types of services to benchmark or reference points:
  – Conception of human needs
  – Moral/ethical values (“no child left behind”)
  – Levels of service provided elsewhere
  – Service provider opinions/preferences
  – Client (current, prospective) opinions

Page 19

Formative evaluation: quantitative and qualitative

• Quantitative (demographic, epidemiological):
  – To quantify the extent of the problem
  – To identify subgroups most affected
  – To identify and explain determinants
• Qualitative:
  – To understand the problem from the user perspective and identify barriers

Page 20

Information requirements

• Evaluations use both existing and new information in their assessment of a project or program.
• The evaluation context should determine the evaluation type (or combination of types) required.
• From this information, and using the design criteria, an evaluation team can start to define the methods needed.
• A combination of methods will most likely mean that a mix of qualitative and quantitative information is analysed.
  – This is often referred to as a ‘mixed method’ or ‘triangulation’ approach to evaluation, and combining both sets of information may lead to a richer base of evaluative information.

Page 21

Potential data sources

• Catchment target performance measures
• Published literature
• Existing catchment reports
• Programme annual reports and annual implementation plans
• Scenario modelling (if it already exists)
• Project reports or diaries, etc.
• Case studies
• Surveys or interviews
• Externalities: activities that are outside the control of the programme but may impact (positively or negatively) on the ability to achieve the management and catchment targets.


Page 22

Sources of data: primary and secondary

• Literature reviews
• Similar studies
• Demographic statistics
• Government reports
• Surveys (mail, phone, in person)
• Focus groups
• Interviews
• Direct observation

Page 23

Data collection

• Quantitative
  – Survey among the intended audience
  – Often uses quota samples
• Qualitative
  – Focus groups
  – In-depth interviews
  – Case studies
  – Ethnographic studies

Page 24

Levels of measurement

• Population-based:
  – Data collected from (ideally) a representative sample of the target population
  – Measures coverage (outcomes among the general public)
• Program-based:
  – Data collected from clients or participants exposed to the program

Page 25

Sample size

• Depends on the instrument; suggestions:
  – Interviews: 20 or more per target group (men/women, urban/rural, young/older adult)
  – Focus groups: two groups per “category” of intended audience
  – In-depth interviews: 5 per subgroup
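
As a quick illustration of the arithmetic these rules of thumb imply, here is a minimal sketch. It assumes (the slide does not state this) that subgroups are formed by crossing the three example dimensions into eight cells:

```python
# A minimal sketch of the sample-size rules of thumb above. The per-instrument
# counts come from the slide; crossing the example dimensions (men/women,
# urban/rural, young/older adult) into subgroups is an assumption.

from itertools import product

dimensions = [("men", "women"), ("urban", "rural"), ("young adult", "older adult")]
subgroups = list(product(*dimensions))   # 2 x 2 x 2 = 8 subgroups

interviews = 20 * len(subgroups)    # 20 or more interviews per target group
focus_groups = 2 * len(subgroups)   # two focus groups per "category"
in_depth = 5 * len(subgroups)       # 5 in-depth interviews per subgroup

print(f"{len(subgroups)} subgroups")
print(f"Interviews (minimum): {interviews}")
print(f"Focus groups:         {focus_groups}")
print(f"In-depth interviews:  {in_depth}")
```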

Page 26

Data processing and reporting

• QUALITATIVE
  – If possible, tape the sessions
  – Transcribe tapes or use notes
  – Identify main themes of interest
  – Match comments to themes
  – Identify trends
• QUANTITATIVE
  – Code the data
  – Tabulate the data
  – Analyze and present the results
  – Computerized vs. manual
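
To make the two processing streams concrete, the sketch below pairs them: qualitative comments are matched to predefined themes with a simple keyword map, and coded quantitative responses are tabulated. The themes, keywords, and data are hypothetical, and real qualitative coding is of course more judgement-driven than keyword matching.

```python
# A minimal sketch of the two processing streams above: matching transcribed
# comments to themes (qualitative) and tabulating coded data (quantitative).
# All themes, keywords, and responses below are hypothetical.

from collections import Counter

# Qualitative: match comments to main themes of interest.
themes = {
    "access": ["distance", "transport", "queue"],
    "privacy": ["confidential", "private"],
    "cost": ["fee", "price", "afford"],
}
comments = [
    "The queue was too long and transport is expensive",
    "I worry the visit is not confidential",
    "I cannot afford the fee",
]
theme_counts = Counter()
for comment in comments:
    for theme, keywords in themes.items():
        if any(word in comment.lower() for word in keywords):
            theme_counts[theme] += 1

# Quantitative: tabulate coded responses (e.g. 1 = used service, 0 = did not).
responses = [1, 0, 1, 1, 0, 1]
table = Counter(responses)

print("Theme frequencies:", dict(theme_counts))
print("Coded tabulation: ", dict(table))
```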

Page 27

Analysis and reporting

• It is critical that the data are analysed at key stages in the project. This may simply be at the project’s completion, or it may be at agreed times throughout the project to allow for interim feedback.
• Even basic monitoring of data can be extremely useful if fed into a project’s development. For example, the monthly analysis of attendance at a VCT centre may show changing patterns in attendance due to an external factor that can be easily rectified (a simple interim check of this kind is sketched after this slide).
• The type of analysis needs to be closely related to the study design, with appropriate statistical tests chosen for quantitative analysis or an appropriate analytical method for qualitative studies.
• The style of reporting should also be related to the readers’ needs and the evaluation’s aims. For example, a manuscript for an academic journal reporting the results of a controlled trial will have a very different style from a report to a project board on participants’ views. The same evaluation data may be used for different purposes, provided reports are suitably adapted.
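
As a minimal sketch of the interim-monitoring idea above: the attendance figures are hypothetical, and the 1.5 standard deviation threshold is an arbitrary illustration rather than a recommended rule.

```python
# A minimal sketch of interim monitoring: monthly VCT attendance is compared
# against the overall mean, and months that deviate sharply are flagged for
# follow-up. All figures and the flagging threshold are hypothetical.

from statistics import mean, stdev

attendance = {
    "Jan": 112, "Feb": 108, "Mar": 121,
    "Apr": 117, "May": 64, "Jun": 119,  # May's drop may reflect an external factor
}

counts = list(attendance.values())
avg, sd = mean(counts), stdev(counts)

for month, n in attendance.items():
    flag = "  <-- investigate" if abs(n - avg) > 1.5 * sd else ""
    print(f"{month}: {n:4d}{flag}")
```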