Models in Evaluation


DESCRIPTION

This presentation briefly discusses six types of research evaluation models.

TRANSCRIPT

SYNOPSIS OF THE SEMINAR WORKSHOP

• Definitions, context, objectives, and functions
• Evaluation designs: characteristics
• Types of evaluation models:
  1. ABCD Model
  2. CIPP Model
  3. Provus Discrepancy Model
  4. Kerrigan Model
  5. Goal-Based Evaluation (GBE)
  6. Goal-Free Evaluation (GFE)

Evaluation Designs: Characteristics
1. Seeks to clarify planning alternatives and to improve the program.
2. Loyalty is to a particular program or project; the choice of evaluation topics is determined by the information needs of decision makers.
3. Methodological choices are scientific but non-experimental, and quite often naturalistic.
4. The time frame for the production of results is set by the program.
5. Professional rewards consist in the utilization of findings by decision makers and in demonstrated improvement in program implementation.

MODELS OF RESEARCH EVALUATION
1. ABCD Model - Jesus Ochave
2. CIPP Model - Daniel Stufflebeam
3. Provus Discrepancy Model - Malcolm Provus
4. Kerrigan Evaluation Model - John Kerrigan
5. Goal-Based Evaluation - Ralph Tyler
6. Goal-Free Evaluation - Michael Scriven

ABCD MODEL
By: Dr. Jesus Ochave

Background
• An evaluation model developed by Jesus Ochave (1994).
• Comparable with classical models of evaluation such as Stufflebeam's CIPP Model and the Provus Discrepancy Model.
• Comprehensive enough to allow the evaluator to evaluate the aspects of the program that bear on its effectiveness.
• Flexible; it allows the evaluator to trace the causal factors that explain program effects without necessarily going through a mass of other data.

Four Major Components of ABCD Model

A - the respondents/subjects/clientele
B - the program/operations
C - the effects
D - the social impact

Each of these components has two dimensions: the intents/plans and the actualities/observations.

STEPS
1. Identify who the respondents are.
2. Provide a program/operation that best suits or answers the needs of the respondents.
3. Check whether the program/operation provided has better effects on the respondents' cognitive or affective capabilities.
4. Verify whether the operation provided makes the respondents useful members of the community.

The Model
• Spherical structure - symbolizes the flexibility of the components, which will actually depend on the kind of program being evaluated.
• Polygon with many sides - represents the many faces or areas of concern of the program.
• Vectors/arrows - represent the direction of the effects. Component A affects components B, C, and D. This means that the program has to be oriented according to the type of clientele, and the set of learning experiences must conform with the objectives of the program to have at least an impact on the clientele.

• Broken-line arrow - from component A to component C; represents an actuality of the effects on students. No matter what, the effects will vary depending on the actual students/clientele.
• The discrepancy between the standards (solid lines) and the actual performance (broken lines) is indicated by the gap between the two boxes. The bigger the gap between the intents and the actualities, the less positive the evaluation; the smaller the gap, the more positive the evaluation.
• The gaps explain the nature of the effects and impact. The data on discrepancies between the intents and the actualities inform the management of the educational program.
• The two lines (solid and broken) approximating each other indicate an outstanding or excellent dimension, meaning that "what should be" is "what exists". However, if the gap between the two boxes is big, then the discrepancy between "what is" and "what should be" is also big.
• The intended effects are the stated objectives or goals of the program, and the social impact as intended represents the visions of the program.
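To make the intents-versus-actualities comparison concrete, here is a minimal illustrative sketch in Python. It is not part of Ochave's model; the 0-to-1 ratings, the component labels, and the 0.10 threshold are assumptions invented for the example.

```python
# Illustrative sketch only: ABCD intents vs. actualities.
# The ratings, labels, and threshold below are hypothetical.

components = {
    "A (respondents/clientele)": {"intent": 0.90, "actuality": 0.85},
    "B (program/operations)":    {"intent": 0.80, "actuality": 0.60},
    "C (effects)":               {"intent": 0.75, "actuality": 0.70},
    "D (social impact)":         {"intent": 0.70, "actuality": 0.40},
}

for name, dims in components.items():
    gap = dims["intent"] - dims["actuality"]  # smaller gap = more positive evaluation
    verdict = "close to 'what should be'" if gap <= 0.10 else "notable discrepancy"
    print(f"{name}: gap = {gap:.2f} ({verdict})")
```

The point of the sketch is the decision rule, not the numbers: a small gap means "what exists" approximates "what should be", while a large gap flags that dimension for attention.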

BOTIKA NG BARANGAY (village pharmacy; sample application of the ABCD Model)
• A - Clients: acceptance, utilization, satisfaction
• B - Program/operation: implementation of the program/operation
• C - Effects: savings, health-related benefits
• D - Social impact: mortality rate

CIPP MODEL
Daniel Stufflebeam

• A decision-focused approach to evaluation.
• Emphasizes the systematic provision of information for program management.
• Makes evaluation directly relevant to the needs of decision makers during the different phases and activities of a program.
• Provides useful information to decision makers, with the overall goal of program or project improvement.

CIPP Model (Daniel Stufflebeam):
• Context: vision, mission, philosophy, objectives
• Input: faculty, curricular facilities, library
• Process: monitoring of instruction, management of student activities, student services
• Product: student performance (cognitive, affective, psychomotor)

Aspect of Evaluation | Type of Decision        | Kind of Question Answered
Context evaluation   | Planning decisions      | What should we do?
Input evaluation     | Structuring decisions   | How should we do it?
Process evaluation   | Implementing decisions  | Are we doing it as planned?
Product evaluation   | Recycling decisions     | Did it work?

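As an illustration only, the table above can be expressed as a lookup structure in Python. The dictionary layout and the loop are assumptions made for the sketch, not part of Stufflebeam's model; the labels are taken directly from the table.

```python
# Illustrative lookup of CIPP stages -> (decision type, guiding question),
# using the labels from the table above.

CIPP = {
    "Context": ("Planning decisions",     "What should we do?"),
    "Input":   ("Structuring decisions",  "How should we do it?"),
    "Process": ("Implementing decisions", "Are we doing it as planned?"),
    "Product": ("Recycling decisions",    "Did it work?"),
}

for stage, (decision, question) in CIPP.items():
    print(f"{stage} evaluation -> {decision}: {question}")
```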

Statement of the Problem:

The main purpose of the study is to evaluate the Community District Hospital Nursing Service from January 2007 to 2009. The findings serve as baseline data for the envisioned hospital accreditation and expansion.

Specifically, it seeks to answer the following questions:

1. What is the existing status of the Community District Hospital Nursing Service in terms of the following variables, as perceived by the Nursing Service personnel?
   1.1 Context Variables
       1.1.1 Philosophy and Objectives
       1.1.2 Organization
       1.1.3 Policies
   1.2 Input Variables
       1.2.1 Resource Management
       1.2.2 Material Management
       1.2.3 Financial Management
   1.3 Process Variables
       1.3.1 Patient Care Management
       1.3.2 Reporting/Recording
       1.3.3 Interdepartmental Relations
   1.4 Product Variable
       1.4.1 Quality Nursing Service

Six Essential Steps in Evaluation
1. Develop a list of standards that specify the characteristics of ideal implementation of a learning program.
2. Determine the information required to compare the actual implementation with the defined standards.
3. Design methods to obtain the required information.
4. Identify the discrepancies between the standards and the actual learning program.
5. Determine the reasons for the discrepancies.
6. Eliminate the discrepancies by making changes to the implementation of the learning program.
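As a minimal sketch of steps 2-4 (comparing the actual implementation with the defined standards and listing the discrepancies), the check can be expressed in Python. Every field name and value below is hypothetical, chosen only to illustrate the comparison:

```python
# Illustrative discrepancy check (steps 2-4 above): compare defined
# standards with actual observations. All fields and values are hypothetical.

standards = {"class_size": 30, "contact_hours": 40, "pass_rate": 0.85}
actuals   = {"class_size": 42, "contact_hours": 40, "pass_rate": 0.78}

discrepancies = {
    area: (standards[area], actuals[area])
    for area in standards
    if standards[area] != actuals[area]
}

for area, (expected, observed) in discrepancies.items():
    print(f"Discrepancy in {area}: standard={expected}, actual={observed}")
```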

Four Specific Stages
I - Program Definition: assessment of the program design by defining the necessary inputs, processes, and outputs, and then evaluating the comprehensiveness and internal consistency of the design.
II - Program Installation: assessment of the degree of program installation against Stage I.
III - Program Process: assessment of the relationship between the variables to be changed and the process used to effect the change.
IV - Program Product: assessment of whether the design of the program achieved its major objective.

4 Choices in Discrepancy (a control loop, sketched after this list):
1. Proceed to the next stage of evaluation if no discrepancy exists.
2. If a discrepancy exists, recycle through the existing stage after there has been a change in either the program's standards or operations.
3. If #2 cannot be accomplished, recycle back to Stage I (Program Definition) to redefine the program, then begin the discrepancy evaluation again at Stage I.
4. If #3 cannot be accomplished, terminate the program.
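The Python sketch below is illustrative only; the function names (evaluate_stage, adjust_stage, redefine_program) are hypothetical stand-ins for the evaluator's judgment, not Provus's terminology.

```python
# Illustrative control flow for the four discrepancy choices above.
# evaluate_stage returns True if a discrepancy exists; adjust_stage and
# redefine_program return True if the change succeeded. All hypothetical.

def run_discrepancy_evaluation(stages, evaluate_stage,
                               adjust_stage, redefine_program):
    i = 0
    while i < len(stages):
        if not evaluate_stage(stages[i]):
            i += 1                 # choice 1: no discrepancy, proceed
        elif adjust_stage(stages[i]):
            continue               # choice 2: recycle through the existing stage
        elif redefine_program():
            i = 0                  # choice 3: recycle back to Stage I
        else:
            return "terminate"     # choice 4: terminate the program
    return "complete"

# Example run: pretend no discrepancies are found at any stage.
result = run_discrepancy_evaluation(
    ["Definition", "Installation", "Process", "Product"],
    evaluate_stage=lambda stage: False,
    adjust_stage=lambda stage: False,
    redefine_program=lambda: False,
)
print(result)  # -> "complete"
```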

Effectiveness of the Provus Discrepancy Model:

• When the type of evaluation desired is formal and the program is in the formative, rather than summative, stage.

• When evaluation is defined as continuous information management addressing program improvement and assessment, and where evaluation is a component of program development.

• Where the purpose of evaluation is to improve, maintain or terminate a program.

• Where the key emphasis of evaluation is program definition and program installation.

• Where the roles of the evaluator are those of facilitator, examiner of standards, observer of actual behaviors and design expert.

• When, at each stage of evaluation, program performance is compared with program objectives (standards) to determine discrepancies.

• Where the program evaluation procedure is designed to identify weaknesses and to make determinations about correction or termination.

• Where the theoretical construct is that all stages of programs continuously provide feedback to each other.

• Where the criteria for judging programs include carefully evaluating whether:
  - the program meets established program criteria;
  - the actual course of action taken can be identified; and
  - a course of action can be taken to resolve all discrepancies.

Statement of the Problem

The study assessed the ABC College of Nursing based on CHED standards, pursuant to CMO No. 30, s. 2001, in an attempt to prepare a Five-Year Development Plan.

Specifically, the study sought answers to the following questions:

• Is there a discrepancy between the standards and the existing conditions?

• What problems are exhibited in the 10 areas identified?

• What development plan for the College of Nursing could be evolved for the next 5 years?

Provus Discrepancy Evaluation Model
by Malcolm Provus

Model elements:
S - Standard
P - Program Performance
C - Comparison of S with P
D - Discrepancy Information Resulting from C
T - Terminate
A - Alteration of P or S

(Flowchart: S and P feed into C; C yields D; and D leads either to A, altering P or S, or to T.)

KERRIGAN EVALUATION MODEL
by John Kerrigan

Kerrigan Evaluation Model (Pre-Service Orientation Program for Staff Nurses)

1. Training Activity (Reaction Criteria): Did the trainees enjoy the training?
2. Trained Persons (Learning Criteria): What did the trainees learn?
3. The Job/Organization (Behavioral Criteria): Did the trainees' behavior change on the job?
4. Results in Job and Organizational Performance (Results Criteria): Did the organization or project improve in performance?
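Purely as an illustration (the tuple layout and the loop are assumptions, with the labels copied from the four criteria above), the model can be laid out in Python:

```python
# Illustrative layout of the four Kerrigan criteria listed above.

KERRIGAN_CRITERIA = [
    ("Reaction Criteria",   "Training Activity",
     "Did the trainees enjoy the training?"),
    ("Learning Criteria",   "Trained Persons",
     "What did the trainees learn?"),
    ("Behavioral Criteria", "The Job/Organization",
     "Did the trainees' behavior change on the job?"),
    ("Results Criteria",    "Job and Organizational Performance",
     "Did the organization or project improve in performance?"),
]

for criterion, focus, question in KERRIGAN_CRITERIA:
    print(f"{criterion} ({focus}): {question}")
```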

Sample Research
An Evaluation of the Pre-Service Orientation Program for Newly Employed Staff Nurses in Hospitals: A Kerrigan Evaluation Model

Statement of the Problem

• The major aim of this study was to evaluate the Pre-Service Orientation Program for newly employed staff nurses of government and private hospitals in Cebu City from January to June 2009.

Specifically, it sought to answer the following questions:

1. Reaction Criteria
   1.1 What is the profile of the trainees?
   1.2 What are the reactions of the trainees regarding the program/training?

2. Learning Criteria
   2.1 What did the trainees learn (cognitive, affective, psychomotor)?


3. Behavioral Criteria
   3.1 Did the trainees' behavior change on the job (before and after)?

4. Results Criteria
   4.1 Did the organization/institution improve in performance based on the training?

GOAL-FREE EVALUATION

PRESENTATION OUTLINE
A. Definition of Goal-Free Evaluation (GFE)
B. Purpose and activities of GFE
C. Arguments for GFE
D. Four reasons for doing GFE
E. Implementation considerations
F. Framework

• An inductive and holistic strategy aimed at countering the logical-deductive limitations inherent in the usual quantitative, goals-based approach to evaluation.

• It involves gathering data on a broad array of actual effects and evaluating the importance of these effects in meeting demonstrated needs without being constrained by a narrow focus on stated goals.

• The GF evaluator avoids learning the stated purpose/goals/intended achievements of the program prior to or during the evaluation, and interviews program consumers.

• Only the program's actual outcomes and measurable effects are studied, and these are judged on the extent to which they meet demonstrated participant needs.

• This prevents tunnel vision, that is, looking at the program only as it pertains to the intended goals, at the risk of overlooking many positive and/or negative unintended side effects.

• The evaluator thus can be open to whatever data emerge from the phenomena of the program.

• Lends itself to qualitative methods, as it relies heavily on description and direct experience with the program.

• The GF evaluator asks, "What does the program actually do?" rather than, "What does the program intend to do?"

• "Merit is determined by relating program effects to the relevant needs of the impacted population" (Scriven, 1991, p. 180).

• A comprehensive needs assessment is conducted simultaneously with data collection.

• "The evaluator should provide experiential accounts of program activity so that readers of the report can, through naturalistic generalization, arrive at their own judgment of quality in addition to those the evaluator provides" (Stake, 2004, in Alkin, 2004, p. 215).

• Can be conducted along with goals-based evaluation, but with separate evaluators using each approach, to maximize its strengths and minimize its weaknesses, as proposed by Scriven.

• Defining question: What are the actual effects of the program on clients (without regard to what staff say they want to accomplish)? To what extent are real needs being met?

Arguments for the Utilization of GFE

• It may identify unintended positive and negative side effects and other context-specific information.

• As a supplement to a traditional evaluation, it serves as a form of triangulation of both data collection methods and data sources.

• It circumvents the traditional outcome evaluation and the difficulty of identifying true current goals and true original goals, and then defining and weighing them.

• It is less intrusive to the program and potentially less costly to the client.

• It is adaptable to changes in needs or goals.

• By reducing interaction with program staff, it is less susceptible to social, perceptual, and cognitive biases.

• It is reversible; an evaluation may begin goal-free and later become goal-based, using the goal-free data for preliminary investigative purposes.

• It is less subject to bias introduced by intentionally or unintentionally trying to satisfy the client, because the evaluator is not told explicitly what the client is attempting to do; it offers fewer opportunities for evaluator bias or corruption because the evaluator is unable to clearly determine ways of cheating.

• For the evaluator, it requires increased effort, identifies incompetence, and enhances the balance of power among the evaluator, the evaluee, and the client.

• It focuses on human experience and what people actually do and feel, and allows for understanding how program implementers deal with nonprogrammed decisions [1] (Stake, 2004; in Alkin, 2004).

[1] Nonprogrammed decisions are decisions regarding relatively novel problems, or problems that an individual, group, organization, or entity has never encountered (George & Jones, 2000).

4 Reasons for Doing GFE (Scriven, 1972)

1. To avoid the risk of narrowly studying stated program objectives and thereby missing important unanticipated outcomes;
2. To remove the negative connotations attached to the discovery of unanticipated effects, because "the whole language of 'side effects' or 'secondary effects' or even 'unanticipated effects' tended to be a putdown of what might well be the crucial achievement, especially in terms of new priorities";
3. To eliminate the perceptual biases introduced into an evaluation by knowledge of goals; and
4. To maintain evaluator objectivity and independence through goal-free conditions.

GFE as a Supplement to GBE
• The weaknesses in exclusively using any one approach (either GBE or GFE) are significantly minimized by combining the two; consequently, the validity of the synthesized final evaluation is enhanced.
• GBE and GFE are combined by having the GB and GF evaluators design and conduct their evaluations independently.
• In synthesizing the GBE and GFE, the evaluators interpret the data while discussing whether the evaluation methods, results, and conclusions support or contradict each other; the evaluators (possibly with key stakeholders) then weigh the data from both approaches to reach an evaluative conclusion.

Goal-Based Evaluation
(Diagram relating goals/objectives, implementation, and formative/summative evaluation.)