WBI EVALUATION UTILIZATION

How Much Does Evaluation Matter?

A Follow-up on the Tracer Study on Training-of-Trainers Seminars in Africa

Mita Marra

WBI Evaluation Studies Number ES99-27

World Bank Institute The World Bank Washington, D.C.

Copyright © 1999 The International Bank for Reconstruction and Development/The World Bank, 1818 H Street, N.W., Washington, D.C. 20433, U.S.A.

The World Bank enjoys copyright under protocol 2 of the Universal Copyright Convention. This material may nonetheless be copied for research, educational, or scholarly purposes only in the member countries of The World Bank. Material in this series is subject to revision. The findings, interpretations, and conclusions expressed in this document are entirely those of the author(s) and should not be attributed in any manner to The World Bank, to its affiliated organizations, or the members of its Board of Directors or the countries they represent. If this is reproduced or translated, WBI would appreciate a copy.


TABLE OF CONTENTS

I Introduction

II Intended Use of the Evaluation

III Nuances of the Evaluation Design

IV Actual Use of the Evaluation

A. The Instrumental Role

B. Enlightenment

V Collaborating for Improving Evaluations and Programs

Endnotes


HOW MUCH DOES EVALUATION MATTER?

A Follow-up on the Tracer Study on Training-of-Trainers Seminars in Africa

"More recent definitions of evaluation ... emphasize providing useful information for program improvement and decision making. This broader conceptualization has directed attention to the political nature of evaluation, the need to integrate evaluation into program processes, working with stakeholders throughout the evaluation process and laying a solid foundation for the use of evaluation. "

Michael Quinn Patton, 1997.

I Introduction

This paper offers empirical evidence of the utilization of the evaluation findings of two Training-of-Trainers (TOT) seminars held by the Environment and Natural Resources group of the World Bank Institute (WBI)1 in Kenya and Ghana, in 1996 and 1997 respectively. The perspective from which utilization is considered is mainly that of the task managers,2 who requested a tracer study3 on the two workshops. This analysis sheds light on how evaluation-informed knowledge has led to modifications of program implementation and goals.

Two explanatory models are adopted here to understand how the findings of this particular tracer study evaluation have been used, namely the instrumental model and the enlightenment model.

The instrumental perspective assumes a rational decision-making process; decision makers have clear goals, seek direct attainment of these goals, and have access to relevant information, including evaluation information.

From the enlightenment perspective, on the other hand, users base their decisions on a gradual accumulation and synthesis of information. Weiss describes enlightenment evaluation as providing a background of working understandings and ideas that "creep" into decision deliberations.4 Evaluation results are incorporated gradually into the user's overall frame of reference; in this perspective, evaluation is used not only in action but also in thought.5 As Vedung points out, program implementers, designers, and other stakeholders receive cognitive and normative insights through the evaluation, which may contribute to a thorough scrutiny of the premises of the program and a deeper understanding of program merits and limitations.7

The two models are not mutually exclusive; rather, they complement each other. Both are construed as empirical theories. As shown in the following paragraphs, there are cases in which policy-makers set clear targets and, by means of evaluation, succeed in achieving them in ways that resemble instrumental use. In most situations, however, reality deviates from the instrumental model. "Evaluations may have significant consequences but not in the linear sequence asserted by the instrumental view. The role of evaluation in political and administrative contexts is many-sided, subtle, and complex."8

II The Intended Use of the Evaluation

The two TOT seminars were included in a multi-year program on Agricultural Policy Analysis and Institutional Reform carried out by the Environment and Natural Resources group (WBIEN) of the World Bank Institute with twenty-three partner institutions in Africa. Specifically, the workshops were organized for trainers coming from African universities and governments. The goal of the program was to build analytical and training capacity in national governments and universities to support agricultural policy reforms.

The end-of-seminar (EOS) evaluation of the Ghana workshop indicated that the seminar should have been tailored more specifically to targeted trainer-participants' needs. As another TOT event was to be held in Swaziland, task managers felt the need to go beyond the EOS evaluation results. Their purpose was to obtain a deeper understanding of participants' learning and capacity to apply new teaching skills. These outcomes take some time to fully play out; as EOS evaluations are carried out immediately after training events, such evaluations are unable to detect these longer-term outcomes.

More specifically, the task manager was mainly concerned with gaining insights about the pedagogical aspects of the two workshops; that is, whether trainers adopted the training methodologies and materials in their own curricula. Whether, and how much, the workshops had contributed to creating networks and partnerships among universities was also a relevant issue.

In this context, the tracer study evaluation was expected to provide feedback to improve subsequent offerings of the training program. Incorporating a set of key pedagogical recommendations into the subsequent seminar therefore represented the intended short-run utilization of the evaluation findings.


III Nuances of the Evaluation Design

The evaluation took the form of a tracer study, incorporating both workshops because of their similar formats and low numbers of participants. This was a formative evaluation, as it was designed to help adjust the program according to participants' suggestions and needs. It was also a summative evaluation, to the extent that it was expected to provide insights into the effectiveness of the program itself, particularly the extent to which the seminars facilitated the desired networking among African universities.

The evaluation questionnaire was formulated by an evaluation team in collaboration with the two task managers. It was designed for trainer-respondents and comprised five questions.9 The questionnaire solicited respondents' self-assessment of their learning about specific economic and policy analysis tools. Furthermore, it asked whether the training was relevant to job needs and whether the teaching skills had been applied in participants' work environments.

In terms of Kirkpatrick's four-level model10 of training evaluation, the Agricultural Policy tracer evaluation focused on levels two and three, that is, participants' self-reported learning and change in behavior on the job as a result of the training. Questions 1 and 3 addressed these levels of interest directly, as follows:

To what extent do you feel the workshop you attended enhanced your capacity as a trainer and facilitator: (a) To prepare training activities; (b) To prepare and utilize case studies; (c) To prepare and deliver more effective presentations; (d) To facilitate more effective group and team work?

Have you been able to use the training materials provided at the workshops in any of the following areas: (a) In teaching in university; (b) In research; (c) In in-service training; (d) In consulting?

Another major area of interest of the evaluation was the training capacity-building aspect of the workshops. Question 4 explicitly asked whether collaborations and partnerships were created after the workshop attended.

The response rate was not high: of the twenty-eight mailed questionnaires, only thirteen responses were received (about 46 percent). Nevertheless, the sample of respondents was demographically representative of those who attended the two courses.


Respondents provided relevant information worth taking into account. The following are some examples of participants' responses:

" .. ..1 largely adopted the text-book about environmental and nature resources economics in my courses .... "

"I use the Tietberg textbook for the environmental economics class ... "

IV The Actual Use of the Evaluation

A. The Instrumental Role

No surprises resulted from the study, according to the task managers. The evaluation findings largely confirmed the positive expectations task managers had, particularly regarding the format of the workshops and the materials provided.

".. The group and team work triggered discussion and facilitated the communication with the other participants ... "

" ... 1 adopted the PAM (policy analysis matrix) for preparing presentations .. . "

These favorable comments on the training pedagogy assured task managers that their collaborative approach in organizing and conducting the seminars was effective and thus to be continued in the future.

From this perspective, the tracer study had an instrumental use: that is, the evaluation findings resulted in a learning sequence whereby the information provided was analyzed and a decision was made. Results which flowed from that decision became inputs for the next decision. Within this learning process, knowledge comes first and gives rise to attitude change which, in turn, conditions action and contributes to problem-solving processes.11

For instance, the respondents' frequent remarks regarding the need for case studies from the African context prompted task managers to look for more specific examples to incorporate into the presentations.

" ... 1 really appreciated the case study methodology, even though they should be more and specifically related to African contexts. "


Respondents noted, in fact, that case studies are important not only for training purposes, but also for influencing senior policy-makers and for learning about events and activities in other countries.

As a result, four more case studies were included in the course material package for the Swaziland seminar, held in February 1998. This was of significant relevance since, as task managers report,13 the case study tool provided a means to establish common ground with participants. The analysis of issues familiar to participants helped focus the seminar more closely on their scientific interests.

B. Enlightenment

As previously mentioned, the tracer evaluation combined both workshops, whereas the end-of-seminar evaluations examined each activity individually and immediately after it took place. Through the data collection and analysis, the evaluators reconstructed the context and implementation of the TOT program, seeking to understand how it differed from its original blueprint. Furthermore, through task managers' interviews and the content analysis of the tracer questionnaires, the evaluators looked into the assumptions about the causal mechanisms underlying the program activities. They questioned the logical premises on which the program was built. The analysis brought to the forefront not only the strengths and weaknesses of program implementation, but also the conceptual flaws in the overall program design and opportunities for improvement.

The focus of the evaluation was on the training capacity-building component, which is an intermediate objective of the program. The findings highlight, in fact, that task managers' expectations of the multiplier effects of the workshops were only partially fulfilled. Collaborations and partnerships did not automatically result from the two workshops. The most frequent participant complaint was the difficulty of maintaining contacts because of high communication costs. Furthermore, participants pinpointed the lack of follow-up activities between the two seminars as the other major obstacle to collaboration and the consolidation of partnerships. From this perspective, the evaluation findings were enlightening in that they stressed the urgency of rethinking WBIEN's approach to enhancing training capacity in African universities.

Indeed, the issue of capacity building was raised during the Swaziland seminar held in February 1998. As task managers report,14 the evaluation conclusions gave rise to discussion among participants, who decided to take autonomous steps to develop common training activities and scientific research projects. Likewise, the task managers committed to assuring systematic follow-up by delivering updated training materials. Furthermore, they recognized the importance of favoring not only bilateral contacts between WBI and each partner institution, but also of promoting multilateral interactions among partner institutions and WBI in a broader network of scientific and organizational exchanges.

The evaluation, therefore, provided an incentive to take action for better performance and outcomes. Participants at the Swaziland seminar were exposed to the capacity-building issue and were asked specifically to conceive of initiatives to be undertaken to create effective collaboration. The task managers, on the other hand, acquired deeper insights into the actual functioning of the program, enabling them to better design and implement training activities.

Furthermore, the tracer evaluation was not introduced at the planning stage of the TOT program as a tool to guide action in progress. It took place after the first two seminars and before the last seminar, held in Swaziland. However, it proved to be a non-training means to further engage the participants of the two TOT seminars. By filling in the tracer study questionnaire, respondents were prompted to recall the workshops' content and organization, elaborate on their learning experience, and relate it to their actual work context. The evaluation directly involved the participants and, as Rossman and Rallis point out, "this process generates stories. Folks talk about the study and the routines involved in its conduct. This 'talk' becomes part of cultural knowledge that offers new and often satisfying interpretations of familiar events."15 The evaluation, therefore, turned out to be a follow-up activity within the TOT program itself, with a symbolic use, too.16

V Collaborating for Improving Evaluations and Programs

The links between knowledge generation and utilization are seldom clear and direct. Specific information cannot be identified as the basis for a particular decision or action. As with every learning process, evidence of utilization may require time to become apparent in decisions and actions.

Gaining agreement on the purpose, expectations, and potential use of an evaluation study is critical, since evaluation means different things to different people and since different stakeholders may have different interests. Therefore, close and collaborative relations among program designers, implementers, and evaluators very likely assure better quality information for better program design and implementation.


Endnotes

1. Formerly the Economic Development Institute (EDI).

2. They are Chris Gerard and Daniel Sellen.

3. Tracer studies in the training evaluation field are evaluations conducted between six months and one year after the training event took place. Their objective is to inquire about the medium-term impact of training.

4. See G. B. Rossman and S. F. Rallis, Learning in the Field, SAGE Publications, 1998.

5. See C. Weiss, "The Many Meanings of Research Utilization," Public Administration Review, 1979, pp. 426-31.

6. See C. Weiss, "The Many Meanings of Research Utilization," Public Administration Review, 1979, pp. 426-31.

7. See E. Vedung, Public Policy and Program Evaluation, Transaction Publishers, New Brunswick (USA) and London (UK), 1997.

8. See E. Vedung, op. cit.

9. The first two questions are closed-ended with a six-point rating scale and are further articulated in four sub-questions. The other two are yes-no questions and partially open-ended. The last one leaves space for participant comments and suggestions.

10. See D. L. Kirkpatrick, Evaluating Training Programs, Berrett-Koehler Publishers, San Francisco, 1994.

11. Underlying this notion of evaluation use is the Engineering Model of Evaluation Utilization, which is based on the so-called rational organization model. "Evaluation research sets no goals, neither does it discover or clarify problems. The proper role of evaluation is, given politically decided goals, to elicit, through the use of experimentation, the most efficient means to reach the indicated goals." See E. Vedung, Public Policy and Program Evaluation, Transaction Publishers, New Brunswick (US) and London (UK), 1997.

12. Based on interviews with Daniel Sellen and Chris Gerard.

13. Based on interviews with Daniel Sellen and Chris Gerard.

14. Based on interviews with Daniel Sellen and Chris Gerard.

15. See G. B. Rossman and S. F. Rallis, Learning in the Field, SAGE Publications, 1998.

16. Rossman and Rallis state that explanation and understanding are important human needs. They note that human beings tend to look for patterns and create narratives that make sense of the world and its phenomena. Research can address this need by offering symbolic explanations that groups of people can share. Qualitative research can sort and explain, making complex and ambiguous experiences or beliefs comprehensible and communicable to others. See op. cit., p. 14.