

Evaluating International Agricultural and Extension Projects: Problems, Challenges, and Strategies

Rama B. Radhakrishna
Department of Agricultural and Extension Education
The Pennsylvania State University

Outstanding paper presented at the 17th Annual Association for International Agricultural and Extension Education Conference, Baton Rouge, LA, April 4-7, 2001

    Abstract

This study examined evaluation issues related to international agricultural and extension education projects. Objectives of the study were to: 1) identify and describe problems and challenges to evaluating international agricultural and extension education projects, and 2) review evaluation models that are appropriate for international agricultural and extension education projects. Data sources included books, journal articles, conference proceedings, evaluation reports, government documents, and interviews with select faculty. Findings revealed many challenges to evaluating international agricultural and extension education projects, including: 1) lack of time, resulting in inadequate plans to evaluate projects; 2) extensive reliance on a single method of evaluation; 3) lack of readily available evaluation instruments; and 4) cultural and language problems. Four evaluation models appropriate for international agricultural and extension education were also identified and described. Based on the information obtained through the literature and faculty interviews, a new framework to evaluate international agricultural and extension education projects was proposed.

    Introduction

Increased emphasis is placed on project performance and outcomes by governments, donor agencies, and universities engaged in international agricultural development. A key component of any proposal related to international agricultural development is a section on evaluation criteria and methodology. Outcomes are relatively easy to measure and document for projects that have cost-benefit analysis as a major objective. However, outcomes are difficult to measure for projects that deal with educational programs (creating awareness, developing knowledge and skills, and building understanding of problems and/or issues). One of the challenges is defining appropriate impact indicators. Early identification of needed indicators through program objectives is not emphasized (Mustian, 1999; Radhakrishna, 1999). According to Bennett (1994), early identification of indicators through measurable program objectives will help strengthen the planning and evaluation of educational programs. Therefore, project investigators and evaluators should consider specifying the chain of program events and the kinds of evidence and/or appropriate indicators for each event in the chain (Verma & Burnett, 1999).

Several researchers and evaluators have addressed the problems and challenges of evaluating educational programs in both domestic and international settings (Bayles, 1998; Hoffstrom & McDaniel, 1996; Mustian, 1999; Radhakrishna, 1999; Richardson, 1998; Verma, 1998). Bayles (1998) examined the evaluation efforts of 147 agricultural projects funded by the United States Agency for International Development (USAID) between 1985 and 1995. Of these 147 projects, 68 were in Africa, 49 in Latin America, and 30 in Asia, involving a total budget of $2.3 billion. Findings from Bayles's study revealed that: 1) only 94 projects (64%) reported at least one evaluation, 2) 72 projects included an assessment of socio-economic impact and only one had an impact evaluation, and 3) proportionally more projects in Asia and Latin America were evaluated than in Africa. He found that the evaluation efforts of most projects with cost-benefit analysis were successful, and their investigators met USAID's evaluation criteria. However, a majority of projects that dealt with educational programs did not provide any evaluation information. Major problems in evaluating projects included frequent changes in management, unrealistic project designs or goals, and a lack of baseline data. Tilburg and Haan (1995) and Gow and Morss (1988) reported limited availability of project data as a major hindrance to project monitoring and evaluation in developing countries.
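As a quick arithmetic check on these figures (an illustrative sketch added here, not part of Bayles's study), the regional counts and the reported evaluation rate are internally consistent:

    # Consistency check of the Bayles (1998) figures summarized above.
    # Illustrative only; the numbers come directly from the text.
    projects_by_region = {"Africa": 68, "Latin America": 49, "Asia": 30}

    total = sum(projects_by_region.values())
    assert total == 147, "regional counts should sum to the 147 projects examined"

    evaluated = 94  # projects that reported at least one evaluation
    print(f"Evaluation rate: {evaluated / total:.0%}")  # prints "Evaluation rate: 64%"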

Chambers (1991) identified six biases that might limit the evaluation of agricultural and rural development projects in developing countries: 1) spatial: project staff and researchers focus only on urban centers and roadside projects; 2) project: evaluation plans show little interest in what happens to the rural poor and the disadvantaged; 3) person: evaluators tend to reach or get information from elite groups; 4) dry-season: visits concentrated in the dry season result in inadequate assessment of flooding or drought; 5) diplomatic: a combination of politeness, fear, embarrassment, and language problems frequently deters visitors from speaking to the poor and the underprivileged; and 6) professional: evaluators tend to reach wealthier, better-educated, and more progressive farmers.

Hoffstrom and McDaniel (1996) studied post-training evaluations of the Cochran Fellowship Program conducted since 1983 and found that the evaluation efforts were successful in obtaining valuable feedback. Mustian (1999) indicated that extension educators need to focus on program models in which outcomes are the basic function. Similarly, Richardson (1998) stated that extension educators have major roles to play in documenting project outcomes. Programs and models used in international settings should also address the expectation that agricultural and extension programs bring changes in individuals, families, and communities (Verma, 1998).

The Joint Committee on Standards for Educational Evaluation (1994) proposed standards that should be considered when evaluating educational efforts. The committee identified a total of 30 standards grouped into four categories: utility standards, feasibility standards, propriety standards, and accuracy standards.

    Purpose and Objectives

The overall purpose of the study was to describe problems, challenges, and strategies relative to evaluation efforts in international agricultural and extension education projects. This paper specifically sought to: 1) examine the problems and challenges in evaluating international agricultural and extension education projects, 2) identify and describe evaluation models that are appropriate for such projects, and 3) suggest strategies to overcome the problems and challenges to evaluating these projects.


    Methods and Data Sources

A review of the literature and the personal experience of the author were the data sources for the study. A number of books, journal articles, conference proceedings, and government documents were reviewed to identify problems and challenges in evaluating international agricultural and extension education programs/projects. Information from informal discussions and interviews with faculty was also documented as evidence. According to Seidman (1998), interviews permit explicit focus on the researcher's personal experience combined with those of the interviewees (p. 113). In addition, four evaluation models were identified, reviewed, and described.

    Results and/or Conclusions

    Problems and Challenges

The first objective was to identify problems and challenges to evaluating international agricultural and extension education programs and/or projects. The following problems and challenges were identified: 1) lack of time, resulting in inadequate plans to evaluate projects; 2) greater demand by donor agencies to determine the impact or outcomes of educational programs; 3) problems in defining appropriate impact indicators to show that project targets have been achieved; 4) scattered sources of evidence, and program impact versus other sources of change; 5) assessments that don't go beyond the KOSA (knowledge, opinion, skill, and aspiration) level; 6) extensive reliance on a single method of evaluation; 7) lack of readily available evaluation instruments; 8) cultural and language problems; 9) lack of time and limited skills in planning, implementing, and interpreting evaluation results; 10) limited feedback from donor agencies or sponsors; and 11) limited availability of project data.

    Evaluation Models

Based on an extensive review of the literature and the author's own experience in studying evaluation models, four models were identified and found useful for evaluating international agricultural and extension education programs: 1) Francine Jacobs' evaluation model, 2) Robert Stake's evaluation framework, 3) Rockwell and Bennett's Targeting Outcomes of Programs (TOP) model, and 4) Kirkpatrick's evaluation framework.

Jacobs' five-tier evaluation model includes pre-implementation, accountability, program clarification, progress toward objectives, and program impact (Figure 1). This model is very useful for guiding planning and evaluation efforts in projects dealing with families, communities, youth, and children.

Robert Stake's framework for evaluation emphasizes targets, strategies, and outcomes (Figure 2). Stake's approach suggests that one has to clarify intentions and then periodically look at what is actually happening. Such an approach helps to make corrections as a project goes through various stages of implementation. In addition, the model is particularly good at helping educators plan thoroughly, in a way that creates the kinds of information needed to design an evaluation.

The Bennett and Rockwell model is the most common and widely used evaluation model in extension (Figure 3). This model has two parts, program development and program performance, on a continuum. Each part has the same seven steps (inputs, activities, participation, reactions, KASA [knowledge, attitudes, skills, and aspirations] change, practice change, and SEE [social, economic, and environmental] outcomes). In the program development part, the emphasis is on what needs to be done to bring about changes in program participants, while in the program performance part, educators examine what actually happened as a result of the program.
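The two-part continuum can be made concrete with a short sketch (our own rendering of the hierarchy in Figure 3, not Bennett and Rockwell's notation; Python is used only for illustration):

    # The TOP model's seven levels, from bottom (Inputs) to top (SEE).
    TOP_LEVELS = [
        "Inputs",
        "Activities",
        "Participation",
        "Reactions",
        "KASA",       # knowledge, attitudes, skills, and aspirations
        "Practices",
        "SEE",        # social, economic, and environmental conditions/outcomes
    ]

    # Program development plans top-down: desired SEE conditions back to inputs.
    development = list(reversed(TOP_LEVELS))

    # Program performance assesses bottom-up: inputs through achieved SEE outcomes.
    performance = TOP_LEVELS

    print("Develop (top-down): ", " -> ".join(development))
    print("Perform (bottom-up):", " -> ".join(performance))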


Figure 1: Jacobs' Five-Tier Approach to Program Evaluation. Tiers: pre-implementation, accountability, program clarification, progress toward objectives, and program impact; each tier specifies a purpose, audience, tasks, and types of data collected/analyzed.

Figure 2: Stake's Approach to Program Evaluation (comparing intentions with actuals).

Targets. Intentions: What and who do we intend to improve? Actuals: Who and what did we actually improve?
Strategies. Intentions: What did we intend to do and why? Actuals: What did we actually do and why?
Outcomes. Intentions: What outcomes do we want to happen as a result of our efforts? Actuals: What outcomes actually resulted from our efforts, and how do we know?

Figure 3: Bennett and Rockwell's Targeting Outcomes of Programs (TOP) Model. Two parallel seven-level hierarchies, Program Development and Program Performance, each comprising (bottom to top): Input, Activities, Participation, Reactions, KASA, Practices, and SEE (conditions on the development side; outcomes on the performance side).


Finally, Kirkpatrick's evaluation model is most commonly used for educational programs that deal with learning (Figure 4). It clarifies the meaning of evaluation in simple and understandable terms and offers guidelines and suggestions on how to accomplish an evaluation. This model has four levels: reactions, learning, job behavior, and results.

Review of these four models indicates both commonalities and differences in approaches to the evaluation of international agricultural and extension education programs. In addition, the four models have strengths and weaknesses. For example, Jacobs' model asks for a variety of evaluation information from different types of stakeholders. The model helps educators think through the various kinds of information needed within the five-tier framework; however, it provides little help to educators in knowing what to do. In contrast, the Stake framework provides the further depth of discussion needed to identify the information/data listed in tiers one through four of Jacobs' model. Both the Bennett and Rockwell and the Kirkpatrick models focus more on educational outcomes and less on project impacts. The challenge for international agricultural and extension educators is to glean the critical and good points from these models and use what is best for their projects.

Therefore, a new evaluation framework was proposed (Figure 5). The proposed model asks three evaluation questions and outlines the data/information needed to answer them. The first question relates to the problem assessment and the corresponding goals and objectives designed to address the problem. The second question relates to the desired or ideal situation. Comparing questions one and two helps identify strategies to reduce or overcome the problem, in other words, what needs to be done: the plan of action needed to bring about change. Finally, question three relates to what actually happened, that is, did the project accomplish its goals and objectives?

Figure 4: Kirkpatrick's Training Evaluation Model (levels: reactions, learning, behavior change, and results).


Figure 5: Proposed Evaluation Model. Three guiding questions: "What is?" (current situation: What was the problem? Country/project goals and objectives); "What ought to be?" (desired: What did you do? Program implementation); and "What actually happened?" (Did you make a difference? Outcomes/impact and results).
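As a minimal sketch, the proposed framework's three questions map naturally onto a simple record structure; the field names and example values below are hypothetical, since the paper defines only the questions and their data needs, not an implementation:

    from dataclasses import dataclass, field

    @dataclass
    class ProjectEvaluation:
        # Q1 - What is? (current situation): What was the problem?
        problem: str
        goals: list[str]                  # goals and objectives addressing the problem

        # Q2 - What ought to be? (desired): What did you do?
        desired_situation: str
        implementation: str               # plan of action / program implementation

        # Q3 - What actually happened? Did you make a difference?
        outcomes: list[str] = field(default_factory=list)

    # Hypothetical usage: comparing Q1 with Q2 drives the plan; Q3 records results.
    evaluation = ProjectEvaluation(
        problem="low awareness of improved practices among farmers",
        goals=["raise awareness", "build extension staff skills"],
        desired_situation="trained staff delivering programs to farmers",
        implementation="train-the-trainer workshops with follow-up visits",
    )
    evaluation.outcomes.append("extension staff trained and delivering programs")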

    Summary of Faculty Interviews

Four faculty in the Department of Agricultural and Extension Education at Penn State were interviewed to provide perspectives on international agricultural and extension education projects they had conducted. These faculty had completed projects in countries of Africa, Asia, Latin America, and Eastern Europe. Interview questions focused on: 1) involvement in international projects, 2) the level of evaluation expertise they possessed, 3) the level of importance given to project evaluation, 4) submission of evaluation reports to sponsors and feedback, if any, from sponsors, 5) barriers/challenges they encountered in evaluating projects, and 6) lessons learned and strategies to improve evaluation efforts. Each interview lasted about 20 to 30 minutes. In addition, faculty provided additional information relative to their projects.

Three of the faculty said that they gave great emphasis to evaluation in their projects. However, one faculty member indicated that the emphasis depended on the sponsoring agency. Almost all faculty described their evaluation competency level as basic to intermediate. Use of evaluation methods in their projects ranged from none to extensive. Evaluation methods used included self-evaluations, needs assessments, self-designed evaluation surveys, pre- and post-training assessments, and focus groups. Three faculty indicated that they submitted evaluation reports to their sponsors; the feedback they received included an acknowledgement and some positive comments on completing the projects on time.

Faculty identified several barriers to conducting systematic evaluations of international agricultural and extension education projects: 1) lack of well-developed survey instruments; 2) lack of appropriate measures, or extensive use of single methods of evaluation; 3) language and interpreter problems; 4) lack of time to do a systematic evaluation and, in some instances, no desire on the part of the host country to do evaluations; 5) logistics; 6) program management; and 7) limited availability of skilled personnel to collect data.


Faculty offered several suggestions to overcome these barriers: planning and evaluation from the start, input from all participants (stakeholders), training host-country personnel in evaluation methods, a repository of surveys and questions, pilot testing, culture and language training, enough time from project initiation to project evaluation, and training of field-level staff in data collection.

The need for training or workshops focusing on the evaluation of international agricultural and extension education projects was emphasized by all faculty. They indicated that training should focus on: various evaluation models, evaluation planning, developing measurable objectives, designing surveys and questions, quantitative and qualitative evaluation methods, exposure to monitoring and evaluation software, and language and cultural sensitivity training. Faculty also suggested greater alignment of funding with the evaluation process, collaborative efforts, and feedback to improve programs.

    Educational Importance

The foregoing review suggests that there are many challenges to evaluation in international settings. It also suggests that changes are needed to address many of the challenges identified. The following recommendations and actions are offered based on the findings of the study.

First, an evaluation model matrix (Figure 6) for key program areas in international agricultural and extension education needs to be developed. Such a matrix would help identify appropriate evaluation models for given program areas; a sketch of the idea follows. Early identification and selection of models will help answer several evaluation questions, including the selection of evaluation methods, the type of information to be collected, and the identification of appropriate measures and indicators.
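A minimal sketch of what such a matrix could look like, pairing each model with the program areas for which the review above found it strongest (Figure 6 itself is not reproduced here, so these pairings are illustrative, drawn only from this paper's discussion):

    # Illustrative evaluation-model matrix based on the strengths discussed above.
    MODEL_MATRIX = {
        "family, community, and youth programs": "Jacobs' five-tier model",
        "extension programs with outcome focus": "Bennett & Rockwell's TOP model",
        "training and learning programs":        "Kirkpatrick's four-level model",
        "projects needing mid-course checks":    "Stake's intentions/actuals framework",
    }

    def suggest_model(program_area: str) -> str:
        """Look up a candidate evaluation model for a given program area."""
        return MODEL_MATRIX.get(program_area, "not listed; extend the matrix")

    print(suggest_model("training and learning programs"))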

Second, a need exists for the creation of an evaluation information/resource exchange where faculty and students can share resources on evaluation issues. Such a resource would save considerable time and money for international agricultural and extension education programs/projects.

Finally, a training program to build the evaluation capacity of international agricultural and extension educators should be offered. In addition, the potential for delivering training using distance technology should be explored.

    References

Bayles, A.E. (1998). Identifying the type and appropriateness of the evaluations of selected agriculturally-related science and technology based USAID projects conducted between 1985 and 1995. Unpublished doctoral dissertation, West Virginia University, Morgantown.

Bennett, C., & Rockwell, K. (1995). Targeting outcomes of programs (TOP): An integrated approach to planning and evaluation. http://deal.unl.edu/TOP/

Gow, D., & Morss, E. (1988). The notorious nine: Critical problems in project implementation. World Development, 16, 1399-1418.

Hoffstrom, K., & McDaniel, M. (1996). Impact evaluation of Cochran Fellowship Program in Latin America and Africa. United States Department of Agriculture, Foreign Agricultural Service, Washington, D.C.


Jacobs, F.H. (1988). The five-tiered approach to evaluation: Context and implementation. In H. Weiss & F. Jacobs (Eds.), Evaluating family programs (pp. 37-68). New York: Aldine de Gruyter.

Kirkpatrick, D.L. (1994). Evaluating training programs. San Francisco: Berrett-Koehler.

Mustian, D.R. (1999). Changing evaluation paradigms for agricultural and extension education in the 21st century. Paper presented at the Association for International Agricultural and Extension Education Conference, Port of Spain, Trinidad.

Radhakrishna, R.B. (1999). Program evaluation and accountability needs of extension professionals. Proceedings of the National Agricultural Education Research Meeting, 25, 564-573.

Richardson, J.G. (1998). Accountability dynamics in a state of change. Proceedings of the 32nd Conference, South African Society for Agricultural Extension, East London, South Africa.

Seidman, I.E. (1998). Interviewing as qualitative research: A guide for researchers in education and the social sciences (2nd ed.). New York: Teachers College Press.

Stake, R.E. (1986). Quieting reform: Social science and social action in an urban youth program. Urbana, IL: University of Illinois Press.

Tilburg, T., & Haan, J. (1995). Controlling development: Systems of monitoring and evaluation and management information for project planning in development countries. Tilburg, Netherlands: Tilburg University Press.

Verma, S., & Burnett, M. (1999). Addressing the attribution question in extension. Paper presented at the Annual Conference of the American Evaluation Association, Orlando, Florida.

Verma, S. (1998). The attribution question in extension: A discussion of program evaluation strategy. Proceedings of the 32nd Conference, South African Society for Agricultural Extension, East London, South Africa.