
The Psychology Research Handbook: A Guide for Graduate Students and Research Assistants

Program Evaluation: Concepts and Perspectives

Contributors: James W. Altschuld & James T. Austin
Editors: Frederick T. L. Leong & James T. Austin
Book Title: The Psychology Research Handbook: A Guide for Graduate Students and Research Assistants
Chapter Title: "Program Evaluation: Concepts and Perspectives"
Pub. Date: 2006
Publishing Company: SAGE Publications, Inc.

City: Thousand Oaks
Print ISBN: 9780761930228
Online ISBN: 9781412976626
DOI: http://dx.doi.org/10.4135/9781412976626.n5
Print pages: 75–91

©2006 SAGE Publications, Inc. All Rights Reserved.



Chapter 5: Program Evaluation: Concepts and Perspectives

James W. Altschuld and James T. Austin

At first it might seem strange or even somewhat misplaced that a handbook dealing with research in psychology would contain a chapter on evaluation. After all, evaluation is an activity that is carried out for a purpose quite different from that of research, that is, to guide practical decision making in applied settings. Add to this the fact that although there is scientific study of evaluation, many scholars and empirical researchers still tend to look with disdain at the topic. It is perceived to be poor research, that is, research often conducted with much less control than the norm in articles in prestigious, refereed academic journals. Beyond these issues is the further concern that evaluation is usually not based on theoretical understandings. For these reasons collectively, some social scientists may not ascribe much value to evaluation.

So we must begin by addressing the subtle, implicit questions in the first paragraph. Why should evaluation be in this book, and why is it of importance and value for practicing psychologists, psychology students, educational psychologists, and, for that matter, others in social science? We will provide several answers to these questions.

A first answer is that psychologists in a variety of environments (businesses, government organizations, educational institutions, etc.), in the course of their work, might be called on to evaluate an activity or a program whether or not they have been trained as evaluators or have in-depth comprehension of the field. Davidson (2002), for instance, shows how this could occur for industrial-organizational psychologists. They are there in the situation, and they have had methodological training, so they must be able to do it. Essentially, this is a reason by default. Second, evaluation has a number of unique methodological and substantive content areas that would be useful for psychologists to understand and be able to use in their work. A third answer is that some psychologists will undoubtedly lead or participate in evaluations as their main professional focus.

Therefore, the purposes of this chapter are to provide an overview of the nature of evaluation, to define it conceptually, to show its relationship to research, and to give a sense of what might have to be done to conduct an evaluation. Specifically, the chapter is organized as follows: Some background related to defining evaluation is described; defining evaluation, with a comparison of evaluation and research based on the definition, comes next; and implications of the definition for practice and generic types of evaluation are the latter parts of the text. Last, a hands-on exercise, which helps in pulling together ideas, will form the chapter conclusion.

Toward a Definition of Evaluation

In 2002, the lead author (Altschuld, 2002) wrote about a presentation that he had attended at an annual meeting of the American Evaluation Association (http://www.eval.org), a presentation that is relevant for defining evaluation. In a panel conducted by individuals who were nearing completion of graduate programs or who had just received their degrees, perspectives and views of the quality of the educational experience were being shared and examined. The panelists explained their backgrounds and then briefly discussed what they thought about their graduate education in terms of learning and preparation for the job market.

One participant, a recent graduate, elaborated about his extensive coursework in statistics and measurement and the skills he had gained in these areas. Later, when questions from the audience were appropriate, the coauthor asked him what in this set of courses would constitute training for the practice of evaluation. Stated alternatively, what really is evaluation, what does it mean to be an evaluator, what does an evaluator do, how should one train for work as an evaluator, what should an evaluator know, and what specific skills should an evaluator have? After all, would it not be better to think of what he had taken as simply methodology and just call it that? Further, he was asked if he had any courses that could be somehow thought of or construed as specifically emphasizing evaluation concepts, whatever those concepts might be, even if they were distantly related to evaluation.

He pondered for a moment and, to his credit, replied that he had never considered this line of inquiry and that, in reality, his background could not be viewed as actually dealing with evaluation. There was nothing specific in it related to, or called, evaluation.

The point here is not to discount being well grounded in methodology or to deny that evaluators need sound methodological understandings in order to be successful in the field. That is neither an issue nor debatable. Indeed, our advisees are required to take many quantitative and qualitative methods courses. Rather, methodology, although necessary and critically important for evaluation, constitutes, by itself, an insufficient condition for being adequately trained in the field and understanding its substance.

Evidence to support the above assertion can be found in international studies of evaluation training conducted by Altschuld, Engle, Cullen, Kim, and Macce (1994) and by Engle and Altschuld (2003/2004). Also, the writing of individuals concerned with the training of evaluators is pertinent here. For example, see Scriven (1996), Altschuld (1995, 2002), Mertens (1994), and the recent work of King, Stevahn, Ghere, and Minnema (2001) in regard to the critical skill domains required to be an evaluator.

In 1994, Altschuld and his colleagues identified 47 university-based evaluation training programs, and in a repeat of the same study in 2001, Engle and Altschuld found 29 programs. Although the number of programs is smaller, it is also apparent from studying trends across the two time periods that there has been a slow, steady, and noticeable emergence of a unique set of courses with specialized evaluation content. That content is quite distinct from research and/or measurement methodology. Examples include evaluation theory and models, applied evaluation design for field settings, evaluation and program planning, evaluation and its relationship to the formation of policy, cost-benefit analysis, readings and research in evaluation, and others (Engle, Altschuld, & Kim, 2005). Nearly all of these topics are not covered in traditional methodology offerings or other courses in psychology and other social science disciplines, or they are covered only tangentially.


Several writers have discussed the skill sets needed for evaluation work. Mertens (1994) presented a taxonomy derived rationally from literature reviews and discussions with colleagues. It grouped knowledge and skills into three areas: those from research methodology, those borrowed from other areas, and those unique to specific disciplines. The work of King and her associates (King et al., 2001) is particularly relevant because of its combination of rational and empirical methods. King et al. (2001) engaged 31 individuals representing a diverse set of backgrounds and contexts of practice in rating the relative importance of 69 domains or items of evaluation capability. Using what they describe as a multiattribute consensus-building process, they reported that they achieved significant agreement on 54 of the 69 (78%) competencies. King et al. (2001) proposed four skill domains in which evaluators must be successful, not only in doing evaluations but in getting the results of evaluations accepted and used in making decisions. Those domains are (a) systematic inquiry, (b) competent evaluation practice, (c) general skills for evaluation practice, and (d) evaluation professionalism. Stevahn, King, Minnema, and Ghere (2005) followed up with additional research and refinement. Finally, a project by the Canadian Evaluation Society (CES) that focused on advocacy and professional development also addressed capacity building; the literature review by McGuire (2002) for that project covered the benefits, outputs, processes, and knowledge elements of evaluation.

Given the nature of emerging curricula, the evolving dimensions of evaluative thinking, and the skill domains, what is evaluation? In what ways does it differ from research, and in what ways is it similar?

Defining Evaluation and Some Useful Distinctions

From many sources (Scriven, 1991; Stufflebeam, 1971, 1980, 2002; Worthen, Sanders, & Fitzpatrick, 1997), there are common features to the definition of evaluation. Synthesizing across these authors, we could define or think of evaluation as a process or set of procedures for providing information for judging decision alternatives or for determining the merit and worth of a process, product, or program.


Imbedded in this collated, conceptual definition are subtleties that help us to distinguish between research and evaluation, as well as to explain the practice of evaluation. In Table 5.1, evaluation and research are briefly compared in terms of several dimensions emanating from the definition. When examining the table, keep in mind that the comparisons are somewhat exaggerated to aid the discussion and your understanding. Probably a better way to think about the linkage between evaluation and research is to use Venn diagrams with areas of overlap as well as areas of uniqueness.

In Table 5.1, evaluation is presented as being conducted in field situations to guide and facilitate the making of decisions. This immediately implies that evaluators must identify who the decision makers and key stakeholding groups are, the degree of their influence on the entity being evaluated, and how their value positions and perspectives will affect the evaluation. This point is so important that audience identification for results, and involvement in the process of evaluation, is highlighted as the first standard in the Program Evaluation Standards (American Evaluation Association, 1995). Evaluators essentially work at the behest of decision makers, and the needs of multiple decision-making audiences have to be factored into all evaluations. Evaluators have to carefully maintain their objectivity and avoid being co-opted (handling their results to favor a political or vested position), given the social and political environments in which they exist.

Researchers, on the other hand, in the best of all possible worlds, are not working on someone else's priorities but more in terms of their own curiosity about phenomena. Their wonderment, their questioning, their desire to know and understand, and even their awe are what drive the enterprise, not what someone else wants. This is an ideal point of view, but at the same time, we recognize that politics and values can influence research. Political forces frequently push certain substantive areas or stress what methodologies might be acceptable at a point in time. Despite that, the ideal is one that we should strive for and cherish with regard to research.

Looking at the comparisons in the second row of the table, the relationship of evaluation to theory is not as strong or prominent as it is in research. Although there are theory-driven evaluations and the idea of theory in a variety of forms and connotations does affect evaluation (see Chen, 1990), for the most part evaluations are directed toward practical concerns of decision makers and stakeholders, not toward theory. "Proving" (demonstrating) the validity of a theory or proposition versus depicting how well something works in a field setting (its impact or effect) and then describing what its implementation entailed is a critical distinction between research and evaluation. Evaluation looks at program performance and how it might be improved in subsequent use and has less of a theory orientation than does research.

Table 5.1 The Evaluation and Research Crafts Compared on Several Dimensions

Driving force for the endeavor
Evaluation craft: Interests of decision makers and other stakeholders; value/political positions of multiple groups/individuals come into play
Research craft: Personal interest and curiosity of the researcher

Purposes
Evaluation craft: Facilitate decision making; show how well something did or did not work; improve real-world practice
Research craft: Understand phenomena; develop theory (ultimately) or "prove" a proposition; add to the body of knowledge

Degree of autonomy
Evaluation craft: Variable to possibly very limited, with the evaluator always directly in the midst of the decision-making milieu
Research craft: Ideally, autonomy should be very high (the production of knowledge should be unfettered)

Generalizability
Evaluation craft: Often limited to the specific local environment
Research craft: The greater the generalizability (over time, location, and situation), the better

Methodological stance
Evaluation craft: Tends to be multimethod or mixed method in approach
Research craft: Tends to be less multimethod in orientation

SOURCES: Adapted from Worthen and Sanders (1973, 1987); Worthen, Sanders, and Fitzpatrick (1997); Altschuld (2002).


It is fairly obvious that the evaluator, due to being imbedded in the complex and political realities of schools and organizations, may have limited ability to be an independent and autonomous actor (or may sense that that is the case; see the third row of the table). Furthermore, the evaluator may have to assume a political role in his or her work (Fitzpatrick, 1989). On the other hand, it is absolutely imperative for the researcher to be autonomous. Do we, in our kind of society, want drug or tobacco companies to be able to control what the researcher does, what methods the researcher uses, and what, in turn, can be said publicly about the findings and results of the research? This comparison is purposely exaggerated, to a degree. On numerous occasions, evaluators will experience relatively minor constraints on the conduct of their business, and in some instances researchers may feel intense political heat and pressure.

Entries in the last two rows of the table further aid in seeing distinctions. Evaluations are many times localized, and thus the results do not generalize. The interests of the decision makers, the evaluators, or both do not relate to generalizability of results, and it often is not foremost in the minds of the evaluators. The stress in evaluation is more on the localized findings and how they can effect change and improvement within a specific and narrow context.

Conversely, researchers in the social sciences are more attentive to the generalizability concept. They have a need to publish in journals for reasons of advancement and salary, as well as professionalism. Part of the scrutiny of those journals will be external validity, in the Campbell and Stanley meaning of the phrase. External validity, although not unimportant to the evaluator, will be of more prominence to the researcher. As far as methods go (see the fifth row), the demands of evaluation will tend to require the utilization of multiple or mixed methods applications. In fact, numerous writings have dealt with how mixed methods can be incorporated into evaluation and needs assessment strategies (Altschuld & Witkin, 2000; Caracelli, 1989; Greene & Caracelli, 1997; Mark & Shotland, 1987; Tashakkori & Teddlie, 2002; Witkin & Altschuld, 1995).

On balance, based on these brief comparisons, it may seem that research would be the preferred activity. Just predicated on autonomy and the driving force underlying the endeavor, wouldn't people prefer to be researchers rather than evaluators?


But evaluation has its advantages as well. Evaluation is closer to decision making, the formulation of policy, and program or project improvement. Evaluators may more directly and rapidly see the impact of their work on organizations and, in turn, on those who are the recipients of the services organizations offer. In contrast, the gratification from research is most likely long-term in realization. Finally, it should not be overlooked that evaluators conduct research on their field or incorporate research into their evaluation activities, and many publish their findings in very reputable evaluation journals with rather high rejection rates (e.g., Evaluation Review, Evaluation and Program Planning, Educational Evaluation and Policy Analysis, and the American Journal of Evaluation).

Implications of the Definition for Practice

How do evaluators go about their work? What may seem to be a simple process can become complex as we probe beneath the surface of our conceptual definition. Using that definition as our anchor, what must evaluators consider as they plan and conduct evaluations? Table 5.2 contains a sampling of implications drawn from an analysis of the definition. The sampling, although incomplete, does reveal why some evaluations are not easy to implement (and the subtle, almost hidden decisions that have to be made in virtually any evaluation).

Keep in mind, and we must emphasize, that the table is just a sampling of what the evaluator must think about when working with the highly varied consumers of his or her work. Furthermore, most evaluators do not have the luxury of doing evaluations in just one area (e.g., math education programs, training and development, social service delivery).

Professional evaluators tend to be eclectic, having to apply their knowledge of the practice of evaluation and their experience to evaluate programs in many settings and specialized subject matter fields. For example, in our own backgrounds, we have conducted evaluations in science education, reading, corporate training and development, team-building projects, and so forth.

A couple of the row entries are not dealt with in any great detail. As one example, the nature of what is being evaluated (row 3) can be extremely variable and may necessitate working with a subject matter expert. Consider the case of evaluating a new reading program, or evaluating a health care delivery system from the perspective of a person who is seeking assistance for an elderly relative and trying to navigate the maze of our current health care system. Another example is multiple methods, which is a very substantial topic. Both of these rows are important and could be explored in depth. Treating them to any appreciable degree far exceeds the scope of this chapter.

Most of the rest of Table 5.2 is self-explanatory, so we will point out only a few issues. In the first row, merit and worth decisions are noted. Merit refers to the quality of the idea underlying a new program or project. When we evaluate merit, we are assessing (usually via expert review and judgment) whether or not there is intrinsic value in an idea or proposed entity. Worth, on the other hand, refers to the outcomes achieved by a project. Are they substantial and of value? Cost-benefit analysis and return-on-investment methods can be used in the investigation of this aspect of evaluation decision making.
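To make the worth side of that distinction concrete, here is a minimal sketch of the basic cost-benefit and return-on-investment arithmetic. It is our illustration, not part of the chapter or any cited model, and the program costs and monetized benefits are invented for the example.

```python
# Hypothetical worth assessment: basic cost-benefit and ROI arithmetic.
# The dollar figures below are invented for illustration only.

def net_benefit(benefits: float, costs: float) -> float:
    """Cost-benefit view of worth: value of outcomes minus what they cost."""
    return benefits - costs

def roi(benefits: float, costs: float) -> float:
    """Return on investment, expressed as a proportion of costs."""
    return (benefits - costs) / costs

if __name__ == "__main__":
    program_costs = 120_000.0       # staff, materials, facilities
    monetized_benefits = 150_000.0  # e.g., estimated savings from reduced referrals
    print(f"Net benefit: ${net_benefit(monetized_benefits, program_costs):,.0f}")  # $30,000
    print(f"ROI: {roi(monetized_benefits, program_costs):.0%}")                    # 25%
```

The hard evaluative work, of course, is not the division but deciding which outcomes to monetize and how; the arithmetic only summarizes those prior judgments.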

In the second row, an essential and early part of any evaluation is ascertaining who the decision makers are, their relative order in terms of importance, and whether multiple levels of decision makers with differing information needs are apparent. If the latter is the case and if there are disparate requirements for information, the skills of the evaluator may be sorely taxed, and tight evaluation budgets might be stretched to the limit. In educational situations, the evaluator may be asked to provide data for high-level decisions and the formation of policy and, at the same time, be pressed for detailed and very specific suggestions for improvement by curriculum developers. Test data, which will satisfy the first level, seldom provides much of use for the second.


In the fourth row, it is important to determine the time phase of the entity being evaluated. This point has routinely been noted by numerous writers and in their evaluation models. The well-known CIPP model, proposed by Stufflebeam in 1971, is predicated on four time phases going from the start of a project to its completion. In context evaluation, the need for the endeavor is assessed (and needs assessment now tends to be recognized as an important aspect of evaluation). Input evaluation is concerned with the adequacy of resources and strategies for resolving the need; process evaluation looks at how the solution is being implemented, what is working, and what must be modified to make the solution work better; and finally, the purpose of product evaluation is to find out what outcomes were achieved and the level to which they were achieved. Similar notions of time can be found in the work of Kirkpatrick (1959, 1998) and Holton, Bates, and Naquin (2000) in the development and evaluation of training programs and Altschuld and Kumar (1995, 2002) in the evaluation of science and technology education programs.

The last row underscores the fact that evaluation, at its core, is oriented toward the utilization of findings. Without utilization, evaluation would be an empty activity, devoid of value and meaning (an activity of little merit). But what is utilization, and how does it come about in complex organizations with internal and external power bases and the interplay of political forces and expediencies? These are not easy matters to explain.

Leviton and Hughes (1981) observed that there are three types of utilization: instrumental, conceptual, and persuasive. Instrumental utilization occurs as a direct result of the findings and recommendations of an evaluation. Ideally, this is what evaluators hope for and expect to occur. Unfortunately, for the most part in evaluation thinking, conceptual rather than instrumental utilization is what takes place. Conceptual utilization occurs when the ideas in the evaluation, the concepts imbedded in it, and the findings come together to begin to affect the thinking and emerging understandings of decision makers. Slowly, the evaluation is inculcated into the deliberation process and influences decisions that will occur much later. The effect of the evaluation is there, but it is interacting in the process of change in almost indiscernible ways. This makes it difficult to trace the impact or effect of evaluation. Persuasive utilization is utilization intended to foster a political purpose, even one that is predisposed before the evaluation is undertaken. Obviously, this last type of utilization is to be avoided if possible.

In 1993, Altschuld, Yoon, and Cullen studied the direct and conceptual utilization of needs assessment (NA) results by administrators who had participated in the assessments. They found that conceptual utilization was indeed more prominent than instrumental utilization among these administrators, even though they had been personally involved in the NA process. This study would tend to confirm that conceptual utilization is more the norm than instrumental utilization.

More must be said about persuasive use. Certainly, the evaluator has to be aware of the potential for this negative situation and do his or her best to avoid it. If one is an external evaluator, a contract can always be turned down. If one is internal, then an advisory group for the evaluation can ameliorate the political pressure to some degree.

But is politics always bad? Political concerns can, and in some circumstances do, play an important positive role in evaluation. Political pressure from opposing points of view can lead to balanced support for an evaluation of a controversial project or program. Legislators and others might simply want to know what is taking place and how well a program is working. They may desire to implement good public policy that helps rather than hinders.

Further, due to politics, the financial underwriting of the evaluation might be greater than otherwise would be the case, making a better (not financially constrained) evaluation possible. The controversy can help to illuminate issues to be emphasized in the evaluation and can even lead to more enthusiasm and ultimately “buy-in” for the implementation of results. So the evaluator must monitor the political factors (positives and negatives) of any evaluation and be prepared to deal with them as they arise.


Generic Types of Evaluation

Reviewing the prior sections of this chapter, we find that different types of evaluation have been woven into or alluded to in the discussion. Other classifications of evaluation are possible; the one given below has been used in the teaching of evaluation classes at The Ohio State University. For another example of a classification scheme, see the Worthen, Sanders, and Fitzpatrick (1997) text cited earlier.

In Table 5.3, an informal listing of types of evaluation is offered. It is intended to convey the range of activities often conducted under the rubric or title of evaluation. With the entries in the table, a more comprehensive picture (in conjunction with the definition of evaluation, its comparison to research, and key aspects of what evaluators do) should now be coming into view.


The entries in the table are, for the most part, self-explanatory, with perhaps the exception of needs assessment (NA). Need is defined as the measurable discrepancy between the “what-is” status and the “what-should-be” status of an entity. Generally, the what-should-be status comes from collective value judgment. Examples would be the following: What should be the outcomes of a high school education? What should be the wellness level of the U.S. population? What constitutes reasonable and cost-feasible access to drug therapies? And so forth. Current or what-is status can be ascertained from a variety of methods including records, surveys, observations, databases, and so forth. Numerous methodological approaches are available for assessing needs, and over the last 20 years many texts have been written describing the nature of needs and procedures for assessing them (Altschuld & Witkin, 2000; McKillip, 1998; Witkin & Altschuld, 1995).
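The discrepancy definition lends itself to simple arithmetic. Below is a minimal sketch, our own illustration rather than anything from the texts cited above, that computes need as the gap between what-should-be and what-is for a few hypothetical indicators and then ranks the gaps as a crude first pass at the prioritization discussed next.

```python
# Hypothetical needs-assessment arithmetic: need = "what should be" - "what is".
# Indicator names and target/current values are invented for illustration.

indicators = {
    # indicator: (what_should_be, what_is), measured on comparable scales
    "high school graduation rate (%)": (95.0, 82.0),
    "students reading at grade level (%)": (90.0, 71.0),
    "annual wellness screenings (%)": (80.0, 77.0),
}

# Compute the measurable discrepancy for each indicator.
needs = {name: target - current for name, (target, current) in indicators.items()}

# Rank the discrepancies; in practice, priorities also reflect collective
# value judgments, causal analysis, and the feasibility of solutions.
for name, gap in sorted(needs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: gap of {gap:.1f} points")
```

The ranking step is only a starting point; as the next paragraph notes, high-priority discrepancies still have to be translated into organizational action plans.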

After the two states have been determined, the process continues with prioritizing the resulting needs or discrepancies and transforming the high-priority needs into organizational action plans to resolve or reduce the problem underlying or causing the discrepancy. In other words, decisions to design and implement programs should be based on clearly delineated needs. From some perspectives, needs assessment might appear to be more of a planning activity, so why was it placed in Table 5.3 as a type of evaluation work?

Several reasons support such placement. A well-implemented NA illuminates major problems, what key variables relate to them or are causing them, and what likely solutions might be. It is the essential condition for designing and implementing good programs, and by virtue of its analytic properties, it should enhance the overall evaluation of solution strategies that are developed and used. Therefore, one of the reasons is that evaluation and NA are inextricably intertwined; they are two ends of one continuum.

A second reason is basically an offshoot of the first. A small (almost informal) national group of needs assessors felt that they could not survive as a small group; therefore, in the late 1980s they began to cast about for a larger society with which to affiliate. They chose the American Evaluation Association (AEA), based on the recognition that evaluation and NA were highly related. Now, as a partial outgrowth of that event, most evaluators see NA as important to the practice of evaluation. Other entries in Table 5.3 could have been discussed in a similar fashion. Needs assessment was singled out as an exemplar to demonstrate the thought process.

Toward the Future

As we near the end of this exposition, consider for a moment evaluation as a profession. As a discipline, evaluation emerged in psychology and in education during the 1930s. Ralph Tyler's Eight-Year Study and notions of curriculum evaluation were pioneer landmarks in education, and Rossi, Lipsey, and Freeman (2004) point to field studies of evaluation conducted by Kurt Lewin and other social psychologists. Nonetheless, these were isolated instances and were not formalized in terms of department or area status, curriculum, journals, and professional organizations. Rossi et al. (2004) went on to define a boom period in evaluation following World War II and pointed out its emergence as a specialized area during the 1970s. Their indicators of emergence included books, journals, and associations. The books that have shaped the discipline are numerous. They include pioneering classics (Suchman, 1967), methodological contributions (Campbell & Stanley, 1966; Cook & Campbell, 1979), and textbooks across multiple editions (Fitzpatrick, Sanders, & Worthen, 2003; Rossi et al., 2004). Prominent journals devoted to evaluation range from the first, Evaluation Review (since 1976), to the American Journal of Evaluation (since 1978) and Evaluation and Program Planning (since 1980). Two key professional associations in the United States are the American Evaluation Association (http://www.eval.org), created in 1986 through a merger between the Evaluation Network (ENet) and the Evaluation Research Society (ERS), and the American Educational Research Association Evaluation Division (http://aera.net). Many other professional organizations exist across nations, including the Canadian Evaluation Society (http://www.evaluationcanada.ca/), the European Evaluation Society (http://www.europeanevaluation.org), the U.K. Evaluation Society (http://www.evaluation.uk), and the Australasian Evaluation Society (http://www.parklane.com.au/aes). Instances of these features are depicted in Table 5.4.


Why focus on the future of evaluation? We might reply, “For one major reason.” The demand for evaluation is increasing rapidly, even as university-based training programs are tending to disappear (Engle, Altschuld, & Kim, in progress). Thus, a priority need is to resuscitate graduate or other types of training in this important discipline. Perhaps the multidisciplinary nature of program evaluation contributes to training issues, because evaluation training is housed in varied colleges and departments (education, psychology, social work, public policy, and management). Clearly, affiliation between psychologists and evaluators could be valuable for both groups. Davidson (2002), in the aforementioned article, indicated that there is great interest in the evaluation community in industrial-organizational psychology issues and that many opportunities exist for forms of involvement. Edwards, Scott, and Raju (2003) edited a wide-ranging application of evaluation to human resource management. Consider further that the major professional society for evaluators, the American Evaluation Association, runs a major international discussion list, EVALTALK (http://bama.ua.edu/archives/evaltalk.html). An informal group that coalesced at a recent Society for Industrial and Organizational Psychology conference, the Strategic Evaluation Network, also runs an electronic mailing list specifically for people who do evaluation in organizational settings (see http://evaluation.wmich.edu/archives). The American Evaluation Association has a Business and Industry (B & I) Topical Interest Group on evaluation (http://www.evaluationsolutions.com/aea-bi-tig), which welcomes presentation proposals (and audience members!) for the annual conference each November (details at http://eval.org).

Other features of the field include knowledge dissemination and globalization. The Evaluation Center at Western Michigan University is a professional clearinghouse for information, including bibliographies and checklists. The checklist project provides a number of useful checklists for various evaluation functions (http://www.wmich.edu/evalctr/checklists/). The checklists are organized under categories: evaluation management, evaluation models, evaluation values and criteria, metaevaluation, and other. There is even a checklist for developing checklists by Stufflebeam (http://www.wmich.edu/evalctr/checklists/cdc.htm). Globalization is clearly evident at the Canadian Evaluation Society's Web site (http://consultation.evaluationcanada.ca/resources.htm). There are links to evaluation standards, for example from associations (Joint Committee on Standards for Educational Evaluation, 1994) and from nations (Germany, Canada, United States).


Conclusion

In this postscript, we attempt to tie together the strands of evaluation presented in this chapter. First, the placement of evaluation in a psychology research handbook makes good sense for several reasons. Readers may be called on to consume or produce evaluations. Respectively, these two terms mean reading and reacting to evaluation reports or designing, conducting, and reporting a program evaluation. An individual trained in social psychology, for instance, might be asked to conduct a needs assessment in order to design and evaluate a community-based collaborative conflict resolution program. On the other hand, an industrial-organizational psychologist might be charged with doing a training needs assessment in order to design a leadership development program for middle and top managers.

Second, the nature of evaluation can be viewed as dynamic, with exciting emergent dimensions and twists. Third, using the definition of evaluation as a “process or set of procedures for providing information for judging decision alternatives or for determining the merit and worth of a process, product, or program,” we presented useful distinctions as to several types of evaluation (Table 5.3). Fourth, we discussed the structure of evaluation as a profession in a limited way to provide some context. Two key sets of professional standards are the five AEA “Guiding Principles” (1995) and the Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994). Fifth and finally, we have provided three brainstorming exercises to tie your new understandings together into a scaffolded mental model. As you practice evaluation, read evaluation research, and talk to evaluation practitioners, this scaffolding will eventually disappear, and you will be able to use the mental model for practice and research.


Exercises

Pulling Your Understandings Together

In evaluation workshops conducted for the National Association of School Psychologists, Altschuld and Ronning (2000) developed a brainstorming activity that fits well with the content of this chapter. To do this exercise, pretend that you are in the specialized area of school psychology, or modify the activity in accord with the psychological work context that has most meaning for you. The activity consists of three brainstorming or idea-generating parts and then examples of responses to them. Do not peek ahead, but rather try your hand at the activity before glancing at our ideas about possible responses. We believe that the activity constitutes an interesting, different, and good way to summarize the concepts contained in this chapter.

Spend a minute or two thinking about the possibilities for school psychologists to become involved in program evaluation (or substitute for school psychologists your particular interest in psychology). Additional material on evaluation exercises and activities may be found in Mertens (1989).

For example, what decisions are various stakeholders in your district facing? Are there specific paperwork demands, time limits on referrals, or accountability reports? Do you work under local guidelines and restrictions, the values of which are questionable? Are your methods, results, and roles being scrutinized or changed erratically?

First, fill in the blanks from your experiences. These responses may vary widely, as each person works in, or will work in, different settings under different rules and regulations and with different administrations and colleagues.

Step 1: List up to 10 reasons why you might be called upon or wish to conduct an evaluation. Reflect back on what was presented in the chapter, what evaluators do, and the various types of evaluations. There are no wrong or right answers. Rather, the purpose here is to encourage you to think about all of the possibilities for conducting an evaluation. Later, when you examine the reasons we supplied, please note that they are not definitive answers but come from a brainstorming activity very similar to what we asked you to do.


Step 2: In the spaces provided below, identify as many programs as you can in which evaluations could be conducted. What programs are you or your colleagues currently involved with that may be appropriate for using evaluation techniques to determine their worth or merit? For example, school psychologists in private practice may link closely with community agencies and schools to provide mental health services. Is this more effective than having the school psychologist assigned to specific buildings provide this service? Or, you may be responsible for grief counseling of children in cooperation with the local hospitals. List program titles in the appropriate space, or use a short phrase to describe the program. List as many as you can.

Program Titles or Descriptive Phrases


Step 3: Speculate about the types of activities evaluators might do in the course of an evaluation and briefly describe them below.

Sampling of Responses to Brainstorming Activity

Step 1. List reasons why you might be called upon or wish to conduct an evaluation.

Step 2: What programs are you or your colleagues currently involved with that may be appropriate for using evaluation techniques to determine their worth or merit?

• 1. Community involvement programs
• 2. Districtwide proficiency interventions
• 3. Reading evaluation (primary)
• 4. Provision of school psychological services
• 5. Therapy outcomes in conjunction with other providers (e.g., AA)
• 6. Career planning or experience program
• 7. Reporting test results to parents, etc.
• 8. Parenting programs
• 9. Special education programs
• 10. Multihandicapped programs
• 11. Community outreach programs
• 12. Efficacy of multiage classrooms
• 13. School-based provision of health benefits
• 14. Head Start
• 15. All-day, everyday kindergarten
• 16. Preschool programs
• 17. Districtwide testing program
• 18. Statewide testing program
• 19. Effectiveness of research efforts
• 20. Effectiveness of local efforts to serve

Step 3: What types of activities do program evaluators do in the course of an evaluation?


Recommended Readings

The field of evaluation includes educational researchers and psychological researchers. Among the resources for further understanding are the following: Witkin and Altschuld (1995), the Encyclopedia of Evaluation (Mathison, 2004), and Rossi et al. (2004). Bickman and Rog (1998) provide an edited collection of applied research methods relevant to evaluation; Chelimsky and Shadish (1997) provide an edited collection of forward-looking chapters on evaluation topics. Caron (1993) presents a Canadian perspective on the body of knowledge for evaluation practice; Mark, Henry, and Julnes (1999) articulated a framework for evaluation practice. The 1987 program evaluation tool kit from Sage is another valuable resource. The evaluation tool kit at the Web site of the W. K. Kellogg Foundation is exceptionally valuable as a distance resource (it can be accessed at http://www.wkkf.org/).

References

Altschuld, J. W. (1995). Developing an evaluation program: Challenges in the teaching of evaluation. Evaluation and Program Planning, 18(3), 259–265. http://dx.doi.org/10.1016/S0149-7189%2895%2900014-3

Altschuld, J. W. (2002). The preparation of professional evaluators: Past tense and future perfect. Japanese Journal of Evaluation Studies, 2(1), 1–9.

Altschuld, J. W., Engle, M., Cullen, C., Kim, I., & Macce, B. R. (1994). The 1994 directory of evaluation training programs. New Directions for Program Evaluation, 62, 71–94. http://dx.doi.org/10.1002/ev.1678

Altschuld, J. W., & Kumar, D. D. (1995). Program evaluation in science education: The model perspective. New Directions for Program Evaluation, 65, 5–17. http://dx.doi.org/10.1002/ev.1699


Altschuld, J. W., & Kumar, D. D. (2002). Evaluation of science and technology education at the dawn of a new millennium. New York: Kluwer/Plenum. http://dx.doi.org/10.1007/0-306-47560-X

Altschuld, J. W., & Ronning, M. (2000). Program evaluation: Planning a program evaluation. Invited workshop presented at the 32nd annual convention of the National Association of School Psychologists, New Orleans, LA.

Altschuld, J. W., & Witkin, B. R. (2000). From needs assessment to action: Transforming needs into solution strategies. Thousand Oaks, CA: Sage.

Altschuld, J. W., Yoon, J. S., & Cullen, C. (1993). The utilization of needs assessment results. Evaluation and Program Planning, 16, 279–285. http://dx.doi.org/10.1016/0149-7189%2893%2990040-F

American Evaluation Association. (1995). Guiding principles for evaluators. New Directions for Program Evaluation, 66, 19–26.

Bickman, L., & Rog, D. (Eds.). (1998). Handbook of applied social research. Thousand Oaks, CA: Sage.

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Caracelli, V. J. (1989). Structured conceptualization: A framework for interpreting evaluation results. Evaluation and Program Planning, 12, 45–52. http://dx.doi.org/10.1016/0149-7189%2889%2990021-9

Caron, D. J. (1993). Knowledge required to perform the duties of an evaluator. Canadian Journal of Program Evaluation, 8, 59–79.

Chelimsky, E., & Shadish, W. R. (Eds.). (1997). Evaluation for the 21st century: A handbook. Thousand Oaks, CA: Sage.

Chen, H.-T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.


Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

Davidson, E. J. (2002, October). The discipline of evaluation: A helicopter tour for I/O psychologists. Industrial/Organizational Psychologist, 40(2), 31–35.

Edwards, J. E., Scott, J. C., & Raju, N. S. (Eds.). (2003). The human resources program evaluation handbook. Thousand Oaks, CA: Sage. http://dx.doi.org/10.4135/9781412986199

Engle, M., & Altschuld, J. W. (2003/2004). An update on university-based training. Evaluation Exchange, 9(4), 13.

Engle, M., Altschuld, J. W., & Kim, Y. C. (2005). Emerging dimensions of university-based evaluator preparation programs. Manuscript in progress. Corvallis: Oregon State University.

Fitzpatrick, J. L. (1989). The politics of evaluation: Who is the audience? Evaluation Review, 13(6), 563–578. http://dx.doi.org/10.1177/0193841X8901300601

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2003). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Allyn & Bacon.

Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. San Francisco: Jossey-Bass.

Holton, E. F., III, Bates, R. A., & Naquin, S. S. (2000). Large-scale performance-driven training needs assessment: A case study. Public Personnel Management, 29, 249–268.

Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage.

King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22, 229–247.


Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society for Training and Development, 13, 3–32.

Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco: Berrett-Koehler.

Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525–548. http://dx.doi.org/10.1177/0193841X8100500405

Mark, M. M., Henry, G. T., & Julnes, G. (1999). Toward an integrative framework for evaluation practice. American Journal of Evaluation, 20, 193–212.

Mark, M. M., & Shotland, R. L. (Eds.). (1987). Multiple methods in program evaluation (New Directions for Program Evaluation, Vol. 35). San Francisco: Jossey-Bass.

Mathison, S. (Ed.). (2004). Encyclopedia of evaluation. Thousand Oaks, CA: Sage. http://dx.doi.org/10.4135/9781412950558

McGuire, M. (2002, October). Canadian Evaluation Society project in support of advocacy and professional development: Literature review. Toronto: Zorzi.

McKillip, J. (1998). Needs analysis: Processes and techniques. In L. Bickman & D. Rog (Eds.), Handbook of applied social research (pp. 261–284). Thousand Oaks, CA: Sage.

Mertens, D. (1994). Training evaluators: Unique skills and knowledge. New Directions for Program Evaluation, 62, 17–27. http://dx.doi.org/10.1002/ev.1673

Mertens, D. M. (1989). Creative ideas for teaching evaluation: Activities, assignments, and resources (Evaluation in Education and Human Services Series). New York: Kluwer.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.


Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Scriven, M. (1994). The final synthesis. Evaluation Practice, 15, 367–382. http://dx.doi.org/10.1016/0886-1633%2894%2990031-0

Scriven, M. (1996). Types of evaluation and evaluators. Evaluation Practice, 17, 151–161. http://dx.doi.org/10.1016/S0886-1633%2896%2990020-3

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. http://dx.doi.org/10.1177/1098214004273180

Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education, 5(1), 19–25.

Stufflebeam, D. L. (1980). An EEPA interview with Daniel L. Stufflebeam. Educational Evaluation and Policy Analysis, 4, 85–90.

Stufflebeam, D. L. (2002). The CIPP model for evaluation. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation. Dordrecht: Kluwer.

Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage Foundation.

Tashakkori, A., & Teddlie, C. (Eds.). (2002). Handbook of mixed methods for the social and behavioral sciences. Thousand Oaks, CA: Sage.

Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.). (1994). Handbook of practical program evaluation. San Francisco: Jossey-Bass.

Witkin, B. R., & Altschuld, J. W. (1995). Planning and conducting needs assessments: A practical guide. Thousand Oaks, CA: Sage.

Worthen, B. R., & Sanders, J. R. (Comps.). (1973). Educational evaluation: Theory and practice. Worthington, OH: C. A. Jones.


Worthen, B. R., & Sanders, J. R. (1987). Educational evaluation. New York: Longman.

Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and guidelines (2nd ed.). New York: Longman.