
American Journal of Pharmaceutical Education 2003; 67 (3) Article 89.

SPECIAL ARTICLES

Excellence in Curriculum Development and Assessment

Marie A. Abate, PharmD,a Mary K. Stamatakis, PharmD,a and Rosemary R. Haggett, PhDb
aSchool of Pharmacy, West Virginia University
bOffice of the Provost, West Virginia University

Pharmacy and other health sciences educators have often faced curriculum and assessment issues. However, as expectations for accountability and assessment have continued to grow for higher education institutions, there is increasing concern about the development of assessment plans and the appropriate use of assessment data. A variety of approaches have been used for development of both curriculum and program assessment plans. Although there is no single correct method for accomplishing either task, there are important principles, concepts, characteristics, and approaches that should be considered, such as beginning with well-defined student learning outcomes, using educational approaches designed to facilitate student achievement of those outcomes, and designing assessment strategies that target the specific outcomes. Faculty at schools and colleges of pharmacy need to understand educational concepts and theories, the principles/characteristics of effective assessment programs, obstacles to assessment plan development and ways to minimize them, and methods to create an environment conducive to curriculum and assessment efforts. They should also consider their own unique circumstances when undertaking curriculum modifications and preparing/implementing a comprehensive assessment plan. Professional associations and accrediting agencies can also fill an important role by assisting schools and colleges in their efforts to improve student learning.

Keywords: curriculum, assessment, excellence

INTRODUCTION

Institutions of higher education strive to be recognized for their commitment to providing effective, high quality educational programs, thus fostering academic excellence in both faculty and students. Students and their parents demand high quality programs and use “quality” as a metric in making the decision about which college to attend. Faculty want to be part of a program with established excellence, knowing that this will enhance their reputation and career development. The public also seeks measures of quality, whether real or imagined, and expects academic institutions to be of high quality. Pharmacy education has undergone major change over the past decade with the approval of the entry-level Doctor of Pharmacy (PharmD) degree program. The American Council on Pharmaceutical Education (ACPE) developed new standards for the professional PharmD degree program that were adopted in June 1997 and became effective July 1, 2000 (ie, Standards 2000). These standards and their associated guidelines are designed to assist pharmacy education institutions develop and maintain academically strong, effective programs that are responsive to changing health care needs.

Since quality and excellence in education are important to all aspects of society, focus has been placed upon curricula and assessment strategies to assure that programs are accomplishing their missions. An overview is provided of selected aspects of these topics that are of particular interest and concern to pharmacy education, along with additional recommended readings. Areas discussed include higher education and pharmacy education effectiveness, curriculum development (student learning outcomes, instructional methods – concepts, approaches, student learning evaluation, curriculum mapping), and program assessment (principles and characteristics, barriers and challenges, approaches and methods). Recommendations to enhance progress in these areas are provided for consideration by schools/colleges, the American Association of Colleges of Pharmacy (AACP), and ACPE.

Corresponding Author: Marie A. Abate, PharmD. Mailing Address: School of Pharmacy, West Virginia University, 1124 Health Sciences North, Morgantown, WV 26506-9520. Tel: (304) 293-1463. Fax: (304) 293-7672. E-mail: [email protected].




A lack of literature consensus, as well as considerable confusion, exists for definitions of many assessment-related terms.1 A glossary is provided (Appendix 1) for several of these terms and includes other terms used similarly in the literature. The definitions were selected based upon whether they reflected those used in the majority of published literature or concepts that most literature sources appeared to agree upon. It should be kept in mind that much of the published program assessment information comes from the field of medicine. Since pharmacy and medicine education share many similarities with other health-related fields, the health care disciplines are urged to work together to adopt and use common terminology for the same assessment-related descriptions.


HIGHER EDUCATION EFFECTIVENESS

Mechanisms are in place to judge and certify the quality and effectiveness of higher education institutions. Most academic institutions are accredited by organizations called regional accreditation agencies, such as the Higher Learning Commission (HLC), a commission of the North Central Association of Colleges and Schools. Regional accreditation qualifies an institution to receive federal financial aid, and it is a prerequisite in order for a given degree program at an institution to be accredited by a professional organization. In the case of pharmacy, that professional organization is ACPE. Further, a variety of national rankings purport to identify the highest quality institutions for a given discipline. While such rankings have potential problems and limitations, these “stamps of approval” are often very important in attracting students to enroll and in their job placement after graduation. Accountability, institutional program review, and higher education accreditation all play a role in determining higher education effectiveness.

Higher education generally receives broad public support. The National Center for Postsecondary Improvement (NCPI) reported that in a random sample of 1,000 adults, 79% rated higher education’s performance as good or excellent.2 On the other hand, policy makers generally believe that colleges and universities are not as effective as they could be. Institutional effectiveness examines the extent to which institutions meet their stated mission, goals, and objectives. It is in this context that the issue of “accountability” is raised.

For example, federal legislators are likely to demand more “accountability” from colleges and universities as part of the process of reauthorizing the Higher Education Act. Republicans in Congress have proposed that federal financial aid would be denied to colleges whose completion rates do not measure up to a certain standard. Similarly, the major lobbying group representing for-profit colleges, the Career College Association (CCA), is asking Congress to oblige colleges to publish annual “report cards” that would measure success in retaining and graduating students and in preparing students for life after college.3 Items on the report card would include: success of graduates in obtaining jobs, performance of graduates on licensing or certification exams, and alumni and employers’ satisfaction. Interestingly, student learning per se is not a focus of this proposal.

Colleges are already accountable to a number of entities: accreditation authorities, state governments, and the Department of Education. For some, the real issue is not about being accountable but about the performance of graduates in the state and national economies. Some see accountability as a way to leverage institutional change. The question really being asked is, “What value did the students receive for the education they just paid for?”

At many institutions, academic programs are reviewed on a regular basis, either through an internal process or by an external advisory or governing board. The effectiveness of academic programs has traditionally been judged on the basis of “inputs,” eg, number of students, faculty, physical and financial resources, viability, necessity, and consistency.

The adequacy and quality of an academic program has historically been measured by the preparation and performance of its faculty and students and the adequacy of the physical facilities. Issues that supposedly speak to program adequacy include: the degree requirements and significant features of the curriculum, the percentage of faculty holding tenure, the extent to which part-time faculty are used, the level of academic preparation of the faculty, admission standards and entrance abilities of students as judged by results on standardized tests (ACT, SAT, TOEFL, GRE, etc) and high school or baccalaureate grade point average (GPA), and physical and financial resources.


Programs are also evaluated for their viability. Viability can be defined as a program’s past ability and future prospects to attract students and sustain a workable, cost effective program. Viability is tested by an analysis of the unit cost factors, the ability to sustain a critical mass, and the relative productivity of the program. Evidence of viability is also based upon past trends in enrollment and patterns of graduates.

Another evaluative issue that can be addressed is a program’s necessity. Is the program necessary for the institution’s service region? Is the program needed by society, as judged by current employment opportunities, evidence of future need, and rate of placement of the program’s graduates?

Consistency of a program with the institution’s mission is another factor for consideration. A program needs to be a component of and appropriately contributing to the fulfillment of the institution’s mission. This involves determining the centrality of the program to the institution, or how well the program complements or draws from the institution’s other programs.

While all of the above factors have been used to judge program quality, the evaluation of a program’s quality and effectiveness has moved more recently from “input-based” to more “outcome-based” evaluation. Outcomes are sometimes thought to be items such as the number of graduates of a program. However, when the term outcome-based is used today, it generally refers to the assessment of student learning outcomes. This approach places student learning at the center of assuring and advancing the quality of higher education.

In the 1980s, A Nation at Risk focused primarily on the declining quality of primary and secondary schools but helped establish the context for a similar analysis of postsecondary education.4 It engendered the report, Involvement in Learning: Realizing the Potential of American Higher Education, that identified the need for enhanced student involvement, for higher expectations, and for the assessment of student learning.5

Higher education faculty members are quite good at collecting data but less proficient at analyzing those data, especially as they pertain to learning. This is the step where evaluation, the process of reflecting upon and interpreting the collected data to determine what represents new knowledge, needs to occur. It is only through this evaluation that the final, critical step in the assessment process can be undertaken. The phrase “closing the assessment feedback loop” is used to describe changes made to a curriculum based on what the faculty have concluded and learned from the assessment data and its evaluation. Changes in the curriculum and its delivery, as well as new faculty development programs, should arise from an analysis of assessment data. Institutions should work toward a culture of assessment in which there is a willingness to not only create measures and collect data about outcomes, but to also use this information to make changes that will improve student learning.

Recently, the HLC, the regional accreditation group for higher education institutions in the 19-state North Central region, announced a new set of criteria for accreditation that will go into effect in 2005. Other regional accreditation groups such as the Middle States Association of Colleges and Schools (MSA), the New England Association of Schools and Colleges (NEASC), the Northwest Association of Schools, Colleges and Universities (NWA), and the Southern Association of Colleges and Schools (SACS) have undergone or are undergoing similar transformations. The new HLC criteria for accreditation are characteristic of the changes occurring in higher education. The focus shifts from what the institution has done in the past to what it is prepared to do in the future. Emphasis is placed on learning rather than on teaching, reminiscent of the recommendations of the Kellogg Commission in their report, Returning to our Roots: Toward a Coherent Campus Culture.6 The new criteria for accreditation move from inputs, resources, and structures to outcomes. For example, the HLC New Criterion Three: Student Learning and Effective Teaching asks the institution to provide evidence that supports the following: the organization’s goals for student learning outcomes are clearly stated for each educational program and make effective assessment possible; the organization values and supports effective teaching; the organization creates effective learning environments; and the organization’s learning resources support student learning and effective teaching.

Institutions of higher education struggle to portray the qualities of a learning organization, including the readiness to define priorities, measure progress, create feedback loops, and apply what is learned to improve performance. Despite many years of the assessment movement, few institutions systematically use assessment results to improve the curriculum and student learning.7 Unlike other “movements,” the assessment movement is not going to go away. As long as there are external forces calling for accountability, assessment will be an expectation. The good news is that faculty have the opportunity to “own the process” - it does not have to be “done to them.” Assessment should be used to transform the enterprise from teaching-centered and rooted in the past to one that is learning-centered with an eye to the future.


PHARMACY EDUCATION EFFECTIVENESS

Overview

Pharmacy education has made substantial strides forward in recent years in the area of curricular development and refinement based upon the expected abilities of graduates, as well as in the development and initiation of assessment plans. AACP has facilitated these efforts by sponsoring curriculum and assessment related institutes and workshops, establishing commissions and focus groups to explore key topics, and by establishing the Center for the Advancement of Pharmaceutical Education (CAPE) Advisory Panel on Educational Outcomes. ACPE’s Standards 2000 includes several curriculum/assessment related standards that should be met by pharmacy schools/colleges for accreditation purposes.8 Key aspects of these standards include:

- establishment and maintenance of a system to assess the extent to which the educational mission, goals, and objectives are being achieved, including use of formative and summative indicators, evaluation of knowledge/skills application to patient care, and analysis of outcomes measures for purposes of continuing development and improvement (Standard No. 3);
- delineation of professional competencies that should be achieved, development of outcome expectations for student performance in those competencies, and inclusion of student self-assessment of performance (Standard No. 11);
- description of ways in which curricular content is taught and learned, including teaching efficiencies and effectiveness, innovation, curricular delivery, educational techniques/technologies integration, fostering of critical thinking/problem-solving skills, meeting diverse learner needs, involvement of students as active, self-directed learners, and transitioning from dependent to independent learning (Standard No. 12);
- establishment of principles and methods for formative and summative evaluation of achievement using a variety of measures throughout the program, assessments that measure cognitive learning, skills mastery, communication ability, use of data in critical thinking/problem-solving, and measurement of student performance in all professional competencies (Standard No. 13); and
- use of systematic and sequential evaluation measures throughout the curriculum, focusing on all aspects, and application of outcomes and achievement data to modify/revise the professional program (Standard No. 14).

An increasing number of pharmacy literature reports describing curriculum revision, mapping of course content and objectives to program learning outcomes, and assessment efforts at various schools and colleges attests to the progress made. Higher education and health professions accreditation organizations have stated the need for assessment data to document educational effectiveness and make ongoing curricular changes to enhance learning. The logical questions then become: How well has pharmacy education performed with regard to developing comprehensive student learning outcomes assessment plans? What are their assessment findings?

Two surveys examined educational outcomes assessment efforts at US schools/colleges of pharmacy. A 1998 survey (64% response rate) gathered data about tools used to assess/measure student abilities and competencies.9 The responses were categorized into five areas, including an assessment center approach, use of an objective structured clinical examination (OSCE), educational outcomes assessment surveys, clerkship outcomes assessment surveys, and a combination approach, eg, surveys, NAPLEX results, experiential performance, etc. The most commonly used tool was the educational outcomes survey approach, followed by a combination approach, clerkship outcomes assessment, and OSCE use. It was concluded that most schools/colleges were at only the beginning stage of outcomes assessment and lacked data from use of their tools. Limitations of this survey include a lack of response from about one third of schools/colleges, no quantitative data for the number using each type of tool, and the fairly narrow survey focus.

A 2000 survey (69% response rate) obtained data from pharmacy schools/colleges regarding the persons involved with outcomes assessment, the factors that drive the process, the prevalence of formalized outcomes and assessment plans, and the instruments being used.10 Twenty-nine percent of respondents had undergone an ACPE accreditation visit since 1998. Only 49% of respondents had an assessment committee, although the curriculum committees at some schools/colleges might have similar responsibilities. Assessment committees were less likely than curriculum committees to involve students or practitioners. Only 11% had the equivalent of a full-time professional position assigned to an assessment role. While 71% of respondents had an approved list of general education abilities for their program, only 44% of respondents had a written outcomes assessment plan and, of these, only 65% (about 29% of respondents overall) indicated their plan was formally adopted. The extent to which assessment data were obtained and actions taken was not described. The dean, another administrative officer, and faculty were indicated as drivers (multiple drivers could be selected) of the assessment process at 71%, 63%, and 54%, respectively, of those schools/colleges with a written plan. The most frequent instrument used for outcomes assessment was NAPLEX, with small numbers (<10 each) mentioning tools such as the Comprehension Apprehension Scale and the Watson-Glaser Critical Thinking Appraisal. The survey did not provide for more detailed descriptions of the assessment-related work undertaken by schools/colleges.

Although several schools/colleges did not respond to this survey, a number of concerns are evident from the data. Deans or other administrative officers, who should be active proponents and facilitators of assessment planning, did not appear to be driving the assessment process at about 28% to 37% of those institutions with a written plan. Less than half of the respondents had a written plan, and less than a third had a plan formally adopted by faculty. For the 51% of institutions that lacked an assessment committee, the extent to which the curriculum committees are actively involved in assessment activities is unclear. Only a small minority of respondents had a full-time equivalent position assigned to assessment, a time-consuming process. Heavy reliance appears to be placed on NAPLEX as an assessment instrument, although an examination of this type has limitations on the activities and skills that can be measured,11 and performance data for specific individual outcomes are not provided to schools/colleges.

A number of literature reports since 2000 describe learning outcomes assessment efforts or plans at various pharmacy schools/colleges. Some focused on the assessment of specific skills or abilities such as literature evaluation, critical thinking, problem solving, or writing, while others are developing models for exploring learning outcomes assessment across the curriculum. However, only limited data from these assessments are reported. In conclusion, the majority of schools/colleges of pharmacy appear to be characterized at best as being in only the early stages of establishing an institutional culture of assessment and comprehensive outcomes assessment plans, with relatively few findings available to date.

Curriculum Development

Although the term “curriculum” is used frequently in pharmacy education, curriculum is often defined narrowly. Webster defines curriculum as “the courses offered by an educational institution or one of its branches” or “a set of courses constituting an area of specialization.” However, curriculum also encompasses learning experiences set forth by a program or school and should include all aspects of these experiences. In addition to content, curriculum considers student learning outcomes, teaching and learning processes, student evaluation, and program (student learning outcomes) assessment. Assessment is a critical step in curriculum development, not only determining whether the learning outcomes of a course or program were met but also directly influencing student learning. The Report of the Focus Group on Liberalization of the Professional Curriculum appointed by AACP defined curriculum as “an educational plan which is designed to assure that each student achieves well-defined performance-based abilities.”12 Curriculum development should be an ongoing process that is responsive to changes in pharmacy practice and society and that incorporates new scientific discovery.

Student learning outcomes. Development of student learning outcomes is the foundation to building curricula because learning outcomes must guide content development and selection of instructional methodologies. Further, learning outcomes should be derived from the educational mission of the institution and, in the case of pharmacy education, should be congruent with clinical practice. The CAPE Advisory Panel on Educational Outcomes provides an excellent starting place for development of a professional pharmacy program’s learning outcomes.13 Student learning outcomes provide the student with the institution’s expectations of them upon completion of the program of study. Using an outcomes-based approach, the focus of curriculum development is on what students will be able to do rather than what faculty will do. Thus, the curriculum should be planned around student learning outcomes that link knowledge, skills, and behavior/attitudes/values, rather than simply using content or subject areas as a road map for curricular development.14 Once outcomes are set forth, teaching and learning strategies are then developed to support their achievement. Thus, the educational environment is created as a product of an outcomes-based curriculum.

Student learning outcomes should be explicit and measurable, enabling the institution to assess the effectiveness of the curriculum and to describe to stakeholders (eg, students, faculty, administrators, pharmacy practitioners, accreditors) what the curriculum hopes to achieve. Good educational outcomes should specify five essential components: “Who/will do/how much/of what/by when?”14 For example, asking a first year PharmD student to “understand the components of a patient’s medical record” upon completion of a course may describe “who” and “by when,” but is ambiguous about what specifically and how much students should achieve. A better understanding of the outcome would be achieved if a first year PharmD student was asked to “collect relevant information from the patient’s medical record to create an accurate and complete patient profile.” Use of an action verb (eg, classify, evaluate) that describes the outcome expectation assists the student in understanding what should be accomplished. Bloom’s taxonomy categorizes cognitive levels of learning on a continuum from simple to complex (ie, knowledge, comprehension, application, analysis, synthesis, evaluation).15 Thus, student learning outcomes should be developed for the desired cognitive level of learning. In addition to professional learning outcomes, the AACP Commission to Implement Change in Pharmaceutical Education recommends development of outcomes that describe general abilities (eg, communication, ethics).16 A program should also have a process in place for continuously re-evaluating its student learning outcomes and modifying them as indicated. The CAPE Educational Outcomes document will be undergoing review and revision in 2004.
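To make the “Who/will do/how much/of what/by when” structure concrete, the following minimal sketch (our illustration, not from the article; the verb lists and the outcome record are hypothetical) shows one way outcome statements could be stored with their “who” and “by when” components and screened for a Bloom-classifiable action verb.

```python
# Illustrative sketch only: outcome statements as structured records,
# checked against Bloom's taxonomy levels. The verb lists and example
# outcome are hypothetical, not a standard vocabulary.

BLOOM_LEVELS = {
    "knowledge": {"define", "list", "recall", "identify"},
    "comprehension": {"explain", "describe", "classify", "summarize"},
    "application": {"apply", "demonstrate", "use", "collect"},
    "analysis": {"analyze", "compare", "differentiate"},
    "synthesis": {"create", "design", "construct"},
    "evaluation": {"evaluate", "critique", "judge"},
}

def bloom_level(statement: str) -> str:
    """Return the Bloom level implied by the statement's leading action verb."""
    verb = statement.split()[0].lower()
    for level, verbs in BLOOM_LEVELS.items():
        if verb in verbs:
            return level
    return "unclassified (verb may be ambiguous, eg 'understand')"

# "Who" and "by when" are carried alongside the statement itself.
outcome = {
    "who": "first year PharmD student",
    "by_when": "end of the course",
    "statement": ("Collect relevant information from the patient's medical "
                  "record to create an accurate and complete patient profile"),
}

print(bloom_level(outcome["statement"]))   # -> application
print(bloom_level("Understand the components of a patient's medical record"))
# -> unclassified (verb may be ambiguous, eg 'understand')
```

A check of this kind cannot judge whether an outcome is meaningful, but it does flag statements, such as the “understand” example above, whose verbs leave the expected performance ambiguous.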

Not only should learning outcomes be developed for a program, but they should also be developed and assessed for individual courses and lesson plans. As the first step in course development, instructors should prepare outcomes for individual courses that are in accord with the program’s student learning outcomes. Outcomes should complement or build upon those in other related or previous courses, be appropriate for the level of the student, and be action-oriented. For each student learning outcome developed for a course, specific performance criteria (ie, criteria which the instructor will use to evaluate student performance) should be developed and communicated to students.17 The criteria should describe clearly what the students need to do to achieve the outcome in sufficient detail that another instructor could use the criteria and arrive at the same conclusion about the student’s performance. The performance criteria help to describe the level of expertise required for students to achieve a given outcome, and they should be used to determine the content and instructional methods necessary to achieve the outcome. Student learning outcomes that require higher cognitive levels of learning (eg, evaluation) will require different types of educational and evaluative methods compared to those that require lower cognitive levels of learning (eg, knowledge).18

Instructional Methods. Concepts. In order to select and develop appropriate instructional methods, some basic conceptual knowledge is needed. What are some important basic points concerning teaching and learning? A key point is that teaching is not equal to learning. Talking to a student audience, ie, passive learning, does not guarantee that they will understand, process, synthesize, apply, and retain what is heard. Semantic networks consisting of a number of related concepts must be built in order to learn, and these knowledge networks change when new learning is experienced. The learners themselves are the center of the learning process (ie, student-centered learning or constructivism); they structure, organize, and use new information gained through interactions with their environment and need to have adequate self-study time to accomplish this. The recall and use of information is also affected by the situation or context in which learning occurs. For example, it can be difficult to quickly identify a person one usually sees only at work when that individual is encountered in a non-work environment.19

Learning style, the manner(s) in which an individual prefers to learn, can also affect teaching and learning effectiveness. Although learning has been said to be more effective when the teaching and learning environment match the learner’s style,20,21 providing “creative teaching/learning style mismatches” might actually help stimulate optimal learning.22 Some pharmacy schools/colleges reported that students preferred more than one type of learning modality or activity,23-25 and that incorporation of diverse learning activities in one course overcame individual differences in learning style.26 However, a problem with the learning style literature is that many different versions of cognitive or learning style measures exist, making the individual results from studies difficult to interpret and compare.21 How should pharmacy faculty address the issue of teaching and learning styles? As a first step, use of the same instrument(s) by schools/colleges to determine student learning styles would allow for better comparison of findings across campuses. More work is needed to address questions such as the role of diversity in affecting pharmacy student learning and styles, whether learning style data can be used to predict student success and how this could best be accomplished, and whether (and how) to adapt teaching styles to accommodate learning styles. Based on currently available data, a variety of learning approaches should be considered for use in courses.2,27,28 Faculty members should develop strategies for helping students adjust their learning approaches as appropriate for the specific task or situation.22,27 Faculty members should also recognize that since some pharmacy students may prefer passive learning methods,28,29 acclimating them to active learning approaches might take some effort.


In summary, to enhance the educational process and employ appropriate instructional methods, teachers need to apply learning concepts. They should guide student learning and draw upon prior learning, as well as expose students’ inconsistencies between current understanding and new learning experiences. They should engage students in active learning that allows for “construction” of their own knowledge, provide students with sufficient time in the curriculum to reflect upon and learn from new experiences, integrate knowledge and concepts rather than teach them in isolation, and use a variety of learning approaches in their courses. They also need to provide knowledge in a professionally meaningful manner, include different contexts and scenarios as well as work with authentic problems, and use assessment to drive and improve learning.11,19,30,31

Approaches. A variety of instructional approaches are needed to meet all the learning outcomes of a program. Since outcomes serve as a guide to students by describing what they should be able to do as they progress through a course or a program, strategies to provide students with sufficient opportunities to practice achievement of the outcomes are needed. Opportunity to practice skills should not be limited to the experiential year of the curriculum, but rather provided as a continuum throughout the curriculum. In addition, students should be given the opportunity to develop their problem-solving skills, integrate information from one discipline to another, and conceptualize how each piece of information relates to other materials learned previously and to pharmacy practice as a whole. Integrative teaching most likely will be necessary to achieve learning outcomes. However, integrated teaching does not necessarily mean the creation of formal integrated courses and curricula.

A trend occurring in medical schools is the implementation of integrated curricula.32 To a somewhat lesser extent, schools/colleges of pharmacy are also implementing integrative curricula that consist of multidisciplinary blocks of material. Examples from the pharmacy literature have described the integration of medicinal chemistry, pharmacology, pathophysiology, therapeutics, patient assessment, drug literature evaluation, and/or pharmacokinetics.33-37 Sprague et al. described their experiences with a five and a half week cardiovascular module that was taught on a full-time basis and that integrated pathophysiology, pharmacology, medicinal chemistry, pharmacokinetics, therapeutics, and drug information for students in the fourth year of a six-year PharmD program.33 Students agreed that the course objectives were met and that the flow of the materials was appropriate. Most of the citations in the pharmacy literature are available in abstract form only. In addition, most reports do not contain evaluative data and do not compare learning to more traditional classes. As integrative curricula become more widespread in schools/colleges of pharmacy, faculty members are encouraged to share their experiences and student outcomes by publication of their findings.

Ultimately, it is the faculty’s responsibility to determine the best educational strategies and methods of instruction to employ to achieve course and program outcomes. A brief synopsis of different instructional methods follows. Distance education and experiential education, including service learning, are not included in the discussion, and the reader is referred to separate papers by AACP.

Lecture is a common instructional method used in higher education and may be particularly beneficial for topics requiring lower cognitive levels of learning, for which students are primarily recalling information or describing/explaining concepts. Advocates of the lecture point to its relatively low costs, since the faculty:student ratio is low. In addition, course development costs are lower than for other methods of instruction. However, if achievement of outcomes requires higher levels of cognitive learning (eg, application, analysis, synthesis), lectures alone will likely be inadequate to meet course or program outcomes since lectures place students in a passive rather than active role.

Active-learning strategies have been introduced into large-group classrooms to increase problem-solving and critical thinking skills of students by placing them in a more student-centered environment.38 In addition to serving as a source of information, faculty become facilitators of learning. Examples of active-learning instructional strategies include evaluating case studies, solving authentic patient problems, peer group teaching, role-playing, writing, and building concept maps. Difficulties in transitioning to an active-learning environment can be minimized or avoided by setting faculty and student expectations at the start and by providing students with many opportunities to practice and learn problem-solving techniques.

Small-group teaching, recitation sessions, and pharmacy skills laboratories have the advantage of promoting problem-solving skills, facilitating teamwork, and enhancing the acquisition of skills. They are particularly useful for students who learn by “doing” and who may not otherwise participate in the large-group teaching environment. In addition, small-group discussions can be multidisciplinary in nature to increase collaborations among different health care professionals (eg, pharmacy, nursing, medicine, dentistry). The primary disadvantages to small-group teaching are that additional faculty resources (eg, facilitators, moderators) are needed, differences in learning may occur dependent upon differences in the facilitators’ knowledge and participation style, and individual student evaluations can be unreliable when multiple evaluators are used.

Problem-based learning (PBL) is an instructional technique used to promote meaningful learning and problem-solving skills. Although the definition of PBL has varied, it is an educational method focusing on the acquisition, synthesis, and appraisal of knowledge by actively working through problems in small groups using facilitated and self-directed learning.39 Cisneros et al. provide an excellent overview of problem-based learning research in pharmacy and medical education.40 Some studies show that student learning is at least equivalent to learning by traditional instructional methods, and student participants report an improvement in their problem-solving skills, use of information resources, and communication/interaction skills.41 In addition, PBL may facilitate opportunities for interdisciplinary learning.40 However, although there are numerous reports of incorporation of PBL into the pharmacy curriculum, most of the reports are descriptive in nature and few document the impact of PBL on student learning and problem-solving skills. Cisneros et al. note the need for “more long term assessments of the effects of PBL on student learning.”40 A primary disadvantage of PBL is that additional resources (eg, facilitator time) are needed. Other disadvantages include inconsistencies between facilitators and less student exposure to a broad range of content areas.

A variety of technologic tools are being used in pharmacy education, including computer-assisted instruction, web-based course development/management software, audio/video tapes, and personal digital assistants (PDAs). Although little evaluative information is available in the literature about the use of PDAs in health sciences education, they are being used to collect patient information, access the medical literature, document clinical interventions, complete quizzes, and manage lecture and course material.42,43 Zarotsky and Jaresko provided an excellent review of the use of technology in pharmacy education and highlighted the limited data that support improved learning with the use of computer-assisted instruction.44 Studies should continue to be undertaken to determine how and when to optimally incorporate technology into educational experiences and whether learning is improved with its use.

In conclusion, selecting appropriate instructional methods requires skill and expertise in the area of education. Thus, faculty should approach their role as educators in the same manner as they do their roles as clinicians or researchers.19,45 As clinicians or researchers, faculty generally strive to remain current with the literature in their field, consider new approaches to enhancing and improving their work, seek peer review or feedback about the quality of their work, and take care that new graduates are sufficiently prepared to enter a career as a researcher or clinician. Often, schools/colleges provide “release time” from teaching or committee responsibilities to allow new faculty members to start their research laboratory or to develop their practice site. In contrast, as educators, faculty in the health sciences disciplines often do not receive specific education or training concerning instructional approaches, learning theories, or how to best facilitate student learning. They may not have sufficient time, given other demands, to explore and learn relevant educational theories, concepts, and the advantages/disadvantages of various instructional methods on their own. As a result, they may negatively view critiques of their teaching or suggestions for change as an attack on their individual status as a faculty member. As educators, faculty must not be satisfied with simply the use of an “acceptable educational approach” but rather should continually ask whether their teaching is effective and if their teaching/educational strategies can be improved.19 Drawing on an evidence-based approach to pharmacy education is one method that might allow for enhancement of teaching and learning,46 although more work is needed to determine the extent to which various educational techniques and methods can be transferred reliably from one environment to another. Faculty should take advantage of available development opportunities in the area of education through attendance at relevant national meetings and workshops, or perhaps through use of professional development or leave time that colleges, schools, or universities might offer. For example, the Education Scholar modules, available through AACP, describe educational approaches and assessment and could potentially be completed as a professional development activity using leave time.

Student learning evaluation. Evaluation of student learning should include formative individual feedback that provides students with the opportunities to practice attaining outcomes and to learn from mistakes. Summative individual feedback is intended to judge or verify performance. Formative evaluations may be perceived by faculty members as time consuming and unnecessary because they are not part of an overall course grade. However, formative feedback provides valuable information to the student in terms of their learning, areas of weakness, and ultimately their ability to successfully achieve the course’s learning outcomes. Self-evaluations can also provide students with an opportunity to monitor their own progress and can provide faculty with valuable insight into student strengths and weaknesses.

Evaluative data can be obtained through a variety of means (described in more detail later), including rating forms, self-evaluation forms, oral or written examinations, assignments, papers, questionnaires, interviews, role-playing exercises, simulations, OSCEs, portfolios, and direct observation in practice. Student evaluation data can be used in assessment plans to modify course content and educational methods employed, based on what the techniques reveal that students have learned compared to what they should have learned. Student evaluations should serve as the link between a course’s learning outcomes and instructional methods. When assessment strategies are employed to determine students’ achievement of learning outcomes across courses and the entire program, the resulting data should be used to make changes within and across courses and the curriculum.

Curriculum mapping. Due to the scope and breadth of a curriculum, particularly as revisions and modifications are made over time to individual courses and sections, it can be easy to lose sight of its structure as a whole. Curriculum mapping is one technique used to diagrammatically demonstrate the relationships or links between different aspects of the curriculum: content, learning outcomes, learning resources, educational strategies, student assessment, etc.47 Curriculum mapping can help ensure that there are no gaps or unnecessary redundancies in content, promote an integrated curriculum by showing the relationship between different content areas, and identify the types and range of assessment methods being used.48-50 Curriculum mapping can also be used to identify additional assessment opportunities that can be incorporated into a program assessment plan.51
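Because a curriculum map is essentially a course-by-outcome table, even a very simple representation can surface gaps and redundancies mechanically. The sketch below is our minimal illustration, not from the article; the course names and outcome labels are invented.

```python
# Illustrative sketch: a curriculum map as a course -> outcomes table.
# Course names and outcome labels are hypothetical examples.

curriculum_map = {
    "Pharmacology I": {"drug-mechanisms", "problem-solving"},
    "Therapeutics II": {"patient-assessment", "problem-solving", "communication"},
    "Drug Literature Evaluation": {"literature-evaluation", "critical-thinking"},
}
program_outcomes = {
    "drug-mechanisms", "patient-assessment", "problem-solving",
    "communication", "literature-evaluation", "critical-thinking", "ethics",
}

# Invert the map: for each program outcome, which courses address it?
coverage = {outcome: [] for outcome in program_outcomes}
for course, outcomes in curriculum_map.items():
    for outcome in outcomes:
        coverage[outcome].append(course)

gaps = sorted(o for o, courses in coverage.items() if not courses)
repeated = sorted(o for o, courses in coverage.items() if len(courses) > 1)

print("Gaps (no course addresses):", gaps)      # -> ['ethics']
print("Addressed by more than one course:", repeated)  # -> ['problem-solving']
```

The same table could be extended so that each cell also records the assessment method used, which would support the inventory of assessment types and opportunities described above; repetition flagged this way is not necessarily a problem, but it prompts the question of whether it is deliberate reinforcement or unintended redundancy.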

Program Assessment

Principles and characteristics. A program (student learning outcomes) assessment plan should ultimately allow faculty to make informed changes in the curriculum to improve student performance. Assessment data collected should be sufficiently precise to not only provide evidence of a need for improvement but also to indicate the specific steps that can be taken to make the improvement.

Surveys are often used as an indicator of student learning. Students might be asked to indicate the extent to which a program actually addressed its learning outcomes and their perception of the degree to which they have achieved or can perform each of the outcomes. Graduate surveys are used to obtain the opinions of graduates about a program’s learning outcomes and areas of strength and weakness in their learning. Employer surveys are used to gather information about employers’ satisfaction with the performance of program graduates in the work place and ways in which graduates’ education might be improved. These are indirect measures of student learning in that they ascertain only the perceived extent or value of the learning experiences and outcomes. Other examples of indirect measures include focus group learning-related discussions and job placement statistics.52

While indirect measures of student learning provide useful information, most accreditation agencies expect that at least some of the assessment methods will use direct measures of students’ knowledge or skills based upon comparisons to measurable objectives or outcomes. Examples of direct measures include examinations, presentations, performance appraisals, and portfolios, among several others.52

While some may refer to high course grades or low student dropout rates as an indication of program success, neither student course grades nor GPAs are reliable or adequate measures of student learning across a program of study. Grades or GPAs “tell us little about what a student has actually learned in the course” and “very little about what a student actually knows or what that student’s competencies or talents really are.”53 Grades are influenced by many factors, including course grading policies, instructors’ experience and academic rank, and predominant modes of instruction. They usually encompass a variety of types of knowledge and skills, some of which students might master and others which they might not. Thus, it is hard to make comparisons between student grades in terms of what students actually know or can do.54

The literature clearly indicates that assessment drives learning. Student evaluation and assessment methods employed will determine how and what students learn, since they will focus attention on those aspects. For example, if acquisition of factual knowledge is predominantly assessed, students will primarily study to acquire and memorize facts. This has also been referred to as a “steering effect.”55 Thus, program assessment is ultimately critical to educational effectiveness. What are the principles and characteristics of effective program assessment practices? Several are identified (Table 1), including:

Table 1. Summary of Principles/Characteristics of Effective Program Assessment55-59

Principles and Characteristics
1. Integrated into culture
2. Ongoing and sustained
3. Based upon appropriate student learning outcomes
4. Reflects learning as multidimensional and integrated
5. Considers experiences leading to outcomes
6. Involves representatives from across educational community
7. Part of several practices to promote change
8. Used in reports to external stakeholders
9. Undertaken in receptive, supportive, enabling environment
10. Basis for funding/re-allocation decisions
11. Directed by competent, trustworthy individuals
12. Regularly re-evaluated

1. Assessment must be integrated into an institution’s culture, at the highest level.56 Administrators must be “on-board,” meaning that they should not only advocate and promote assessment practices but should be knowledgeable about assessment-related issues and the value of assessment. The likelihood that faculty will eagerly participate in assessment processes is slim if the administration is not an active facilitator and supporter.

2. Assessment should be ongoing with sustained commitment by all departments and faculty. This is more likely to occur when it clarifies questions or concerns that people care about and when it provides evidence in areas important for decisions.56

3. Assessment should be based upon clear, explicit, focused, and measurable student learning outcomes for a program, which in turn should reflect the educational mission and goals.56,57

4. Assessment should reflect learning as multidimensional and integrated and should reveal performance over time. Thus, multiple methods that are carefully selected, with consideration given to their reliability and validity, should be used to assess the program’s student learning outcomes.56,57

5. Attention must be given not only to the outcomes from assessment methods but also to the experiences leading to those outcomes. The processes of teaching and curriculum development used to enhance student learning define successful outcomes assessment.56

6. Assessment should involve representatives from across the educational community, including faculty, staff, students, and external stakeholders such as employers.56 They should be included throughout the assessment process, including the planning, implementation, review of drafts of assessment instruments, review and analysis of data, etc.

7. Assessment should be a part of a larger set of practices to promote change, such as holding assessment-related faculty development sessions, having ongoing faculty discussions related to assessment and learning, and using assessment data to make curricular changes.56

8. Assessment data should be used in reports to external stakeholders to show that a program’s goals are being met.56

9. Assessment is most effective when undertaken in an environment that is receptive, supportive, and enabling. This includes having strong support from central administrators, adequate resources for implementation, creation of an atmosphere of trust that data will be used for improvement and not for punitive measures, and the establishment of avenues for communicating results to a variety of audiences.56

10. Assessment findings should be used as a basis for funding decisions or reallocation of resources as indicated.57

11. Assessment efforts should be directed by persons who are competent, motivated, and trustworthy to enhance the credibility and acceptance of the findings,57,58 but all faculty must assume responsibility for assessment quality.58

12. Assessment plans should themselves be re-evaluated on a regular basis.55,59

As schools/colleges develop, implement, review, and refine their learning outcomes assessment plans, they should continually strive for incorporation of the principles and characteristics of successful assessment programs.


Table 2. Assessment-Related Barriers and Challenges

Faculty-Related
- Use of unclear, confusing assessment terminology
- Lack of education/training in assessment practices
- Proceeding too rapidly or at too large a scope
- Inefficient use of existing methods/measures
- Lack of appreciation of need to make changes
- Preference for structured, objective measures/methods
- Turnover in faculty or leadership

Resource-Related
- Time-consuming, difficult process
- Complex interrelationships among input, process, and output factors
- Financial/budget constraints

Student-Related
- Lack of understanding of purpose
- Inefficient, time-consuming process
- Unfair or unrealistic process
- Lack of feedback


Regardless of whether a program has just initiated an assessment plan or whether the program has an established plan, it would be useful to be able to compare a program’s progress against a “standard” of some type. The Higher Learning Commission of the North Central Association (NCA) of Colleges and Schools has developed a tool, the Levels of Implementation, that has several potential assessment-related uses.60 The Levels of Implementation is a matrix consisting of three levels (Level One = “Beginning Implementation of Assessment Programs;” Level Two = “Making Progress in Implementing Assessment Programs;” Level Three = “Maturing Stages of Continuous Improvement”) and four patterns of descriptions or characteristics that are associated with each level. The information in the tool was prepared based upon the observations and comments obtained during numerous institutional site visits conducted by NCA. The descriptors/characteristics are divided into four main areas: Institutional Culture, Shared Responsibility, Institutional Support, and Efficacy of Assessment. A worksheet accompanying the tool can be used for rating. For example, for the first pattern one can review the various descriptors/characteristics related to Institutional Culture – Collective/Shared Values that describe Level One, Level Two, and Level Three, and use the worksheet to rate the extent to which the characteristics describe their own department, unit, division, or school. The Levels of Implementation tool can be used by programs to establish their baseline assessment characteristics as well as to measure progress, as a guide as they develop assessment plans, to determine whether all their faculty members and administrators share the same impressions about their assessment efforts, and to identify and initiate specific changes needed to advance their assessment plans. Accreditation evaluation teams could also use the Levels of Implementation tool to help identify important questions about assessment to ask programs in order to determine the program’s progress and efforts to improve student learning. The tool might further assist evaluation teams in providing consistent advice to programs about assessment.

Barriers and challenges. Implementing and establishing an assessment process is not necessarily an easy task. What are some of the key barriers and challenges to assessment success that schools/colleges must continually work to overcome? These can be characterized as faculty-related, resource-related, and student-related barriers and challenges (Table 2).

A major barrier to successful assessment implementation is lack of faculty support,56 for which there are several contributing factors. The lack of literature consensus about assessment terminology and the use of different terms for the same meanings are confusing at best. A shared language and concepts are essential to ensure a clear understanding of the meaning of "assessment" and to identify appropriate methods for assessing student learning outcomes. Faculty support can also be enhanced by creating the proper culture and environment as described previously. The availability of a sufficient number of faculty development opportunities and regularly scheduled discussions related to assessment and its importance are key to enlisting and retaining faculty commitment.

Proceeding slowly can help establish a shared trust that negative findings will not affect a faculty member's status or be used punitively. Beginning slowly and keeping the assessment process manageable in size will decrease the likelihood of overwhelming already overburdened faculty. However, results obtained should be shared with faculty as rapidly as possible so they can observe progress being made.56 The use of course-embedded assessment methods should also be considered when feasible to increase the efficiency of the assessment process. Course-embedded assessment uses existing course tools, instruments, or measures to generate program assessment data.61 This does not mean using course grades or global class examination scores for assessment purposes. Rather, a faculty member might have developed a method to assess a specific ability in their course. Data from this measure could be incorporated into the assessment plan, and they could also be combined with data from other program areas that assess the same ability. For example, a medical literature evaluation course might use a rubric to provide a more objective assessment of students' abilities to appropriately critique a journal article. Experiential rotations might also assess students' literature evaluation skills. Findings from the rubric and the experiential rotations can then be used together to help identify specific weaknesses in these abilities. Course-embedded methods can facilitate the collection of assessment data for specific outcomes, provided that appropriate tools, instruments, or other measures exist that are valid and reliable and allow for meaningful interpretation. If such measures do not exist, time and effort must be expended to create them.61
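To illustrate how course-embedded data might be pooled with data from other program areas, the short sketch below (written in Python purely for illustration; the rubric criteria, scores, and 2.5 flagging threshold are hypothetical and not drawn from this article) averages ratings for one outcome across a course and experiential rotations and flags criteria that fall below the threshold:

    # Minimal sketch: pooling course-embedded rubric scores with
    # experiential rotation ratings for one learning outcome
    # (literature evaluation). All names and values are hypothetical.
    from statistics import mean

    scores = {  # ratings on a 1-4 rubric scale, by criterion and source
        "critiques study design": {
            "literature evaluation course": [3, 4, 2, 3],
            "experiential rotations": [2, 3, 2],
        },
        "interprets statistical results": {
            "literature evaluation course": [2, 2, 3, 2],
            "experiential rotations": [2, 1, 2],
        },
    }

    FLAG_BELOW = 2.5  # assumed cutoff for flagging a possible weakness

    for criterion, by_source in scores.items():
        pooled = [s for ratings in by_source.values() for s in ratings]
        avg = mean(pooled)
        status = "possible weakness" if avg < FLAG_BELOW else "satisfactory"
        print(f"{criterion}: mean {avg:.2f} over {len(pooled)} ratings ({status})")

In practice a program would also weigh sample size and rater reliability before acting on such flags; the point is simply that data already collected for course purposes can feed the program-level plan.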

Another barrier is that pharmacy faculty might inappropriately feel that as long as students pass state licensure examinations, their program is not in need of change and additional assessment methods are unnecessary. Licensure examinations have very structured and prescribed content and context, and as such might determine at best that a graduate "knows how." They do not adequately measure desired attributes such as professional values, ethics, judgment processes, or skills requiring interpersonal interactions.11

An additional faculty-related assessment barrier is that faculty members may prefer structured responses to unstructured ones, and objectivity rather than judgment. Since authentic evaluations (ie, those with a context that reflects actual practice) tend to be less structured, with more emphasis placed on observation and judgment, their use for assessment purposes might make some faculty members uncomfortable. Faculty members might also incorrectly believe that objectivity is legally defensible and judgments are not; however, both objectivity and judgments are defensible in court for determining student competency as long as the judgments are not arbitrary or capricious.11 Faculty "buy-in" will probably be an ongoing issue with regard to assessment due to periodic personnel turnover, which can be compounded further by any changes in the program's leadership. To help minimize "buy-in" problems, efforts should be made to determine faculty members' goals with respect to student learning and incorporate them into the school's culture.

There are several resource-related barriers to establishing an assessment program. Even with the best of intentions, the process of defining clear, specific criteria for the cognition, skills, and behaviors for each desired learning outcome can be time-consuming and difficult.17,55 Identifying the most appropriate assessment approaches to use for each type of outcome, the best analyses to use for the data, and how to assess and interpret the results can be a daunting task, especially for the many pharmacy faculty who lack background and education/training in these areas. Thus, adequate resources are needed to develop, implement, and maintain a sound program assessment plan. Schools/colleges located at institutions that have a school/college of education might call upon their education colleagues for assessment-related assistance and advice as needed.

A variety of other factors beyond learning outcomes and the curriculum per se also contribute to educational effectiveness. The interrelationships among input (eg, selection of students, budget, quality of faculty and graduate students who teach, physical resources), process (eg, goals/objectives, educational approaches, curriculum organization, course content, counseling), and output (eg, drop-out rate, employment statistics, graduates' actual abilities and values) factors are complex, and one factor is unlikely to completely explain another.55 They still need to be considered, though, when assessing program effectiveness. Collecting and interpreting data related to these factors involve resource considerations (eg, time, personnel).


Table 3. Questions to Address in a Program Assessment Plan55,62

1. Who will participate in the actual development and periodic review of the plan?
2. Who will participate in each of the various aspects of the plan?
3. Do they need any specific training or developmental assistance?
4. Which outcomes will be assessed?
5. What will be the timeframe for each step of the process, including when the outcomes will be assessed, in what order, and how frequently data will be obtained for each?
6. What assessment formats and methods will be used, and for which outcome(s)?
7. What are the criteria representing program "success" for each outcome measure?
8. How will assessment data be analyzed, and who will perform the analyses?
9. Who will receive and use the results, and how will this communication be accomplished?
10. How will the results best be used to implement changes?
11. How will the plan be administered – who will be responsible? Are funding or other resources needed and, if so, from where? Who will enter and store data? How will the data be secured?
12. How will the assessment plan be reviewed and modified on an ongoing basis (utility, feasibility, propriety, accuracy)?

Further, an institution's need to implement budget cuts might result in assessment efforts or their supportive personnel being targeted for elimination, reduction, or reassignment (in the case of personnel) if assessment is not sufficiently valued. Budgetary shortfalls reinforce the need to use sound assessment data to make curriculum changes.

Students might also represent a barrier to assessment activities. They can be resistant to or suspicious of assessment efforts if they do not understand the purpose (ie, to ultimately improve the educational process and their learning), if they are not sufficiently involved in assessment implementation and result interpretation,56 if the process is inefficient and requires a great deal of time, if it is unfair, or if it is unrealistic.62 Assessment processes should allow students to receive feedback about their performance, not only to increase their appreciation of the role of assessment but also to enhance their learning.31,59

Assessment-related barriers and challenges are not unique to health sciences education. Many academic disciplines struggle to develop learning outcomes and assessment plans, for reasons similar to, and in some instances different from, those of the health sciences. For example, creative arts faculty can have difficulty developing learning outcomes and performance criteria because of their desire to foster individual student expression. Faculty members in the liberal arts, in which not all majors complete the same coursework for graduation and where majors and non-majors are often enrolled in the same courses, can find it difficult to develop learning outcomes common to all majors and assessment strategies geared only to majors. Faculty members in some disciplines (including pharmacy) whose graduates are readily employed may not be easily convinced that assessment is necessary in their program (the "if it's not broke, don't fix it" philosophy). Despite the barriers and challenges, assessment efforts can be successful with careful planning, enthusiastic and accepting faculty, strong dedicated leaders, and appropriate support and resources.

Approaches and methods. Questions to address. How should a comprehensive program assessment plan be prepared? Consulting a variety of existing resources can be useful initially. Boyce developed a comprehensive document that contains much information about assessment plan development and operation that individual institutions can use.63 Trent published a useful overview of learning outcomes assessment planning as applied to the health sciences.55 In addition, Winslade published a summary of a system intended to be used as part of an institution's assessment plan, based upon a comprehensive analysis of the health sciences assessment literature with an emphasis on medicine.62

Following completion of the necessary first step, development of student learning outcomes, the next step in preparing an assessment plan involves considering the answers to several questions,55,62 which are summarized in Table 3. When answering these questions and developing a plan, consider the principles and characteristics of effective assessment strategies discussed above, as well as the barriers/challenges and techniques to minimize them.
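To make these planning questions concrete, the sketch below (again Python for illustration only; every outcome, schedule, format, criterion, and role named is hypothetical rather than drawn from this article) shows one way a program might record its answers to questions four through eight of Table 3 in a form that is easy to review and update:

    # Minimal sketch of one assessment plan entry, loosely following
    # Table 3, questions 4-8. All names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class OutcomePlan:
        outcome: str            # which outcome will be assessed (question 4)
        schedule: str           # when/how often data are obtained (question 5)
        formats: list           # assessment formats and methods (question 6)
        success_criterion: str  # what counts as program "success" (question 7)
        analyst: str            # who analyzes the data (question 8)

    plan = [
        OutcomePlan(
            outcome="Critically evaluate the drug literature",
            schedule="Each spring semester, P2 and P4 years",
            formats=["course-embedded rubric", "experiential rotation ratings"],
            success_criterion="at least 80% of students score 3+ on a 4-point rubric",
            analyst="assessment committee",
        ),
    ]

    for entry in plan:
        print(f"{entry.outcome}: {', '.join(entry.formats)} ({entry.schedule})")

Recording the plan in such an explicit, reviewable form makes it easier for a committee to spot outcomes with no assigned method or success criterion.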

Questions one through five in Table 3 are very important considerations; remember to start and proceed slowly to keep the process manageable. Question six in Table 3, selection of appropriate assessment formats and methods, represents a key, but often difficult, aspect of the program assessment plan. Addressing this question requires identifying the specific learning outcome(s) to be assessed, linking or matching an assessment format(s) (eg, written, verbal, simulations, authentic or performance assessments, projects) to the abilities and tasks represented by each outcome, and considering the method(s) for accomplishing this. Format selection should include consideration of validity (particularly the extent to which the format can predict future real-life practice), reliability (not only objectivity, ie, inter-rater reliability, but also generalizability or global reliability), educational impact (ie, how the assessment method will influence learning), feasibility (eg, efficiency, cost, resources), acceptability (students, faculty, external stakeholders), and applicability across programs (eg, benchmarking). It is important to realize that it is not the format per se that determines the level of competency being assessed (ie, knows, knows how, shows how, does), but rather the nature of the specific questions included within that format.62

Assessment formats. Written assessment formats include essays, true/false and multiple-choice questions (of varying forms), short answer and modified essay questions, extended matching items questions (ie, a list of up to 26 options with a short lead-in question or statement followed by a series of stems consisting of short case-based scenarios or vignettes for which the correct option is chosen), and key features questions (ie, a short realistic case description followed by questions, multiple choice or open-ended, that focus on only essential or critical decisions). Extended matching items and key features questions can have advantages related to validity, reliability, and the types of abilities assessed as compared with standard multiple-choice questions.62,64,65

The OSCE, progress testing (ie, students in each year of the program receive, multiple times per year, the same test reflecting the knowledge and applications expected upon graduation, or a mix of early stage and later stage items), and use of an assessment center represent administration methods, since each could incorporate a mix of assessment formats (eg, written or verbal, simulations). Progress testing has generally used true/false or multiple-choice formats, although other formats are possible. Any of these administration methods can have advantages or disadvantages, depending on the specific tasks and outcomes targeted and the formats selected.55,62,64,69

Other assessment formats include simulations (eg, papers, case-based tests, computer-based simulations, some PBL activities, models, simulated patients), portfolios, observational ratings, and exemplary products (eg, documented cases, research projects or papers, and other tangible evidence of students' work used to infer their ability).11,62 Techniques being explored for assessing professional competence in medicine that could be applicable to pharmacy include patient-conducted evaluations of students, portfolios containing videotapes of patient encounters, unannounced standardized patients in the clinical setting, and peer assessment of professionalism.66 The reader is encouraged to consult the recommended readings and references for more detailed information about the various assessment formats and their potential advantages/disadvantages.

Since observational ratings are used extensively during experiential rotations, pharmacy faculty members should be aware of the potential problems that exist with these assessments even when student learning outcomes are well-defined. The problems include validity and reliability concerns, potentially limited direct observation of certain skills during relatively short rotations, desired outcomes that might not be adequately discriminated by the ratings, susceptibility to a "halo effect" in which subsequent ratings are skewed by either a very good or bad performance on a previous item or task, ratings that are unduly influenced by students' communication skills and interpersonal relationships, and a possible time lag between when observations occur and when feedback is provided.11,17,55,62,64,67 Approaches to improve observational ratings as an assessment format include providing rater training and sufficient feedback to preceptors about their ratings,17 ensuring that rating forms are concise,62 incorporating a minimum number of documented observations by preceptors of certain activities,62 using encounter cards (that take only a minute or so to complete) designed to provide feedback to students each time a patient encounter is observed,67 and supplementing observational ratings with more objective exercises, tests, and the like.68

In summary, it is impossible to maximize every ideal characteristic of the formats and methods used in an assessment plan. Thus, trade-offs exist; for example, an employed method might be valid and reliable but not very feasible, or valid and feasible but not as reliable.11 As students advance through the program, multiple aspects of the profession are introduced into the curriculum, which increases the complexity of both the tasks and their assessment. In addition, since content influences student performance, good performance on one particular case study does not reliably predict performance on other cases involving different topics.19 Thus, all applicable dimensions should be measured by the assessment plan employed, ideally progressing from use of a variety of discrete, non-authentic assessments to integrated, authentic, and performance assessments.31 Finally, assessment plans should not focus primarily on survey use but should include both direct and indirect measures of learning.

SUMMARY

Student learning represents the heart of the existence of schools and colleges of pharmacy. Outcomes for student learning must therefore be the guiding force behind the content and format of pharmacy curricula. Assessment of the degree of student learning can also influence learning itself through the methods employed. Thus, it is clear that student learning outcomes, the curriculum, and program assessment are not only critical, but also interrelated, components of pharmacy schools/colleges. Schools and colleges must continually refine and update their student learning outcomes as appropriate to reflect state-of-the-art practice, develop a variety of educational experiences to assist students in maximally achieving these outcomes, and obtain and use valid, reliable assessment data to make changes in the curriculum. Many published and other resources are available to assist with the development and implementation of comprehensive program assessment plans; schools and colleges are urged to consult these resources for additional detailed information as needed.

RECOMMENDATIONS

Several recommendations are provided to facilitate the ongoing processes of curriculum development and learning outcomes assessment. These recommendations are targeted to schools/colleges and their faculty/administration, AACP, and ACPE.

Schools and Colleges

1. Ensure, to the extent possible, that administrators at all levels (provost, dean, assistant/associate deans, chairs) understand and communicate to others the value of assessment and the important relationship between assessment and learning. They should actively advocate, encourage, and support solid curriculum development and assessment practices in their programs.

2. Create an environment that motivates, develops, and supports teaching/learning advances, curricular change, and sound assessment practices. An institutional commitment to faculty development is necessary for advancing knowledge of educational theories, learning styles, outcomes assessment, and curricular enhancement. Schools/colleges should send faculty members to AACP Institutes as appropriate and encourage faculty to attend national meetings involving teaching/learning, curriculum, and assessment (eg, AACP and American Association for Higher Education [AAHE] meetings).

3. Institute a reward system for faculty members who develop innovative teaching practices to help students achieve learning outcomes. Similarly, appropriately reward or recognize (eg, teaching awards, consideration as part of promotion/tenure decisions) faculty members who provide evidence that their course assessment methods appropriately link with desired learning outcomes and who provide students with formative feedback and the opportunity to practice desired outcomes and skills.

4. Support the scholarship of teaching/learning, and add well-designed, documented, and evaluable assessment-related practices to the definition of the scholarship of teaching/learning.

5. Schedule regular meetings/discussions pertaining to teaching/learning and assessment at both the departmental and school/college levels so they become part of the institutional culture.

6. Determine prospective faculty candidates' teaching philosophy. Ask them to provide examples of their teaching and any innovative teaching or evaluation methods employed. To the extent feasible, hire candidates who express enthusiasm for teaching and interest in enhancing student learning.

7. Encourage new faculty members and allow them sufficient time to develop their teaching skills, to learn about key educational concepts and the education literature, and to develop desired learning outcomes and performance criteria for their teaching that are consistent with the school/college or department learning outcomes. If new faculty members are provided "release" time to develop their research or service areas, consider asking them to also use part of this time to formulate learning outcomes and educational approaches for the material they will teach.

8. Develop student learning outcomes for the program, identify the course(s) that will address each learning outcome, develop course instructional strategies consistent with the outcomes, and ensure that course assessment measures are consistent with the learning outcomes.

9. Consider use of multiple instructional activities and strategies as appropriate for specific outcomes to help accommodate and develop different student learning styles.

10. Determine students' baseline knowledge and skills (eg, through surveys, questionnaires, pre-tests) at the beginning of individual courses and experiential rotations. This will assist faculty members in building appropriately on prior knowledge and facilitate the development of students' individual knowledge (semantic) networks.

11. Invest necessary resources into developing solid assessment plans and measures. This should include assuring that faculty members assuming a leadership role in the program's assessment plan have sufficient time to devote to these activities. Allow a period of up to several years for the assessment plan to become fully developed and established, during which time resource needs are likely to be greater.

12. Seek assistance as needed from those with expertise in education and assessment (eg, faculty at a school/college of education if present at an institution) during assessment plan development. It is unrealistic to expect that faculty members with little to no background or training in this area and with little knowledge of psychometric measures will be able to develop and implement valid, comprehensive learning outcomes assessment plans without significant time, training/development, support, and guidance.

13. Enlist/hire other individuals as appropriate (eg, senior students to help with grading, practitioners, residents, graduate students) to assist with assessment efforts and minimize costs (eg, faculty resources, time constraints).

14. Ask faculty members, for each of their courses or areas of teaching responsibility, to identify possible course-embedded assessment strategies that could provide data useful to the program's overall assessment plan.

15. Include information/data about inputs, processes, and outputs as part of the assessment plan.

16. Include both direct and indirect measures in the assessment plan.

17. Use multiple measures to assess the achievement of learning outcomes. Include the assessment of attitudes/values, using a mix of measures as appropriate.

18. Select assessment methods appropriate for the school's/college's circumstances/needs; there is no one right method to use for obtaining student learning outcomes data. Strive to use existing methods with demonstrated validity and reliability where possible. Use external judgment, an assessment committee, and/or another internal review group to help establish the reliability/validity of institutionally developed tools and instruments. Supplement assessment methods that have lower reliability/validity with those of higher reliability/validity.

19. Examine overall and individual raters' consistency of ratings during experiential rotations, and examine ratings data for predictive value. Ensure that preceptors receive adequate training and appropriate feedback as raters. Consider incorporating additional assessment methods (eg, encounter cards, portfolios, specific exercises or assignments) within rotations to assist with formative as well as summative assessments.

20. Form focus groups to examine and provide input into any or all of the assessment-related areas.

21. Establish a standing external quality assurance group comprised of members from academia and practice, with specific charges (eg, to review/comment upon the program's assessment plan and processes, identify additional types of assessment data needed by the program, and review and comment on assessment findings/conclusions and any actions taken by the program based upon the data).

22. Ensure that the results from the assessment plan are shared with all stakeholders as appropriate and are used as indicated to make curricular changes (ie, completing the loop). Expend or reallocate resources as needed to remedy any significant student learning problems identified.

AACP

1. Work with the Association of American Medical Colleges (AAMC), the American Association of Colleges of Nursing (AACN), and the American Dental Education Association (ADEA) to help develop a common assessment language and terminology across health care disciplines.

2. Provide and support mechanisms by which schools/colleges can easily share with others their student learning outcomes and instructional strategies, including successes and failures. To help accomplish this, consider the development and maintenance of an "Outcomes, Teaching/Learning, Assessment Resources" section on the AACP web site. Resources provided in this section could include copies of student learning outcomes documents developed by schools/colleges; brief descriptions of instructional strategies or learning experiences used (both successfully and unsuccessfully) by schools/colleges, along with contact information for their faculty willing to serve as advisors/consultants to others; proceedings from the annual AACP Institutes; and links to other relevant web sites.

3. Facilitate the full publication of curriculum, teaching/learning, and assessment-related abstracts presented at the annual meetings. Alternatively, recommend or require the submission of a paper instead of an abstract for meeting presentation and publish the papers in a meeting program book.

4. Hold "assessment fairs" at the annual meetings, at which each school/college could be assigned a table/booth for sharing its assessment-related strategies. Interested faculty could then easily visit several schools/colleges to ask questions and gather information.

5. Facilitate and assist with the development, testing, validation, and refinement of assessment tools, instruments, surveys, and rubrics that can be used by multiple schools and colleges. Assist interested individual schools/colleges in learning how to validate internally developed tools, instruments, etc. Serve as a clearinghouse for the distribution of these assessment-related materials along with guidelines/recommendations for their appropriate use.

6. Work with the National Association of Boards of Pharmacy to provide schools/colleges with more detailed feedback about their graduates' performance on the licensing examination that could better assist with program assessment efforts. Examination performance data could be subdivided into additional categories corresponding to specific, individual outcomes/objectives (eg, literature evaluation skills, detection and evaluation of drug interactions, ability to select appropriate therapeutic agents).

7. Facilitate and assist with the evaluation of various instruments used to measure learning styles and with determining if, when, and how they should be used. Recommend and distribute specific ones for use by interested schools/colleges.

8. Facilitate and assist with the development of progress testing examinations that can be used early in the curriculum, late in the curriculum, and for various disciplines. Make these examinations available to schools/colleges for use as desired within their curricula or assessment plans. Consider establishing a center to which schools/colleges could confidentially or anonymously submit examination results, with compiled and analyzed data provided to those interested.

9. Obtain and distribute data on the uses and effectiveness of various technologies in pharmacy education.

10. Assist schools/colleges with the development of guidelines for incorporating the scholarship of assessment into the scholarship of teaching/learning.

11. Modify the current CAPE Educational Outcomes and establish a process for ensuring their revision/updating on a regular basis.

ACPE

Since ACPE has standards in place that address the following areas, it is recommended that focus continue to be placed on them:

1. Ensure that appropriate curriculum development and assessment cultures exist at schools/colleges. The administration at all levels should be both supportive of and knowledgeable about assessment.

2. Ensure that there is an institutional commitment to faculty development, with opportunities provided related to teaching/learning, curriculum, and assessment. Regularly scheduled department and school meetings, discussions, and workshops should be held related to curriculum, assessment, and the program's assessment plan.

3. Ensure that students' and appropriate external stakeholders' input is included throughout the various steps of a school's/college's assessment process.

4. Evidence of strong leadership and adequate resources/support should exist for the development and implementation of the assessment plan. All faculty should also be knowledgeable about and involved in the assessment process.

5. Ensure that schools/colleges have well-defined, appropriate student learning outcomes as well as course assessment methods that are consistent with the learning outcomes. Each course/experience should be able to indicate the learning outcomes it addresses, as well as how student mastery of those outcomes is assessed.

6. Ensure that the curriculum contains sufficient room to allow time for students to engage in reflection and to practice needed skills/abilities.

7. Ensure that schools/colleges have a solid, well-developed assessment plan that is re-examined and modified as needed on a continuing basis.

8. Evidence should exist that schools/colleges have initiated appropriate changes to improve student learning when assessment data indicate that deficiencies or weaknesses exist. Schools/colleges should also have a plan in place to assess the effectiveness of any changes made.

9. Consider the use of a tool such as "Levels of Implementation," available from the Higher Learning Commission of the NCA, to assist site evaluation teams in asking appropriate assessment-related questions of schools/colleges.

RECOMMENDED READINGS

Assessment - Methods
Beck DE. Performance based assessment: using preestablished criteria and continuous feedback to enhance a student's ability to perform practice tasks. J Pharm Pract. 2000;13:347-64.
Chambers DW, Glassman P. A primer on competency-based evaluation. J Dent Educ. 1997;61:651-66.
Fowell SL, Bligh JG. Recent developments in assessing medical students. Postgrad Med J. 1998;74:18-24.
Friedman Ben-David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE medical education guide no. 24: portfolios as a method of student assessment. Med Teach. 2001;23:535-51.
McMullan M, Endacott R, Gray MA, et al. Portfolios and assessment of competence: a review of the literature. J Adv Nurs. 2003;41:283-94.
Schuwirth LWT, van der Vleuten CPM. Written assessment. BMJ. 2003;326:643-5.
Selby C, Osman L, Davis M, Lee M. Set up and run an objective structured clinical exam. BMJ. 1995;310:187-90.
Smee S. Skill based assessment. BMJ. 2003;326:703-6.

Assessment - Reviews
Palomba CA, Banta TW. Assessment Essentials. San Francisco: Jossey-Bass Inc, Publishers; 1999.
Maddux MS. Institutionalizing assessment as learning within an ability based program. J Pharm Teach. 2000;7:141-60.
Trent AM. Outcomes assessment planning: an overview with applications in health sciences. J Vet Med Educ. 2002;29:9-19.
van der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Med Teach. 2000;22:246-50.
Winslade N. A system to assess the achievement of Doctor of Pharmacy students. Am J Pharm Educ. 2001;65:363-92.

Curriculum Development
Chalmers RK, Grotpeter JJ, Hollenbeck RG, et al. Ability-based outcome goals for the professional curriculum: a report of the focus group on liberalization of the professional curriculum. Am J Pharm Educ. 1992;56:304-9.
Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum Development for Medical Education: A Six Step Approach. Baltimore, MD: The Johns Hopkins University Press; 1998.
Prideaux D. Curriculum design. BMJ. 2003;326:268-70.
Zlatic TD. Abilities-based assessment within pharmacy education: preparing students for practice of pharmaceutical care. J Pharm Teach. 2000;7:5-27.

Educational Strategies
Barrows HS. A taxonomy of problem-based learning methods. Med Educ. 1986;20:481-6.
Cantillon P. Teaching large groups. BMJ. 2003;326:437-40.
Hurd PD. Active learning. J Pharm Teach. 2000;7:29-47.
Jaques D. Teaching small groups. BMJ. 2003;326:492-4.
Shatzer JH. Instructional methods. Acad Med. 1998;73:S38-S45.
Wood DF. Problem based learning. BMJ. 2003;326:328-30.

Learning Styles
Curry L. Cognitive and learning styles in medical education. Acad Med. 1999;74:409-13.
Vaughn L. Teaching in the medical setting: balancing teaching styles, learning styles and teaching methods. Med Teach. 2001;23:610-2. (Tables at: http://www.medicalteacher.org).

Regional Accreditation Organizations, Institutional Expectations
Middle States Association of Colleges and Schools (MSA). Available at: www.msache.org. Accessed August 22, 2003.
New England Association of Schools and Colleges (NEASC-CIHE). Available at: www.neasc.org. Accessed August 22, 2003.
North Central Association of Colleges and Schools (NCA-HLC). Available at: www.ncahigherlearningcommission.org. Accessed August 22, 2003.
Northwest Association of Schools, Colleges and Universities (NWA). Available at: www.nwccu.org. Accessed August 22, 2003.
Southern Association of Colleges and Schools (SACS). Available at: www.saccoc.org. Accessed August 22, 2003.
Western Association of Schools and Colleges (WASC-ACSCU). Available at: www.wascweb.org. Accessed August 22, 2003.

REFERENCES

1. Prideaux D. The emperor's new clothes: from objectives to outcomes. Med Educ. 2000;34:168-9.
2. National Center for Postsecondary Improvement (NCPI). A report to stakeholders on the condition and effectiveness of postsecondary education, part two: a respectable "B." Change. 2001;33:23-38.
3. Burd S. Will Congress require colleges to grade themselves? Chron Higher Educ. 2003;XLIX:A27.
4. Gardner DP, et al. A nation at risk: the imperative for educational reform. Washington DC: National Commission on Excellence in Education (US); 1983. Stock No. 065-000-00177-2. Superintendent of Documents. Washington DC: US Government Printing Office.
5. Involvement in learning: realizing the potential of American higher education. Final report of the Study Group on the Conditions of Excellence in American Higher Education. Washington DC: National Institute of Education (US); 1984. Stock No. 065-000-00213-2. Superintendent of Documents. Washington DC: US Government Printing Office.
6. Kellogg Commission on the Future of State and Land-Grant Universities. Returning to our roots: toward a coherent campus culture. Fifth report. An open letter to the presidents and chancellors of state universities and land-grant colleges, 2000. National Association of State Universities and Land-Grant Colleges. Available at: www.nasulgc.org/publications/Kellogg/Kellogg2000_Culture.pdf. Accessed August 25, 2003.
7. Wegner GR. Beyond Dead Reckoning: Research Priorities for Redirecting American Higher Education, 2002. National Center for Postsecondary Improvement. Available at: www.stanford.edu/group/ncpi/documents/pdfs/beyond_dead_reckoning.pdf. Accessed August 25, 2003.
8. American Council on Pharmaceutical Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the Doctor of Pharmacy degree. Adopted June 14, 1997. Available at: www.acpe-accredit.org/frameset_Pubs.htm. Accessed June 15, 2003.
9. Scott DM, Robinson DH, Augustine SC, Roche EB, Ueda CT. Development of a professional pharmacy outcomes assessment plan based on student abilities and competencies. Am J Pharm Educ. 2002;66:357-61.
10. Bouldin AS, Wilkin NE. Programmatic assessment in US schools and colleges of pharmacy: a snapshot. Am J Pharm Educ. 2000;64:380-7.
11. Chambers DW, Glassman P. A primer on competency-based evaluation. J Dent Educ. 1997;61:651-66.
12. Chalmers RK, Grotpeter JJ, Hollenbeck RG, et al. Ability-based outcome goals for the professional curriculum: a report of the focus group on liberalization of the professional curriculum. Am J Pharm Educ. 1992;56:304-9.
13. CAPE educational outcomes. Alexandria, VA: American Association of Colleges of Pharmacy; 1998.
14. Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum Development for Medical Education: A Six Step Approach. Baltimore, MD: The Johns Hopkins University Press; 1998.
15. Bloom BS, ed. Taxonomy of Educational Objectives, Handbook I: Cognitive Domain. New York: Longmans, Green; 1956.
16. Commission to Implement Change in Pharmaceutical Education. Background Paper II. Alexandria, VA: American Association of Colleges of Pharmacy; 1991.
17. Beck DE. Performance based assessment: using preestablished criteria and continuous feedback to enhance a student's ability to perform practice tasks. J Pharm Pract. 2000;13:347-64.
18. National Institute for Science Education. Field-tested Learning Assessment Guide. Madison, WI: University of Wisconsin – Madison; 2003. Available at: www.flaguide.org. Accessed June 10, 2003.
19. van der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Med Teach. 2000;22:246-50.
20. Grace M. Learning styles. Br Dent J. 2001;191:125-8.
21. Curry L. Cognitive and learning styles in medical education. Acad Med. 1999;74:409-13.
22. Vaughn L. Teaching in the medical setting: balancing teaching styles, learning styles and teaching methods. Med Teach. 2001;23:610-2. (Tables accessed at http://www.medicalteacher.org).
23. Hurd PD, Hobson EH. Pharmacy students' learning style profile: course and curricular implications [abstract]. Am J Pharm Educ. 1999;63(suppl):XXS.
24. Bouldin AS. Determination of learning style preferences at the University of Mississippi School of Pharmacy [abstract]. Am J Pharm Educ. 2002;66(suppl):XXS.
25. Cobb HH, Thomas PC, Schramm LC, Chisholm MA, Francisco GE. Learning style assessment of a second year pharmacy class at the University of Georgia College of Pharmacy [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.
26. Fox LM. Influence of student learning style on learning modality preference [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.
27. Anderson J. Tailoring assessment to student learning styles. AAHE Bulletin. 2001(March). Available at: aahebulletin.com/public/archive/archive.asp#assessment. Accessed August 25, 2003.
28. Shuck AA, Phillips CR. Assessing pharmacy students' learning styles and personality types: a ten-year analysis. Am J Pharm Educ. 1999;63:27-33.
29. Davis LE, Boyce EG, Blumberg P. Inventory of student approaches to learning and studying during an entry-level Doctor of Pharmacy program [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.
30. Kaufman DM. Applying educational theory in practice. BMJ. 2003;326:213-6.
31. Friedman Ben-David M. The role of assessment in expanding professional horizons. Med Teach. 2000;22:472-7.
32. Irby DM, Wilkerson L. Educational innovations in academic medicine and environmental trends. J Gen Intern Med. 2003;18:370-6.
33. Sprague JE, Christoff J, Allison JC, Kisor DF, Sullivan DL. Development and implementation of an integrated cardiovascular module in a PharmD curriculum. Am J Pharm Educ. 2000;64:20-6.
34. Chan ES, Monk-Tutor MR, Sims PJ. Progressive model for integrating learning concepts across pharmacy practice courses to avoid the learn-and-dump phenomenon [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.
35. Deloatch KH, Joyner PU, Raasch RH. Integration of general and professional abilities across the Doctor of Pharmacy curriculum at the University of North Carolina [abstract]. Am J Pharm Educ. 2001;65(suppl):XXS.
36. Wolf W, Besinque KH, Wincor MZ. Flexible and integrated approach to professional teaching: therapeutics module [abstract]. Am J Pharm Educ. 2000;64(suppl):XXS.
37. Stull R, Billeter M, Carter R, et al. Integrating the curriculum at Shenandoah University School of Pharmacy [abstract]. Am J Pharm Educ. 1998;62(suppl):XXS.
38. Modell HI. Preparing students to participate in an active learning environment. Am J Physiol. 1996;270:S69-S77.
39. Maudsley G. Do we all mean the same thing by 'problem-based learning'? A review of the concepts and a formulation of the ground rules. Acad Med. 1999;74:178-85.
40. Cisneros RM, Salisbury-Glennon JD, Anderson-Harper HM. Status of problem-based learning research in pharmacy education: a call for future research. Am J Pharm Educ. 2002;66:19-26.
41. Abate MA, Meyer-Stout PJ, Stamatakis MK, et al. Development and evaluation of computerized problem-based learning cases emphasizing basic sciences concepts. Am J Pharm Educ. 2000;64:74-82.
42. Carlson S. Are personal digital assistants the next must-have tool? Chron Higher Educ. 2002;49:A33-36.
43. Bertling CJ, Simpson DE, Hayes AM, et al. Personal digital assistants herald new approaches to teaching and evaluation in medical education [abstract]. Am J Pharm Educ. 2003;67(suppl):XXS.
44. Zarotsky V, Jaresko GS. Technology in education – where do we go from here? J Pharm Pract. 2000;13:373-81.
45. Piascik P. What if we approached our teaching like we approach our research? Am J Pharm Educ. 2002;66:461-2.
46. Beck DE. Pharmacy educators: can an evidence-based approach make your instruction better tomorrow than today? Am J Pharm Educ. 2002;66:87-8.
47. Harden RM. Curriculum mapping: a tool for transparent and authentic teaching and learning. Med Teach. 2001;23:123-37.
48. Draugalis JR, Slack MK, Sauer KA, Haber SL, Vaillancourt RR. Creation and implementation of a learning outcomes document for a Doctor of Pharmacy curriculum. Am J Pharm Educ. 2002;66:253-60.
49. Scott DM, Roche EB, Augustine SC, Robinson DH, Ueda CT. Assessment of students' abilities and competencies using a curriculum mapping procedure. Am J Pharm Educ. 2001;65(suppl):XXS.
50. Zavod RM, Zgarrick DP. Appraising general and professional ability based outcomes: curriculum mapping project. Am J Pharm Educ. 2001;65(suppl):XXS.
51. Bouldin AS, Wilkin NE, Wyandt CM, Wilson MC. General and professional education abilities: identifying opportunities for development and assessment across the curriculum. Am J Pharm Educ. 2001;65(suppl):XXS.
52. Maki P. Using multiple assessment methods to explore student learning and development inside and outside of the classroom. Washington DC: National Association of Student Personnel Administrators; 2002. Available at: http://www.naspa.org/NetResults/article.cfm?ID=558. Accessed June 29, 2003.
53. Astin AW. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: Macmillan/American Council on Education; 1991.
54. Pascarella ET, Terenzini PT. How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass Inc, Publishers; 1991.
55. Trent AM. Outcomes assessment planning: an overview with applications in health sciences. J Vet Med Educ. 2002;29:9-19.
56. Banta TW. Moving assessment forward: enabling conditions and stumbling blocks. New Directions for Higher Educ. 1997;25:79-91.
57. James Madison University. Characteristics of an effective assessment program. Harrisonburg, VA: Center for Assessment and Research Studies; 2003. Available at: http://www.jmu.edu/assessment/effect.shtml. Accessed June 10, 2003.
58. Vroeijenstijn MA. Quality assurance in medical education. Acad Med. 1995;70(suppl):S59-S67.
59. Fowell SL, Southgate LJ, Bligh JG. Evaluating assessment: the missing link? Med Educ. 1999;33:276-81.
60. NCA/The Higher Learning Commission. Assessment of student academic achievement: levels of implementation. Chicago: NCA/The Higher Learning Commission; 2002. Available at: www.higherlearningcommission.org/resources/assessment/index.html. Accessed June 20, 2003.
61. Palomba CA, Banta TW. Assessment Essentials. San Francisco: Jossey-Bass Inc, Publishers; 1999.
62. Winslade N. A system to assess the achievement of Doctor of Pharmacy students. Am J Pharm Educ. 2001;65:363-92.
63. Boyce EG. A guide for doctor of pharmacy program assessment. Alexandria, VA: American Association of Colleges of Pharmacy; 2000. Available at: http://www.aacp.org/Docs/MainNavigation/Resources/5416_pharmacyprogramassessment_forweb.pdf. Accessed August 31, 2003.
64. Fowell SL, Bligh JG. Recent developments in assessing medical students. Postgrad Med J. 1998;74:18-24.
65. Schuwirth LWT, van der Vleuten CPM. Written assessment. BMJ. 2003;326:643-5.
66. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226-35.
67. Hatala R, Norman GR. In-training evaluation during an internal medicine clerkship. Acad Med. 1999;74:S118-S20.
68. Cunnington J. Evolution of student assessment in McMaster University's MD programme. Med Teach. 2002;24:254-60.
69. Purkerson DL, Mason HL, Chalmers RK, Popovich NG, Scott SA. Expansion of ability-based education using an assessment center approach with pharmacists as assessors. Am J Pharm Educ. 1997;61:241-8.
70. James Madison University. Dictionary of Student Outcome Assessment. Harrisonburg, VA: Center for Assessment and Research Studies; 2003. Available at: http://www.jmu.edu/assessment/aresource.shtml. Accessed June 30, 2003.
71. Wilson M, Sloane K. From principles to practice: an embedded assessment system. Appl Meas Educ. 2000;13:181-208.
72. Maddux MS. Institutionalizing assessment as learning within an ability based program. J Pharm Teach. 2000;7:141-60.


Appendix 1. Glossary of Terms

Assessment: The ongoing systematic process that involves the development of student learning outcomes and the collection, review, and use of information about those outcomes for the purpose of improving student learning and development and to enhance program quality. Similar terms seen in literature: Evaluation, Student learning outcome assessment, Programmatic assessment. (Refs 61, 70)

Authentic evaluation or assessment*: Determination of knowledge or performance in a context that resembles actual practice. (Refs 11, 17)

Competence: What students are able to achieve and demonstrate in an environment that replicates conditions found in a real-life practice setting. Similar term: Performance. (Refs 55, 62)

Course-embedded assessment: Determining the extent of student learning in a program by obtaining, analyzing, and interpreting data generated at the course level. Similar terms: Course-based assessment, Embedded assessment. (Refs 61, 71)

Direct measures: Methods that require students to actually display or demonstrate the extent of their learning. (Refs 64, 70)

Evaluation: A measure of or conclusion about the extent of student learning that can be formative or summative. Similar term: Assessment. (Refs 55, 70)

Formative evaluation or assessment*: Providing feedback for improvement and not for making final decisions, accountability, or to affect student progress through a course or curriculum. (Refs 64, 70)

Generalizability: Global reliability that includes inter-rater reliability as well as other contributing factors. (Ref 62)

Goals: Broad or general expectations for learning concepts or student learning. (Refs 55, 70)

Indirect measures: Obtaining reflections or opinions about student learning without actual demonstration. (Refs 64, 70)

Performance: Routine, repeated engagement in professional activities in an actual real-life practice setting. Similar term: Competence. (Refs 55, 62)

Performance criteria: Specific descriptions and expectations of what students need to know or do to successfully achieve a learning outcome that also serve as the basis for making judgments about achievement. (Ref 72)

Performance evaluation or assessment*: Measures to determine students' abilities and skills that are undertaken in real-life, non-simulated settings. Similar terms: Competency evaluation, Competency assessment. (Ref 62)

Rubric: A tool used for scoring or evaluating work that specifies the criteria used to differentiate levels of performance (eg, what is needed to achieve a score of excellent [4] vs. good [3] vs. satisfactory [2] vs. poor [1]). Similar terms: Checklist, Rating scale. (Ref 70)

Student learning outcome: The specific knowledge, skills, and behavior/attitudes/values that students are expected or intended to achieve through their college experience. Similar terms: Objective, Goal, Outcome, Educational outcome, Ability-based outcome, Competency, Programmatic outcome. (Refs 55, 70)

Summative evaluation: A sum total or final product measure of achievement at the end of an instructional unit or course of study that can affect a student's progress in a course or curriculum. Similar term: Summative assessment. (Refs 64, 70)

Value-added: A positive change in learning outcomes measured following instruction or educational experiences; the difference in students' knowledge, skills, and attitudes before vs. after completing the educational experience. Similar terms: Learning gain, Longitudinal change. (Ref 70)

*Use of the term "assessment" or "evaluation" should depend upon the context in which it is being used (assessment = ongoing, systematic, generates findings that are used to make changes; evaluation = measure of or conclusion about learning).