

SIZZLE: What's Hot in Assessment
APRIL 2010 · VOLUME 2, ISSUE 4

SPECIAL POINTS OF INTEREST:
Assessment Coordinators in Full Force
Assessment In and Around UK
What is a Rubric?
Quick Assessment Facts
New LOAC
Upcoming Assessment Workshops

Assessment Coordinators in Full Force

The Learning Outcomes Assessment Coordinators (LOAC) group has grown in membership since its inception in January 2009. The group originally had eight (8) coordinators from six (6) colleges and two (2) academic support units. The Office of Assessment organized LOAC meetings to facilitate both adherence to the Provost's Learning Initiative (PLI) deadlines and the dissemination of information about assessment best practices. At this time, all 17 colleges and four (4) academic support departments are represented (see Table 1).

The major responsibilities of LOACs include consultation and training, college and program assessment, and data analysis and reporting. LOACs often provide assessment consultation services to faculty, staff, and administrators on topics such as best practices and discipline-specific strategies for assessment of student learning at the course and program levels. They also report assessment results and use evidence drawn from these results to improve student learning. For the past year, LOACs have been working with programs, departments, and administrative units to articulate learning outcomes, map curricula, and design data collection instruments and approaches for documenting assessment and continuous improvement of student learning.

Because their efforts in college-wide and program-specific assessment activities are valuable in promoting ongoing compliance with accreditation requirements as well as other aspects of quality assurance and improvement, LOACs are also playing an active role in the preparations for the SACS reaffirmation of accreditation in 2013.

Currently, the monthly LOAC meetings serve as an important communication venue for the Office of Assessment on many fronts: following up on deliverables for the PLI, preparing for SACS reaffirmation, and ensuring that UK's various colleges and support units have the information and resources they need to analyze and interpret student learning data and to formulate and implement learning improvement action plans. Through these collaborative coordinating efforts, the Office of Assessment hopes to establish and nurture a university-wide culture of assessment.

Table 1. List of Learning Outcomes Assessment Coordinators


PAGE 2 · VOLUME 2, ISSUE 4

Assessment In and Around UK

In this issue, we are highlighting assessment activities from the School of Journalism and Telecommunications (JAT) and UK Campus Recreation. Professor Scoobie Ryan, from JAT, talks about nontraditional methods of assessment in her article, Creative Ways of Assessing Student Learning. She tells the story of how these methods can provide evidence of student learning as well as track students' progress in skill development. And it all started with a "good idea." In her article, New Tricks for Old Dogs, Kathy Rose, UK Campus Recreation Facilities Director, shares her department's experience in using new tools for assessing recreational programs and services for UK students. Kathy also describes the assessment process in the department, from how they developed an assessment instrument to how they are using the gathered information to devise and implement an improvement plan that provides better services to UK students.

Creative Ways of Assessing Student Learning
By Scoobie Ryan

Professional programs, such as those in the School of Journalism and Telecommunications, may seem to have an edge in the assessment process because their accreditation depends on setting measurable goals, creating a system to track those goals, and often designating someone to keep track of the process. The School of Journalism and Telecommunications must meet nine specific core standards and competencies set by the Accrediting Council on Education in Journalism and Mass Communication (ACEJMC), its accrediting agency, or the School will not be reaccredited. But our students do many things, and we are learning to document those "other things." Some of the projects, activities, and research that you do in your departments may be so ingrained in your discipline that you do not think to assess them or even see them as products suitable for assessment. Colleagues from other departments come to us with "good ideas" for collaboration. Generally that means extra work for faculty, but it results in rewarding experiences for students. Also, our students enter regional and national contests. It is likely that many of you have been approached to collaborate, and just as likely that your students enter academic contests. Are you setting goals as your collaborations continue and tracking the progress of student success in contests?

An example of "a good idea" assessment

Three years ago, Professor Robert Farley of the Patterson School of Diplomacy walked into my office in early February, introduced himself, and described the annual simulation the Patterson School constructs for its students. It involves an international incident. Students are assigned roles in the State Department, the CIA, the federal government, the U.S. military, and whatever international players may be involved. It is a complex, daunting scenario. Dr. Farley asked if journalism students could join the group, thus introducing media to the mix of challenges presented to his students. The simulation was about two weeks away. I promised to do my best, but in the end could only convince colleagues to offer students extra credit. One or two students dropped by during the 24-hour exercise, asked questions, and wrote stories for class. The Kernel did a story, and one broadcast student interviewed Dr. Farley and Ambassador Carey Cavanaugh; that story ran on what was then our JAT News. We talked afterward and agreed that if journalism students were to be more involved, faculty needed to be involved much earlier. Our goal: to get involved earlier and to have enough students adequately prepared to staff the simulation without making any major mistakes.

Year two: Dr. Farley emailed in August, just after the stars aligned and the School was able to hire the amazing, creative, and energetic Kakie Urch as an assistant professor specializing in new media. She jumped at my plea to help out with this project and began talks and meetings immediately with Dr. Farley. Students from three journalism classes participated. Two blogs were constructed, one to simulate CNN and one to simulate Al-Jazeera. A total of 18 news stories plus a summary were posted to INN, International News Network. Also posted were one map, eight photos, and five videos. We had one request to clarify a story. A total of five stories, two photos, and one video were posted to GNN, Gulf News Network.

The goals for 2010: 1) to post more video; 2) to set up a consistent system to receive information; 3) to have students write more, even if the stories were brief or headlines; 4) to manage the sites better so there would not be such a disparity of stories between the two sites.

Year three: February 2010. Professor Urch spent more time on collaboration. The blogs were up and ready to go before the simulation began. Each had a front page for news as well as separate video and photo galleries. Students volunteered based on what they had heard from those who worked the previous year. (Instructive and fun!) I was assigned to set up Gmail accounts to manage incoming information. (Continued on page 3)


PAGE 3

New Tricks for Old Dogs
By Kathy Rose

Formal program assessment has become the standard for university campus recreation departments nationwide that are striving to better serve their students. The University of Kentucky Campus Recreation Department has taken up this gauntlet and is running full speed ahead into the assessment realm…well, as fast as our wobbly legs can carry us. Some of the skills required in assessment, such as creating valid assessment instruments, data collection, and data interpretation, are fairly new skills for our well-seasoned staff. Still, we have persevered, and with the help of several UK experts, we have developed an assessment tool that targets improvement of recreational programs and services for our students.

The assessment tool we created is an in-house survey aptly entitled The UK Campus Recreation Survey. Our purpose for creating the tool was twofold: 1) to determine user satisfaction with existing programs and services, with the intention of improving these, and 2) to determine barriers for students who do not utilize our programs and services. The instrument was modeled after a 2008 Student Voice Recreational Impact Survey, which we customized for our department. Alex Henchy, our doctoral research associate, provided input and used a program called Qualtrics to create the online survey. We printed paper copies of the survey and tested it on a class of 25 Kinesiology and Health Promotion (KHP) students. We were pleased that all students completed the survey within our timeframe of 10-15 minutes. Incidentally, the test group was a Research Methods class. Both the KHP students and our department reaped benefits from this activity.

After using the test group's input to tweak the survey, we went live with the online version on February 12, 2010. The online survey was sent to a random sample of 2,500 UK students. Obtaining the sample was a bit of a challenge, but we were able to obtain student information from the Registrar's Office with the help of Todd Brann, Associate Director of Admissions/Registrar. Each student received an email stating that the purpose of the survey was to obtain students' opinions about future improvement of the Johnson Center and other campus recreation programs and services. Our goal was to have at least a 10% response rate. We were close to this goal, as we received 237 completed surveys (about a 9.5% response rate).

We are now in the process of interpreting the data. Most responses are straightforward and uncomplicated. Examples of responses are: the Johnson Center lacks adequate parking, needs more stretching areas and more lockers, and students have trouble scheduling time to exercise. Responses have also indicated that our department is providing a valuable service for our students. Participation in our programs and use of our facilities improve students' concentration, stress management, weight management, and overall health. Students enjoy participating in our programs and services, and they think that what we offer improves the quality of life at UK. Currently, Alex is working with the data to determine if there are any special relationships within subgroups. Once we have a thorough understanding of the data, we look forward to devising and implementing an improvement plan that will provide even better services for our students. And maybe we will have learned a little bit more about assessment ourselves to prepare us for the next round. Not bad for a bunch of old dogs!

(Continued from page 2)

Results: 36 news stories were posted to INN, plus a summary of the simulation; 10 photos on the front page, two graphics, one map, and three videos. GNN had 34 news stories posted, one editorial, one photo, and one video. There were four videos posted in GNN's video gallery. The discrepancy between the number of news stories posted on INN and GNN virtually disappeared. We met the goals of posting more video; the email/iChat system of receiving information worked; and students wrote more. We have goals for year four, but that is not the point. The point is that it is likely someone in your department has a unique project involving students and can chart it. If you can document it, you can assess it.

An example of a contest assessment

The Hearst Journalism Awards Program is an annual three-pronged (newswriting, photojournalism, broadcast journalism) contest under the auspices of the Association of Schools of Journalism and Mass Communication and funded by the William Randolph Hearst Foundation. Our students enter every month, and we track their progress. Approximately 113 schools are eligible for the contest (only students from ACEJMC-accredited programs may enter). Our goal has been to have a top 10 finish each year. In 2006, UK finished 6th in Newswriting and 8th in Photojournalism; in 2007, 5th in Newswriting and 8th in Photo; in 2008, 8th in Photo; in 2009, 5th in Newswriting and 7th in Photo. In the current competition (covering the 2009-10 academic year), our students finished 6th in Photo and are currently 7th in Newswriting, and one of our students has a first-place newswriting prize.

What contests are available to your students? Are they entering them, and are you tracking their progress? If not, why not? If there is a national or regional organization that can judge student work, find it and drive student work to it. These organizations will be doing your assessment for you. All you need to do is keep track of the results.


PAGE 4

What is a Rubric?

Curriculum-embedded, performance-based assessment, the approach adopted by UK's Office of Assessment, employs rubrics as the primary measurement device. Because the use of rubrics to assess learning at the program level is new to many UK faculty, OA staff are frequently asked to explain basic rubric construction and how rubrics function in program-level assessment.

Many faculty are familiar with using rubrics at the course level to clearly communicate expectations while decreasing time spent grading. A course-level rubric is a tool that explicitly and systematically lists scoring criteria and their performance indicators. For example, a rubric designed to evaluate an essay assignment might help students understand that their work will be judged on theme, organization, details, voice, and grammar. Moreover, such a rubric should also provide performance indicators (concise descriptions of the observable characteristics of each point on the rubric scale) for each criterion. Under mechanics, the rubric might define the lowest score category as "numerous misspellings, grammatical and punctuation errors," and the highest score category as "all words spelled correctly, virtually no grammatical or punctuation errors." Thus, course-level rubrics benefit both instructors and students: they enable instructors to grade fairly and consistently, reduce the time spent on grading, and communicate clearly and unambiguously to students the reasons their work received a particular grade.

The usefulness of rubrics has been established not only for course-level pedagogy and evaluation but for program-level assessment as well. In this sense, rubrics can serve both as a valid and reliable means of gathering quantitative data on complex performances of learning and as a clear articulation of how student performance is linked to specific courses and programs. In addition, using rubrics at the program level can increase consistency in grading across sections, courses, programs, or even colleges. Because the Office of Assessment's primary charge is program-level assessment, our energies are focused on helping faculty gain familiarity with and/or develop rubrics for program-level assessment purposes. Such rubrics are organized around a single learning outcome. Discrete outcome criteria are generally listed on the Y-axis, with a four- to six-point scale of performance on the X-axis. At the intersection of each criterion and performance level are performance indicators: descriptions of observable behaviors that precisely describe each point on the scale for each criterion. Please refer to the adjacent diagram (not reproduced here) to locate each element on the example rubric.
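Because the example diagram is not reproduced here, the short Python sketch below shows one way the same structure can be written down: outcome criteria as rows, a four-point performance scale as columns, and a performance indicator at each intersection. The outcome, criteria, and most indicator wording are invented for illustration (the two mechanics indicators echo the text above); this is not an Office of Assessment rubric.

# Hypothetical program-level rubric for a single learning outcome:
# "Students communicate effectively in writing."
# Keys are criteria (rows); inner keys are points on a 1-4 scale (columns);
# each cell holds the performance indicator for that criterion and level.
writing_rubric = {
    "organization": {
        1: "No discernible structure; ideas appear in random order.",
        2: "Some grouping of ideas, but transitions are missing or abrupt.",
        3: "Clear overall structure with mostly effective transitions.",
        4: "Logical, purposeful structure; transitions guide the reader throughout.",
    },
    "mechanics": {
        1: "Numerous misspellings, grammatical and punctuation errors.",
        2: "Frequent errors that sometimes obscure meaning.",
        3: "Occasional errors that do not obscure meaning.",
        4: "All words spelled correctly; virtually no grammatical or punctuation errors.",
    },
}

# An evaluator locates the indicator that best matches the student work:
print(writing_rubric["mechanics"][4])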

Rubrics vs. Scoring Guides

A scoring guide is a very basic form of rubric that articulates criteria and a scoring scale, but not performance indicators. Because they lack the precise descriptors that help guide an evaluator's decision, scoring guides result in unacceptably low (<0.70) inter-rater reliability (i.e., a lack of consistency between evaluators/evaluations). On the other hand, any rubric is better than no rubric, so using a scoring guide will result in more consistency in scoring or grading than not using any rubric at all.
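To make the inter-rater reliability idea concrete, here is a minimal illustrative sketch that compares two raters' scores on the same set of student work. The rater data are hypothetical, and the exact-agreement rate and hand-computed Cohen's kappa are stand-ins for whatever reliability coefficient a program actually adopts; the newsletter does not specify which statistic the 0.70 threshold refers to.

from collections import Counter

def exact_agreement(scores_a, scores_b):
    # Proportion of samples on which the two raters assigned the same score.
    matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return matches / len(scores_a)

def cohens_kappa(scores_a, scores_b):
    # Agreement corrected for chance; values below roughly 0.70 are often
    # treated as too inconsistent for high-stakes decisions.
    n = len(scores_a)
    observed = exact_agreement(scores_a, scores_b)
    counts_a, counts_b = Counter(scores_a), Counter(scores_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(scores_a) | set(scores_b))
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two raters using the same 4-point scoring guide.
rater_1 = [4, 3, 2, 4, 1, 3, 2, 2]
rater_2 = [4, 2, 2, 3, 1, 3, 3, 2]

print(exact_agreement(rater_1, rater_2))  # 0.625 for this sample data
print(cohens_kappa(rater_1, rater_2))     # about 0.48, i.e. well below 0.70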

Types of Rubrics

Assessment professionals generally classify rubrics into three basic types: analytic, holistic, and "hybrid." In an analytic rubric, each performance indicator is assigned a numerical value. The final score is the sum of the indicator values; sometimes other arithmetic operations are performed on that sum, such as dividing it by the number of criteria. Analytic rubrics generally result in unacceptably low levels of inter-rater reliability (this can be mitigated through rigorous norming/calibration and the use of anchor samples) and, perhaps more importantly, are quite labor- and time-intensive. Analytic rubrics are most useful when the purpose is to "drill down" deeply into the data.

Holistic rubrics differ significantly from analytic rubrics in that they are used essentially to evaluate the "big picture": instead of assigning point values to each and every performance indicator and then summing them, an evaluator who uses a holistic rubric assigns a single score to the whole sample/performance of learning. The advantages of a holistic rubric include acceptable inter-rater reliability (very high when conventional norming/calibration and anchor samples are used) and quick, relatively easy scoring. The major disadvantage is that the single score masks exact performance on each indicator. Holistic rubrics are best used for large-scale or quick assessments.
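As a concrete illustration of the arithmetic described above, the sketch below scores one hypothetical student artifact with a small analytic rubric (sum the indicator values, optionally divide by the number of criteria) and contrasts that with a holistic score, which is a single judgment for the whole artifact. The criteria, scale, and numbers are invented for illustration and are not drawn from an actual UK rubric.

# Hypothetical analytic rubric: each criterion is scored on a 1-4 scale.
analytic_scores = {
    "thesis": 3,
    "organization": 4,
    "evidence": 2,
    "mechanics": 3,
}

total = sum(analytic_scores.values())       # sum of the indicator values
average = total / len(analytic_scores)      # optional: divide by number of criteria

print(f"Analytic total: {total}")           # 12
print(f"Analytic average: {average:.2f}")   # 3.00

# A holistic rubric instead assigns one score to the whole performance,
# so the per-criterion detail recorded above is not captured.
holistic_score = 3
print(f"Holistic score: {holistic_score}")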

(Continued on page 8)


PAGE 5 · VOLUME 2, ISSUE 4

Quick Assessment Facts

This section of Sizzle! features assessment data that addresses three questions: "Who are our students?"; "What are our students learning?"; and "How effective is the Office of Assessment in supporting assessment and improvement of learning at UK?"

Who are our students? Student Characteristics, Fall 2009

(Charts not reproduced. Panel titles: Geographic Distribution of Degree-Seeking Students; Student Level and Enrollment Status.)

Source: University of Kentucky College Portrait, http://www.collegeportraits.org/KY/UK/characteristics

By the Numbers

79% of UK freshmen plan to do a practicum, internship, co-op experience, field experience, or clinical assignment.
55% of UK seniors completed a practicum, internship, co-op experience, field experience, or clinical assignment.
44% of UK freshmen already did community service or volunteer work.
61% of UK seniors did community service or volunteer work.
40% of UK freshmen are undecided about working on a research project with a faculty member outside of course or program requirements.
49% of UK seniors do not plan to work on a research project with a faculty member outside of course or program requirements.
32% of UK freshmen plan to work on a research project with a faculty member outside of course or program requirements.
24% of UK seniors worked on a research project with a faculty member outside of course or program requirements.

Source: National Survey of Student Engagement, 2009, http://www.uky.edu/IRPE/students/surveys/nsse.html


PAGE 6

SIZZLE

What have our students learned at UK?

The National Survey of Student Engagement (NSSE) is a product of systematic but indirect studies of student learning and development, linked empirically to student experiences and behaviors over the past years. NSSE surveys first-year and senior students at participating baccalaureate-granting colleges and universities to assess the extent to which they engage in and are exposed to proven educational practices that are believed to correspond to desirable learning outcomes. The survey is administered in the spring term, and results from the NSSE may be used to improve the undergraduate experience. At UK, student engagement represents two important aspects of collegiate quality, namely, the amount of time and effort students put into their studies and other meaningful activities, and how the institution deploys resources and organizes its curriculum and other learning opportunities. Moreover, student engagement correlates with student learning and retention. By itself, NSSE supplies only indirect measures of student learning, but the results may be useful in refining the general education goals. Below are graphs of mean scores for perceived institutional contributions to students' knowledge, skills, and personal development in specific areas, as reported by first-year and senior students for the years 2001 to 2009. The mean is based on a scale from 1 to 4, where 1 = Very little and 4 = Very much. For more information about NSSE, please visit http://nsse.iub.edu/

(Graphs not reproduced. Panel titles below.)

Mean Scores of Extent of Institution's Contribution to Acquiring Job or Work-Related Knowledge and Skills

Mean Scores of Extent of Institution's Contribution to Thinking Critically and Analytically

Mean Scores of Extent of Institution's Contribution to Knowledge and Skills in Analyzing Quantitative Problems

Mean Scores of Extent of Institution's Contribution to Understanding People of Other Racial and Ethnic Backgrounds

Source: UK Institutional Research, Planning, & Effectiveness, http://www.uky.edu/IRPE/students/surveys/nsse.html


PAGE 7 · VOLUME 2, ISSUE 4

How Effective is the Office of Assessment?

In the first quarter of 2010, the Office of Assessment facilitated nine (9) assessment workshops and five (5) special workshops for individual colleges and/or departments. A total of 88 faculty and staff attended the various assessment seminars, while 49 participated in workshops specially designed for their respective college or department. Participants were asked to provide feedback through workshop evaluations. After completing a workshop, each participant was asked whether they understood what their respective unit needs to do to meet the Provost's Learning Initiative timeline and whether they found the content presented useful. A summary of responses is shown in the accompanying pie charts (not reproduced here).

New Learning Outcomes Assessment Coordinator (LOAC): WELCOME!

Dr. Anna Bosch, Associate Dean for Undergraduate Programs in the College of Arts and Sciences, is representing Arts and Sciences on the UK Learning Outcomes Assessment Committee. As a faculty member in Linguistics and English since 1990, she has been a member of various committees on undergraduate education, including the Undergraduate subcommittee of the University Committee on Academic Planning and Priorities (UCAPP), the President's Initiative on Undergraduate Education, the University Studies Program Committee, and the A&S College Council. In spring 2009, Dr. Bosch was also a member of one of the General Education Curricular Teams. As Director of the Linguistics Program from 1997 to 2006, she revised the requirements for the undergraduate Linguistics program and had full responsibility for advising majors in the program. With a long-standing commitment to issues of undergraduate education, Dr. Bosch's active engagement in all aspects of undergraduate education will prove invaluable in her role as a Learning Outcomes Assessment Coordinator.

Dr. Bosch earned a Bachelor of Arts from Wesleyan University in Linguistics and the College of Letters. She received her Master of Arts and PhD in Linguistics from the University of Chicago. She has taught in France as a language instructor and has done field research in Scotland. In 1990, she came to UK as an assistant professor in English and Linguistics. In 1996, she became an associate professor, and the following year she was given the position of Linguistics Program Director, a position she held for nine years. Her research and teaching interests include phonological theory, dialectology, field methods, endangered and minority languages, and the pedagogy of linguistics. Dr. Bosch is married to an English professor and has twin 10-year-old boys.


PAGE 8 · VOLUME 2, ISSUE 4

SACS Reaffirmation of Accreditation Timeline

November 2009
QEP Pre-Planning Team established: Representative group of university faculty, staff, and administrators.

December 2009
QEP Pre-Planning Team meetings: Purpose is to develop a process that will ensure broad university input into the selection of an acceptable Quality Enhancement Plan (QEP) or topic.

January 2010
QEP Topic Selection Plan finalized: The timeline for the topic selection team will be completed.

January 2010 – December 2010
QEP Topic Selection Team established: Widely representative group conducts research, determines, and writes an acceptable QEP or topic.

June 13, 2011
Orientation of Leadership Teams: Team provides oversight for the Compliance Review and the Quality Enhancement Plan (QEP).

September 10, 2012
Compliance Certification Due: A written review and analysis of UK's compliance with the Core Requirements and Comprehensive Standards (replaces the Self Study).

November 2012
Off-Site Peer Review Conducted: A SACS-appointed team meets off campus to review UK's Compliance Certification and supporting documentation. This team will make a preliminary determination regarding compliance with the Core Requirements and Comprehensive Standards.

December 2012 – January 2013
Quality Enhancement Plan Due (and optional focused report): A clear and succinct review of the Quality Enhancement Plan is delivered to the Commission on Colleges. This document includes a review of the process used to develop the QEP, the determination of the topic, the desired student learning outcomes, a literature review and best practices, the actions to be implemented, a timeline, the QEP organizational structure, a listing of resources, and an assessment plan.

January 2013 – April 2013
On-Site Peer Review Conducted: A SACS-appointed team will visit UK to assess the university's compliance with the Core Requirements and Comprehensive Standards. The team will also determine the acceptability of UK's Quality Enhancement Plan. This group will provide a report to UK with analysis and advice concerning UK's reaffirmation.

December 2013
Review by the Commission on Colleges: The group will review the findings of the Reaffirmation Committee and UK's response to the findings. This group will determine UK's reaffirmation status.

What is a Rubric? (Continued from page 4)

The third type of rubric, the hybrid rubric, is a fairly recent innovation that may have the most to offer assessment in the majors and course-level assessment. Hybrid rubrics essentially combine the best elements of analytic and holistic rubrics while tempering the disadvantages of each. One useful approach to constructing a hybrid rubric is to designate some rows (or columns) that address program-wide outcomes (such as general education learning outcomes) as holistic, while the remaining rows (or columns), which address specific aspects of the assignment or course learning, are treated as analytic. The two scores can be calculated separately for program-level assessment purposes, but also combined to produce a single grade for course-level evaluation.
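The sketch below illustrates one way this two-score idea could work in practice: the holistic portion yields a single program-level outcome score, the analytic portion yields per-criterion points for the assignment, and the two can be reported separately or combined into one course grade. The criteria, scale, and 50/50 weighting are hypothetical and are not an Office of Assessment specification.

# Hypothetical hybrid rubric result for one student artifact.

# Holistic portion: one 1-4 score for the program-level outcome.
holistic_outcome_score = 3

# Analytic portion: assignment-specific criteria, each scored 1-4.
analytic_scores = {"research question": 4, "method": 3, "citation format": 2}
analytic_total = sum(analytic_scores.values())   # 9 of a possible 12
analytic_max = 4 * len(analytic_scores)

# Report separately for program-level assessment...
print("Program outcome score:", holistic_outcome_score)
print("Assignment score:", analytic_total, "/", analytic_max)

# ...or combine into a single course grade (illustrative 50/50 weighting).
course_grade_pct = 100 * (0.5 * holistic_outcome_score / 4
                          + 0.5 * analytic_total / analytic_max)
print(f"Combined course grade: {course_grade_pct:.0f}%")   # 75% here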

The purpose of this discussion was to answer some of the most common questions about rubrics and their use in the assessment of learning. For your lingering questions and concerns, the Office of Assessment is pleased to announce that it now offers a special sequence of workshops on rubric development and refinement. The Rubrics I workshop focuses on the basic elements of a rubric and how to begin rubric development. The Rubrics II workshop will be offered to individual programs and colleges by special request and will be tailored to meet your specific rubric needs. Please contact Dr. Marsha Watson ([email protected]), Tara Rose ([email protected]), or Leah Simpson ([email protected]) to register for a Rubrics I workshop or to schedule a Rubrics II workshop for your college, department, or unit.


Mission

The mission of the Office of Assessment is to provide university-wide support for assessment of student learning, planning, and continuous improvement activities at the course, program, and institutional levels, and to develop and sustain across the university community a culture of assessment.

Values

Continuous improvement of student learning

Shared responsibility, shared resources, and collaboration

Use of technology to achieve efficiency

Responsiveness and communication

Rigorous, authentic, and useful assessment data

Upcoming Assessment Workshops

www.uky.edu/IRPE/assessment.html

For more information, please contact us at:
Office of Assessment
311 Patterson Office Tower
University of Kentucky
Lexington, KY 40506
Phone: 859.257.7086
Fax: 859.323.3999

Or email us:
Tara Rose, Assessment Specialist II, [email protected]
Leah Simpson, Assessment Specialist I, [email protected]
Natasha Mamaril, Graduate Assistant, [email protected]
Jill Priesmeyer, Graduate Assistant, [email protected]

For more information about the workshops, including online registration, please email Dr. Marsha Watson at [email protected] or visit our website: http://www.uky.edu/IRPE/assessment.html

MAY

Date: Friday, May 7
Time: 1:30-2:45 pm
Location: 359 Student Center
Title: Using Evidence to Improve Student Learning: Part II

The Office of Assessment is asking for article contributions for future issues of Sizzle: What's Hot in Assessment. We are launching a new section in April where we highlight college and department assessment activities. The focus of the section will be on highlighting the variety and strength of assessment methods, strategies, plans implemented, and other assessment-related information of interest to your colleagues in UK's colleges and departments. If you would like to contribute an article highlighting the assessment activities in your college or department in the May issue, please email your article to the Office of Assessment by May 3, 2010. Thank you.