
Developing Rubrics

Dr. Jennifer E. Roberts Coordinator of Academic Assessment

Office of Institutional Research, Planning, and Assessment Northern Virginia Community College

[Diagram: Assessment at NOVA – the assessment cycle: Identify Student Learning Outcomes → Curriculum Mapping → Methods of Assessment → Gather Evidence → Use Results]

Steps to Assess Student Learning Outcomes

1. Identify student learning outcomes for your program.

2. Determine practices used to achieve outcomes through curriculum mapping.

3. Determine methods of assessment.

4. Gather evidence.

5. "Close the loop":

• Review and interpret results

• Recommend actions

• Make changes

• Measure effectiveness of changes

Student learning outcomes for today's workshop

After attending today's workshop, you will be able to:

1. Describe what a rubric is.

2. Discuss various types of rubrics.

3. Identify components of and steps to developing a rubric.

4. Construct a rubric.

What is a rubric?

• A standardized scoring guide

• Identifies important criteria and levels of success for each criterion

• Describes qualitative as well as quantitative differences

• Generally used to assess assignments, projects, portfolios, term papers, internships, essay tests, performances, etc. – "performance" assessment

From Pickering, "Creating Rubrics & Prompts"

Presenter
Presentation Notes
This slide covers the basics of what a rubric is. Qualitative = poorly, well, excellent. Quantitative = 1-3 errors, 4-6 errors, 7+ errors. Creating Rubrics & Prompts, J. Worth Pickering, Old Dominion University, and Jean M. Yerian, Virginia Commonwealth University. Re-Opening the Assessment Toolbox, VAG Spring Drive-In Workshop, March 18, 2005.

What is a rubric?

• A table that identifies and describes various levels of student performance for each of a set of criteria.

• A method of rating student work in a more objective manner.

• A kind of scorecard that breaks down a written or demonstrated assignment into manageable, observable pieces.

• A way to consistently assess student work to determine whether a program is meeting its goals.

http://www.web.virginia.edu/iaas/assessment

Presenter
Presentation Notes
Highlighted here in green are some additional key terms for defining rubrics. Consistency across just one instructor (the beginning of the grading session vs. 3 hours later or vs. 2 days later), and consistency across several instructors The “Nuts and Bolts” of Implementing An Assessment Plan http://www.web.virginia.edu/iaas/assessment

What is a rubric?

• A scheme for evaluating student work along certain dimensions

• Specific skills or aspects of a learning outcome

• Concrete descriptors of levels of performance

• Good for measuring higher-order skills or outcomes not easily measured by multiple-choice tests (e.g., oral communication, integration)

From Kasimatis, "Scoring Rubrics"

Presenter
Presentation Notes
Highlighted here in green are some additional key terms for defining rubrics. The previous workshop discussed how rubrics are usually more appropriate for assignments that require more than a right or wrong answer, but rather a judgment about quality. Scoring Rubrics Margaret Kasimatis, PhD VP for Academic Planning & Effectiveness Loyola Marymount University – L.A.

What is a rubric?

• It is a way of organizing criteria to systematically determine, based on data, whether the outcome is met.

• Used when a judgment of quality is required.

• Because it is a pre-defined scheme for the evaluation process, the subjectivity involved in evaluating an essay becomes more objective.

From Zelna, Rubrics 101: A Tool to Assess Learning

Presenter
Presentation Notes
Highlighted here in green are some additional key terms for defining rubrics. A rubric defines a continuum of quality. Rubrics 101: A Tool to Assess Learning Carrie Zelna, Ph.D. Director of Student Affairs Planning, Assessment, Research and Retention NC State University

Advantages of Scoring Rubrics

• Good for measuring higher-order skills or evaluating complex tasks.

• Summaries of results can reveal patterns of student strengths and areas of concern.

• Can be unobtrusive to students.

• Can generate great discussions of student learning among faculty, especially regarding expectations.

• Grading is more objective, unbiased, and consistent.

Presenter
Presentation Notes
Finding patterns in student achievement or student errors can be helpful. Using a rubric to assess student learning does not have to affect the assignment itself and therefore would not affect students. However, some find it useful to provide students with the rubric ahead of time so that they know what is expected of them. Scoring Rubrics Margaret Kasimatis, PhD VP for Academic Planning & Effectiveness Loyola Marymount University – L.A.

Rubrics can be used to assess:

• Essays/Papers
• Projects
• Labwork
• Presentations
• Exam questions
• Capstone projects
• Exhibits
• Performances
• Portfolios of student work
• Artwork
• Internships

Types of Scoring Instruments

• Checklists: a list of grading criteria that are completed/present

• Rating Scales: include a continuum for scoring

• Holistic Rubrics: give an overall description of the entire product/performance rather than of the components

• Analytic Rubrics: levels of performance are described for each criterion

Presenter
Presentation Notes
Two categories: (1) Checklist – whether the criteria are present; (2) Rating scale/rubric – how well the criteria are met.

Types of Scoring Instruments

[Graphic: taxonomy of scoring instruments. Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).]

Presenter
Presentation Notes
Graphic for previous slide. Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).

Checklists

Checklists are an appropriate choice for evaluation when the information that is sought is limited to the determination of whether specific criteria have been met.

See handout #1 – Checklist for writing SLOs. (From Moskal, 2000)

Presenter
Presentation Notes
 When SLO Lead Faculty submitted their program’s SLOs, a checklist was used to evaluate the statements. The checklist did not address the quality of the criteria, but rather if the criteria were met or not.   http://pareonline.net/getvn.asp?v=7&n=3 Moskal, Barbara M. (2000). Scoring rubrics: what, when and how?. Practical Assessment, Research & Evaluation, 7(3).

Checklists

Checklists are an appropriate choice for evaluation when the information that is sought is limited to the determination of whether specific criteria have been met.

Scoring Rubrics

Scoring rubrics are based on descriptive scales and support the evaluation of the extent to which criteria have been met. (From Moskal, 2000)

Presenter
Presentation Notes
http://pareonline.net/getvn.asp?v=7&n=3 Moskal, Barbara M. (2000). Scoring rubrics: what, when and how?. Practical Assessment, Research & Evaluation, 7(3).

Holistic Rubrics

• When there is an overlap between the criteria set for the evaluation of the different factors, a holistic scoring rubric may be preferable to an analytic scoring rubric. In a holistic scoring rubric, the criteria are considered in combination on a single descriptive scale (Brookhart, 1999).

• Holistic rubrics require the teacher to score the overall process or product as a whole, without judging the component parts separately (Nitko, 2001).

Presenter
Presentation Notes
http://pareonline.net/getvn.asp?v=7&n=3 Moskal, Barbara M. (2000). Scoring rubrics: what, when and how?. Practical Assessment, Research & Evaluation, 7(3). http://pareonline.net/getvn.asp?v=7&n=25 Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).

Holistic Rubrics

• Holistic rubrics are customarily utilized when errors in some part of the process can be tolerated provided the overall quality is high (Chase, 1999).

• The focus of a score reported using a holistic rubric is on the overall quality, proficiency, or understanding of the specific content and skills (Mertler, 2001).

See handout #2 – Holistic Rubric

Presenter
Presentation Notes
Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25). Retrieved October 21, 2009, from http://pareonline.net/getvn.asp?v=7&n=25

Analytic Rubrics

• Analytic rubrics result initially in several scores, followed by a summed total score; their use represents assessment on a multidimensional level (Mertler, 2001).

• Scores are given for separate, individual parts of the product or performance. (A brief tallying sketch follows the notes below.)

Presenter
Presentation Notes
http://pareonline.net/getvn.asp?v=7&n=25 Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
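To make "several scores, followed by a summed total score" concrete, here is a minimal sketch in Python (not part of the original presentation; the criteria names and scores are illustrative, borrowed from the cookie activity later in the workshop):

```python
# Illustrative sketch only: tallying an analytic rubric.
# Each criterion is scored separately on a 1-4 scale, then summed.
from typing import Dict

def analytic_total(scores: Dict[str, int], max_level: int = 4) -> int:
    """Sum per-criterion scores after checking each falls on the 1..max_level scale."""
    for criterion, score in scores.items():
        if not 1 <= score <= max_level:
            raise ValueError(f"{criterion}: score {score} is outside the 1-{max_level} scale")
    return sum(scores.values())

# Hypothetical student scored on the cookie rubric used later in this workshop.
student = {
    "Number of chocolate chips": 3,
    "Texture": 4,
    "Color": 2,
    "Taste": 3,
    "Richness (flavor)": 4,
}
print(analytic_total(student))  # 16 out of a possible 20
```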

Analytic Rubrics

• To give students feedback on how to improve their performance, analytic rubrics provide descriptions at each level of what is expected.

• Each score category should be defined using descriptions of the work rather than judgments about the work (Brookhart, 1999).

• Judgment: "Student's calculations are good."

• Description of work: "Student's mathematical calculations contain no errors."

See handout #3 – Analytic Rubric

Presenter
Presentation Notes
http://pareonline.net/getvn.asp?v=7&n=3 Moskal, Barbara M. (2000). Scoring rubrics: what, when and how?. Practical Assessment, Research & Evaluation, 7(3).

Steps in Developing Rubrics

Steps in Developing a Rubric

1. Decide if one is measuring the presence of criteria or the quality of criteria.

• Presence = Checklist

• Quality = Rubric

Steps in Developing a Rubric

2. Determine what the evaluation criteria should be.

• Break the SLO into manageable parts.

• Identify observable attributes of the SLO.

• Decide on the criteria that are essential to demonstrating achievement of the SLO.

• Criteria will often number between 3 and 8.

Break the SLO into Manageable Parts

Some examples:

• Leadership: communication, decision making, motivation, etc.

• Sportsmanship: cooperates with officials, remains calm when interacting with the opposing team, no foul language, etc.

• Active Listening Skills: sits leaning slightly forward, makes eye contact, nods, asks open-ended questions, etc.

• Problem Solving Skills: identifies the problem, identifies the available options, recognizes the consequences of each option, etc.

From Zelna, Rubrics 101: A Tool to Assess Learning

Presenter
Presentation Notes
Rubrics 101: A Tool to Assess Learning Carrie Zelna, Ph.D. Director of Student Affairs Planning, Assessment, Research and Retention NC State University

Steps in Developing a Rubric

3. Determine what the performance levels should be and how many.

• Consider the anchors first – best and worst.

• Then determine how many different levels lie in between, so that each level is still distinct from the next.

• The number of levels is usually between 3 and 5.

• Use both qualitative terms (see next slide) and quantitative point values for the performance levels.

See handout #4 – Labels for Performance Levels

Presenter
Presentation Notes
Some prefer to have an even number of performance levels, thus eliminating the tendency to go with the middle level when not sure. The more levels you have, the more detailed the results will be when looking at trends. For instance, if a rubric has 3 levels, the difference in average scores may not be as distinct as when a rubric has 5 or 6 levels and therefore improvements or decreases in performance may not be as noticeable with fewer levels. That was VCCS’s logic to have 6 levels for the rubric used to assess written communication.

Steps in Developing a Rubric

4. Provide descriptions for each level.

• For holistic rubrics, write thorough narrative descriptions incorporating each criterion into the description.

• For analytic rubrics, write descriptions of the performance levels for each individual criterion.

• Be consistent with terminology and the means by which the criteria are evaluated.

• Use non-judgmental terminology.

Presenter
Presentation Notes
Consistency is addressed in following slides.

Consistency Across Performance Levels

• Consistency of the Attributes in Performance Criteria Descriptors: "Although the descriptor for each scale point is different from the ones before and after, the changes concern the variance of quality for the (fixed) criteria, not language that explicitly or implicitly introduces new criteria or shifts the importance of the various criteria." (Wiggins, 1998)

• The performance levels provide a continuum of quality as it relates to specific components of the criteria.

Presenter
Presentation Notes
A rubric should not mention components of a criterion in one level that do not get mentioned in the other levels. http://pareonline.net/getvn.asp?v=9&n=2 Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Example of Inconsistent Performance Criteria and Correction for Science Journal

Performance Criteria: Science Journal (problem criterion)

• Novice (1): Writing is messy and entries contain spelling errors. Pages are out of order or missing.

• Apprentice (2): Entries are incomplete. There may be some spelling or grammar errors.

• Master (3): Entries contain most of the required elements and are clearly written.

• Expert (4): Entries are creatively written. Procedures and results are clearly explained. Journal is well organized and presented in a duotang.

Presenter
Presentation Notes
Here is an example of a rubric with various components of being an effective science journal. See next slide for more on components. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Example of Inconsistent Performance Criteria and Correction for Science Journal

(The same problem criterion as above, with its individual components picked out:)

messy writing – spelling – pages – entry completion – grammar – clarity – creativity – procedures/results – organization/presentation

Presenter
Presentation Notes
Each new component is highlighted in a new color. There appear to be 9 components of an effective science journal, yet not all of them are addressed at each performance level. An instructor could find "messy writing" confusing – is that messy organization? messy sentence structure? When grading for assessment purposes, those questions should not come up. See next slide for a revised example. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Suggested Correction for Consistent Performance Criteria

Breadth: The required elements are present in each journal entry (e.g., Lab Summary, Materials, Procedure, Results, Conclusion).

• Novice (1): Few of the required elements are present in each journal entry.
• Apprentice (2): Some of the required elements are present in each journal entry.
• Master (3): Most of the required elements are present in each journal entry.
• Expert (4): All the required elements are present in each journal entry.

Clarity: The entries are clearly written (e.g., style and grammar enhance understanding).

• Novice (1): Journal entries are slightly clear.
• Apprentice (2): Journal entries are moderately clear.
• Master (3): Journal entries are mainly clear.
• Expert (4): Journal entries are extremely clear.

Organization: The journal is organized (e.g., visible titles, ordered pages, etc.).

• Novice (1): The journal is slightly organized.
• Apprentice (2): The journal is moderately organized.
• Master (3): The journal is mainly organized.
• Expert (4): The journal is extremely organized.

Presenter
Presentation Notes
With this rubric, what an effective science journal means is first broken down into 3 main criteria: breadth, clarity, and organization. The criteria cover the components from the previous slides. The first criterion relates to how complete the journal is, the second to the writing style (which is further defined in the criterion), and the third to organization (again, additional details of what organization means are supplied in the criteria column). Qualitative terms are used here to differentiate the various performance levels (e.g., few – some – most – all). http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Example of Inconsistent Performance Criteria for Assessing Silent Reading Skills

Performance Criteria: Silent Reading (problem criterion)

• Emerging (1): Off task and disruptive during the sustained silent reading period.

• Developing (2): Has difficulty choosing books for sustained silent reading.

• Achieving (3): Reads independently during sustained silent reading.

• Extending (4): Chooses books with enthusiasm and reads independently during sustained silent reading.

Presenter
Presentation Notes
Here is another example of a rubric with inconsistent terminology. See next slide for more info. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Example of Inconsistent Performance Criteria for Assessing Silent Reading Skills

(The same problem criterion as above, with its individual components picked out:)

task orientation – disruptiveness – ease of choosing books – independent reading – enthusiasm for choosing books

Presenter
Presentation Notes
Each new component is highlighted in a new color. There appear to be five components of silent reading skill, yet not all of them are addressed at each performance level. Again, an instructor may find some descriptions confusing. For example, when it comes to choosing books, are "difficulty" and "enthusiasm" referring to the same thing? Can a student choose a book with enthusiasm but still find the choice challenging? Can a student choose a book with ease but with no enthusiasm? When grading for assessment purposes, those kinds of questions should not come up. See next slide for a revised example. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Suggested Correction for Consistent Performance Criteria:

1. If reading ability is the target, rethink the criterion to ensure that the attribute is meaningful.

2. If learning behaviors are being measured, and autonomy and attention are the desired attributes, reword the descriptors as shown below.

Autonomy and Attention: Student reads independently and stays on task during a silent reading period.

• Emerging (1): Student seldom reads independently and stays on task for little of the time during a period of silent reading.

• Developing (2): Student sometimes reads independently and stays on task some of the time during a period of silent reading.

• Achieving (3): Student usually reads independently and stays on task most of the time during a silent reading period.

• Extending (4): Student always reads independently and stays on task all of the time during a silent reading period.

Presenter
Presentation Notes
It was decided that this rubric criterion would concentrate on how autonomous the students read and how task-oriented they are. Additionally, perhaps it was decided that evaluating the difficulty or the enthusiasm with which a student chooses a book was not easily measurable. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Guiding Questions for the Rubric Construction Process

1. Are all the performance criteria explicitly stated? Are the performance criteria present in the rubric those intended? Is there anything that is implicitly expected in the students' products or performances that is not stated in the rubric?

2. Are the attributes consistently addressed from one level to the next on the progression scale? Is the rubric addressing the same attributes for each student's product or performance across the levels? Does the value of the attribute vary in each level descriptor, while the attribute itself remains consistent across the scale levels?

From Tierney & Simon, 2004.

Presenter
Presentation Notes
When examining the consistency of a rubric, review these two questions. http://pareonline.net/getvn.asp?v=9&n=2  Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).

Activity

Developing a rubric step-by-step.

Activity – Step 1 for All Rubrics

1. Decide if one is measuring the presence of criteria or the quality of criteria.

To evaluate: chocolate chip cookies = quality of criteria

Activity – Step 2 for an Analytic Rubric

Criteria (each with a Score column still to be filled in):

• Number of chocolate chips
• Texture
• Color
• Taste
• Richness (flavor)

Presenter
Presentation Notes
Step 2 – Determine the evaluation criteria. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 3 for an Analytic Rubric

Performance levels added across the top: Poor (1), Needs Improvement (2), Good (3), Delicious (4), plus a Score column.

Criteria:

• Number of chocolate chips
• Texture
• Color
• Taste
• Richness (flavor)

Presenter
Presentation Notes
Step 3 – Determine the performance levels. You could first think of the best and worst and then how many levels are in between. For instance, the worst cookies result in a reaction of “yuck,” the best cookies result in a reaction of “mouth-watering.” In between there could be just one more level (with a reaction of “heh, it’s ok”), or there could be two more levels (the second-best results in reaction of “pretty good, I will continue to eat these,” and the second-worst level results in a reaction of “I don’t need to spit it out, but I can take or leave these cookies”). http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 4 for an Analytic Rubric (anchor levels first)

Number of chocolate chips
• Poor (1): Too few or too many chips
• Delicious (4): Chocolate chip in every bite

Texture
• Poor (1): Texture resembles a dog biscuit
• Delicious (4): Chewy

Color
• Poor (1): Burned
• Delicious (4): Golden brown

Taste
• Poor (1): Store-bought flavor, preservative aftertaste – stale, hard, chalky
• Delicious (4): Home-baked taste

Richness (flavor)
• Poor (1): Nonfat contents
• Delicious (4): Rich, creamy, high-fat flavor

Presenter
Presentation Notes
Step 4 – Provide descriptions for each level. Again, it may be helpful to start with the best and the worst. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 4 for an Analytic Rubric (all levels filled in)

Number of chocolate chips
• Poor (1): Too few or too many chips
• Needs Improvement (2): Chocolate in 50% of bites
• Good (3): Chips in about 75% of bites
• Delicious (4): Chocolate chip in every bite

Texture
• Poor (1): Texture resembles a dog biscuit
• Needs Improvement (2): Texture either crispy/crunchy or 50% uncooked
• Good (3): Chewy in middle, crisp on edges
• Delicious (4): Chewy

Color
• Poor (1): Burned
• Needs Improvement (2): Either dark brown from overcooking or light from undercooking
• Good (3): Either brown from overcooking or light from being 25% raw
• Delicious (4): Golden brown

Taste
• Poor (1): Store-bought flavor, preservative aftertaste – stale, hard, chalky
• Needs Improvement (2): Tasteless
• Good (3): Quality store-bought taste
• Delicious (4): Home-baked taste

Richness (flavor)
• Poor (1): Nonfat contents
• Needs Improvement (2): Low-fat contents
• Good (3): Medium fat content
• Delicious (4): Rich, creamy, high-fat flavor

Presenter
Presentation Notes
Step 4 – Provide descriptions for each level. And then fill in the remaining levels. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Analytic Rubric – Check for Consistency

(The same completed cookie rubric as above, reviewed to confirm that every component of each performance criterion is addressed at every level; a brief automated-check sketch follows the notes below.)

Presenter
Presentation Notes
And check for consistency. Make sure that each component of a performance criterion gets addressed at each level. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics
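As a supplement (not from the original slides), the same consistency check can be automated once a rubric is stored as data. Below is a minimal Python sketch using an abbreviated, illustrative version of the cookie rubric, with one descriptor deliberately left out so the check fires:

```python
# Illustrative sketch: confirm every criterion has a descriptor at every performance level.
LEVELS = ["Poor", "Needs Improvement", "Good", "Delicious"]

rubric = {
    "Number of chocolate chips": {
        "Poor": "Too few or too many chips",
        "Needs Improvement": "Chocolate in 50% of bites",
        "Good": "Chips in about 75% of bites",
        "Delicious": "Chocolate chip in every bite",
    },
    "Texture": {
        "Poor": "Texture resembles a dog biscuit",
        "Good": "Chewy in middle, crisp on edges",
        "Delicious": "Chewy",
        # "Needs Improvement" left out on purpose so the check below fires.
    },
}

for criterion, descriptors in rubric.items():
    missing = [level for level in LEVELS if level not in descriptors]
    if missing:
        print(f"'{criterion}' is missing descriptors for: {', '.join(missing)}")
```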

Activity – Step 2 for a Holistic Rubric

Criteria:

• Number of chocolate chips
• Texture
• Color
• Taste
• Richness (flavor)

Presenter
Presentation Notes
Step 2 – Determine the evaluation criteria. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 3 for a Holistic Rubric

Levels (descriptions still to be written):

• Poor (1)
• Needs Improvement (2)
• Good (3)
• Delicious (4)

Presenter
Presentation Notes
Step 3 – Determine the performance levels. You could first think of the best and worst and then how many levels are in between. For instance, the worst cookies result in a reaction of “yuck,” the best cookies result in a reaction of “mouth-watering.” In between there could be just one more level (with a reaction of “heh, it’s ok”), or there could be two more levels (the second-best results in reaction of “pretty good, I will continue to eat these,” and the second-worst level results in a reaction of “I don’t need to spit it out, but I can take or leave these cookies”). http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 4 for a Holistic Rubric (anchor levels first)

• Poor (1): Too few or too many chocolate chips; texture resembles a dog biscuit; burned; store-bought flavor with a preservative aftertaste – stale, hard, chalky; non-fat contents

• Needs Improvement (2): (to be written)

• Good (3): (to be written)

• Delicious (4): Chocolate chip in every bite; chewy; golden brown; home-baked taste; rich, creamy, high-fat flavor

Presenter
Presentation Notes
Step 4 – Provide descriptions for each level. Again, it may be helpful to start with the best and the worst. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Activity – Step 4 for a Holistic Rubric (all levels filled in)

• Poor (1): Too few or too many chocolate chips; texture resembles a dog biscuit; burned; store-bought flavor with a preservative aftertaste – stale, hard, chalky; non-fat contents

• Needs Improvement (2): Chocolate chips in 50 percent of the bites taken; texture is either crispy/crunchy from overcooking or doesn't hold together because it is at least 50 percent uncooked; either dark brown from overcooking or light from undercooking; tasteless; low-fat content

• Good (3): Chocolate chips in about 75 percent of the bites taken; chewy in the middle but crispy on the edges; either brown from overcooking or light from being 25 percent raw; quality store-bought taste; medium fat content

• Delicious (4): Chocolate chip in every bite; chewy; golden brown; home-baked taste; rich, creamy, high-fat flavor

Presenter
Presentation Notes
Step 4 – Provide descriptions for each level. And then fill in the remaining levels. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Holistic Rubric – Check for Consistency

(The same completed holistic rubric as above, reviewed to confirm that each criterion – number of chocolate chips, texture, color, taste, and richness (flavor) – is addressed at every level.)

Presenter
Presentation Notes
And check for consistency. Make sure that each component of a performance criterion gets addressed at each level. The following are addressed in all levels and no components are addressed in only some of the levels. Number of chocolate chips (pink) – Texture (green) - Color (blue) - Taste (purple) -Richness (flavor) (red) http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4522.html#why_rubrics

Improve Validity of Rubric

• Validity refers to the degree to which the evidence supports that these interpretations are correct and that the manner in which the interpretations are used is appropriate (American Educational Research Association, American Psychological Association & National Council on Measurement in Education, 1999).

• Ensure that the rubric is producing evidence for the intended SLOs.

• Write appropriate prompts/directions for the assignment.

• Number each criterion, and then write the number of the criterion by the part of the prompt for which that criterion is used.

• One can then see if any criterion does not address the prompt, or if part of the prompt does not have an explicit criterion.

Presenter
Presentation Notes
As your program works on its rubrics, it will seek to improve their validity. This is to ensure that you are testing what you want to be testing.

Improve Validity of Rubric

Content:

1. Do the evaluation criteria address any extraneous content?

2. Do the evaluation criteria of the scoring rubric address all aspects of the intended content?

3. Is there any content addressed in the task that should be evaluated through the rubric, but is not?

Construct:

1. Are all of the important facets of the intended construct evaluated through the scoring criteria?

2. Are any of the evaluation criteria irrelevant to the construct of interest?

Presenter
Presentation Notes
Questions to ask to improve validity

Improve Reliability of Rubric

1. Evaluators should meet together for a training session.

2. One or more examples of student work should be examined and scored.

3. Discuss the scores and make decisions about conflicts that arise.

4. More than one faculty member should score the student work.

5. If two faculty members disagree significantly (more than 1 point on a 4-point scale), a third person should score the work. (A sketch of this check follows the notes below.)

6. If frequent disagreements arise about a particular item, the item may need to be refined or removed.

7. Provide sample papers to instructors with completed rubrics.

http://www.web.virginia.edu/iaas

Presenter
Presentation Notes
Your program will also want to make sure that the rubric results are consistent across instructors.
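A minimal sketch of the disagreement check in step 5 (the one-point tolerance on a 4-point scale comes from the slide; the function and the paired scores are hypothetical):

```python
# Illustrative sketch: flag samples where two raters disagree by more than
# one point on a 4-point scale, so a third rater can score them.

def needs_third_rater(score_a: int, score_b: int, tolerance: int = 1) -> bool:
    """True when the two raters' scores differ by more than the allowed tolerance."""
    return abs(score_a - score_b) > tolerance

# Hypothetical paired scores (rater A, rater B) for five student papers.
paired_scores = [(3, 4), (2, 4), (1, 1), (4, 2), (3, 3)]

flagged = [i for i, (a, b) in enumerate(paired_scores, start=1) if needs_third_rater(a, b)]
print(f"Send papers {flagged} to a third rater")  # papers 2 and 4
```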

Testing Your Rubric

• Use a rubric to evaluate your rubric.

• Ask a colleague/peer to review the rubric and provide feedback.

• Ask a student to review the rubric and provide feedback.

• Test it out on a few student samples.

• Use multiple raters to norm the rubric.

Presenter
Presentation Notes
Once your program has developed a rubric, the program will want to ensure that it is as effective as possible. The program could evaluate the rubric using a rubric (see both the holistic and analytic rubrics in the handouts). The program could ask someone who was not involved with developing the rubric to review it, test it out, and provide feedback. Students could also test-drive the rubric; they may notice where it is unclear how the work is graded. The program may do a small pilot or simply test out the rubric by applying it to a small number of samples. More than one person should be involved in testing out the rubric.

Collecting, Reviewing, Interpreting Data

• To collect data:

• Consider pre-programming a spreadsheet so data can be entered and analyzed during the reading and participants can discuss results immediately.

• To review the data:

• Average scores across raters (if you used two raters).

• Aggregate those scores across students for each criterion.

• Aggregate the total score across students.

• Review frequencies for each criterion.

• Present the data in a user-friendly way and discuss what it means.

• To interpret the data, determine how good is good enough. For example:

• At least 80% of the students score at least 3.

• No more than 5% of students are at the "unsatisfactory" level and at least 80% are at the "good" level.

• The average score for each criterion must be at least 3 (out of 4).

(A brief data-handling sketch follows the notes below.)

Presenter
Presentation Notes
After your program has developed a rubric that has been evaluated to be effective (thus completing Step #3 in the assessment loop), the program will be ready to move to Step #4 of the loop, gathering the evidence. To make the gathering as efficient as possible, consider various means for getting the results turned in. For example, an Excel spreadsheet would provide consistency in format and would make consolidating the data easier. For Step #5 of the assessment loop, the program will review and interpret the data. If the program uses more than one rater, the average score for each sample could be entered into the spreadsheet. The program will want to look at the average overall score, but it is also important to review the average score for each criterion. This is particularly helpful if the results are not meeting the program's expectations; the program can then review where things broke down. Some may find it helpful to see not just the average scores, but how many samples scored at each level for each criterion. No matter which data are reviewed, it is important to use a user-friendly method and to discuss the results. If a program gathers the data but does not discuss the results, the loop has not been completed and all that hard work was wasted. When interpreting the data, the program will have to decide on a target achievement level (or criterion for success). The target level could relate to a percentage of students performing at a certain level (or levels) or to a minimum average score.
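A minimal sketch of the review and interpretation steps above, assuming a plain list-of-records layout rather than the pre-programmed spreadsheet the slide suggests; the criteria names, rater scores, and the "at least 80% score 3 or higher" target are illustrative:

```python
# Illustrative sketch: average two raters' scores, aggregate per criterion,
# and apply a "how good is good enough" target (80% of students at 3 or above).
from statistics import mean

# Hypothetical data: each record holds both raters' scores (a tuple) for one student.
records = [
    {"Organization": (3, 4), "Clarity": (2, 3), "Breadth": (4, 4)},
    {"Organization": (2, 2), "Clarity": (3, 3), "Breadth": (3, 4)},
    {"Organization": (4, 4), "Clarity": (4, 3), "Breadth": (2, 2)},
]

for criterion in records[0]:
    # Average across the two raters for each student, then summarize across students.
    per_student = [mean(record[criterion]) for record in records]
    share_at_target = sum(score >= 3 for score in per_student) / len(per_student)
    print(f"{criterion}: average {mean(per_student):.2f}, "
          f"{share_at_target:.0%} of students scored 3 or above")
```

Swapping in the program's real criteria, scores, and agreed-upon target is all that would change; the aggregation logic stays the same.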

How to Start a Rubric

• How might you start to formulate a rubric?

• Search the literature and the Internet for existing rubrics – we'll see some of these later on.

• Use examples of students' work to identify defining criteria and levels of competence.

• Work with a committee or consult with colleagues.

• See if your professional association has either a rubric bank or performance standards/criteria.

How to Enhance a Rubric

• How do you fine-tune your use of rubrics?

• Initial practice (especially if there are multiple raters)

• Establishment of inter-rater reliability

• Detection of scoring errors and biases

• Development of exemplars and anchors

• Assignment of weights for criteria

From Pickering & Yerian, 2005

Presenter
Presentation Notes
Now that you know the steps to designing a rubric and what has to be considered in that design, you should be ready to start your own rubric. How can you start? You can google "rubric" and the SLO (or components of the SLO) the program is assessing. See what your faculty are already doing – invite them to provide some examples of how they grade student work and then use those examples as a basis for defining criteria and performance levels. The SLO Lead Faculty should not be doing this alone, but rather should be working with colleagues – see if anyone is interested. While there will be time invested, the instructors could end up with a very effective grading tool that will make grading in their own courses more efficient. Programs can also check with their professional associations to see what they suggest; associations in various fields (such as psychology, engineering, and business) have webpages devoted to assessment. A program may also want to enhance a rubric, which often relates to validity and reliability. It is also helpful to keep a small group of samples – the next time your program assesses a particular SLO, a new group of faculty may be involved, and having samples would provide guidance. A program may also determine that some criteria are more important than others and will want to explore methods of weighting criteria. Creating Rubrics & Prompts, J. Worth Pickering, Old Dominion University, and Jean M. Yerian, Virginia Commonwealth University.

Converting to Grade for Course

Sample grades and categories (from Mertler, 2001):

Rubric Score | Grade | Category
8 | A+ | Excellent
7 | A  | Excellent
6 | B+ | Good
5 | B  | Good
4 | C+ | Fair
3 | C  | Fair
2 | U  | Unsatisfactory
1 | U  | Unsatisfactory
0 | U  | Unsatisfactory

Presenter
Presentation Notes
We have stressed the importance of making the most of these assessments. Instructors may decide to use them for their course grading. The number of levels does not usually coincide with the same number of grades. For instance, if you have a scale with 4 levels, a score of 2 does not mean 50% and therefore failure, but rather the need for improvement and a passing fair performance. http://pareonline.net/getvn.asp?v=7&n=25  Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
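A minimal sketch of this conversion (the score-to-grade cut-offs follow Mertler's sample table above; the lookup function itself is just an illustration):

```python
# Illustrative sketch: map a summed rubric score (0-8) to the sample grade
# and category from Mertler's table above.
GRADE_TABLE = {
    8: ("A+", "Excellent"), 7: ("A", "Excellent"),
    6: ("B+", "Good"),      5: ("B", "Good"),
    4: ("C+", "Fair"),      3: ("C", "Fair"),
    2: ("U", "Unsatisfactory"), 1: ("U", "Unsatisfactory"), 0: ("U", "Unsatisfactory"),
}

def to_grade(rubric_score: int) -> str:
    grade, category = GRADE_TABLE[rubric_score]
    return f"{grade} ({category})"

print(to_grade(5))  # B (Good)
```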

Suggestions for Using Rubrics in Courses

1. Hand out the grading rubric with the assignment so students will know your expectations and how they'll be graded. This should help students master your learning outcomes by guiding their work in appropriate directions.

2. Use a rubric for grading student work and return the rubric with the grading on it. Faculty save time writing extensive comments; they just circle or highlight relevant segments of the rubric. Some faculty include room for additional comments on the rubric page, either within each section or at the end.

3. Develop a rubric with your students for an assignment or group project. Students can then monitor themselves and their peers using agreed-upon criteria that they helped develop. Many faculty find that students will create higher standards for themselves than faculty would impose on them.

Presenter
Presentation Notes
Here are some general suggestions for getting the most out of your rubrics. These relate not to program assessment necessarily, but instead to individual faculty using the rubrics in their courses. Give students the rubric ahead of time and they will know how their work will be evaluated. Return the student work with the rubric for detailed feedback on the various criteria; a student may find it helpful to see where his/her grade broke down and, conversely, where he/she did well. Include students in determining the criteria and the levels of performance; an instructor will get insight into what students consider the essential components of an assignment and acceptable performance levels. http://www.sacscoc.org/institute/2008/Monday,%20July%2028,%202008/Allen-Developing%20&%20Using%20Rubrics.pdf Developing and Using Rubrics for Assessing, Grading, and Improving Student Learning, Mary J. Allen, SACS-COC Summer Institute, July 28, 2008.

Suggestions for Using Rubrics in Courses

4. Have students apply your rubric to some sample products before they create their own. Faculty report that students are quite accurate when doing this, and this process should help them evaluate their own products as they are being developed. The ability to evaluate, edit, and improve draft documents is an important skill.

5. Have students exchange paper drafts and give peer feedback using the rubric, then give students a few days before the final drafts are turned in to you. You might also require that they turn in the draft and scored rubric with their final paper.

6. Have students self-assess their products using the grading rubric and hand in the self-assessment with the product; then faculty and students can compare self- and faculty-generated evaluations.

Presenter
Presentation Notes
Students can use the rubric for some self-assessment, and they could also use it for peer evaluation. For suggestion 6, students could include the rubric they completed for their own assignment along with the actual work; the student could then compare how they evaluated their own work with how the instructor evaluated it. http://www.sacscoc.org/institute/2008/Monday,%20July%2028,%202008/Allen-Developing%20&%20Using%20Rubrics.pdf Developing and Using Rubrics for Assessing, Grading, and Improving Student Learning, Mary J. Allen, SACS-COC Summer Institute, July 28, 2008.

Student learning outcomes for today's workshop

After attending today's workshop, you will be able to:

1. Describe what a rubric is.

2. Discuss various types of rubrics.

3. Identify components of and steps to developing a rubric.

4. Construct a rubric.

Presenter
Presentation Notes
Hopefully after this workshop you will be able to achieve the outcomes described here.

http://www.nvcc.edu/about-nova/directories--offices/administrative-offices/assessment/loop/index.html

Presenter
Presentation Notes
For more resources on all steps of the assessment loop, please go to our resource page: http://www.nvcc.edu/about-nova/directories--offices/administrative-offices/assessment/loop/index.html You can also find workshop presentations: http://www.nvcc.edu/about-nova/directories--offices/administrative-offices/assessment/resources/index.html

Future Workshops

• Classroom Assessment Techniques (CATs)

• "Closing the Loop" – Using Results to Enhance Student Learning

• WEAVEonline – online management tool

Questions?

Contact: Dr. Jennifer Roberts Coordinator of Academic Assessment Office of Institutional Research, Planning, and Assessment 703-323-3086 [email protected]
