
Page 1: Notes: What Do You Do When You Do What You Do with Student Ratings?

CoursEval User Conference, Chicago, IL: September 24-25, 2015

“What Do You Do When You Do What You Do with Student Ratings?”

Thomas J. Tobin, PhD, MSLS, PMP
Northeastern Illinois University

[email protected]

What Good Are Student Reviews?

As the consumers of our courses, students are a logical source for feedback on course quality. After all, they are the ones who sit through our classes, week after week (we hope). They are eyewitnesses to our teaching efforts. Anyone who has asked a student knows that they can quickly tell you what was good, or bad, about each course they have taken. While there are many ways to gather information about an instructor’s teaching practice, student reviews are a time-honored mechanism for doing so. In reality, however, no single source of data about teaching practices can provide a complete picture of what is happening in our classrooms.

Did you know?

A review of 50 years of credible, scholarly research on “student evaluation of teacher performance” in higher education revealed the following findings:

Student ratings from multiple classes provide more reliable results than those from a single class, especially when ratings are based on fewer than 10 students.

Ratings of the same instructor across semesters (i.e., same class, different students) tend to be similar.

The instructor, not the course, is the primary determinant of students’ ratings.

Students’ ratings of their instructor’s communication, motivational, and rapport-building skills most closely relate to their overall global rating of that instructor.

Student ratings consistently and significantly relate to their level of achievement of course learning outcomes, their instructor’s self-ratings, administrator and peer ratings, and even ratings by trained observers.

A number of factors are NOT related to student ratings, including the student’s age, gender, year of study, GPA, and personality. Also, time of day and time during the term when ratings are collected are not related to student ratings.

Student ratings of face-to-face and online courses are more similar than they are different.

(Benton & Cashin, 2011)
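The reliability finding above (that ratings from fewer than 10 students are less dependable) follows from basic statistics: the uncertainty in a class’s mean rating shrinks with the square root of the number of raters. A quick illustrative sketch, with made-up rating data rather than anything from the research cited:

```python
import statistics

def mean_rating_with_se(ratings):
    """Return the mean of a set of 1-5 Likert ratings and its standard error.

    The standard error (sample standard deviation / sqrt(n)) shrinks as
    more students rate the course, so small classes give noisier means.
    """
    n = len(ratings)
    mean = statistics.fmean(ratings)
    se = statistics.stdev(ratings) / n ** 0.5
    return mean, se

# The same spread of opinions, at two class sizes (hypothetical data).
small_class = [5, 3, 4, 2, 4]          # 5 raters
large_class = [5, 3, 4, 2, 4] * 6      # 30 raters

m1, se1 = mean_rating_with_se(small_class)
m2, se2 = mean_rating_with_se(large_class)
print(f"5 raters:  mean {m1:.2f} +/- {se1:.2f}")
print(f"30 raters: mean {m2:.2f} +/- {se2:.2f}")
```

Both classes report the same average, but the larger class pins it down far more tightly, which is why pooling ratings across multiple classes gives more reliable results.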


What Are Students Qualified to Review?

Well, nothing. Students aren’t yet good evaluators, but they make great raters.

Students who take our courses are still learning our disciplines. While asking them to provide feedback on our content expertise may seem premature, there are many aspects of college teaching that students are well qualified to address.

Whether sitting in classrooms on our college campuses or logging in to our courses online, students spend more time with our faculty members in a teaching environment than anyone else. Who better than our students to ask how things are going?

In reviews of teaching, whether online or face-to-face, students are typically asked questions about their instructors that fall into the following categories:

Course organization and structure (e.g., “Rate the clarity of the syllabus in stating course objectives, course outline, and criteria for grades.”)

Communication skills (e.g., “Rate the effectiveness of the instructor's explanations of why certain processes, techniques, or formulas were used.”)

Teacher-student interactions (e.g., “Rate the students' freedom to ask questions and express opinions.”)

Course difficulty and student workload (e.g., “Rate the instructor's skill in making class materials intellectually stimulating.”)

Assessments and grading (e.g., “Rate the effectiveness of exams in testing understanding and not memorization.”)

Student learning (e.g., “Rate the instructor's skill in emphasizing learning rather than tests and grades.”)

Note that students aren’t usually qualified to evaluate the quality or appropriateness of any of these categories. We’d never ask students if we had too few or too many exams, for example: they’d say “too many,” but not from a position of knowing how to assess their own skills—just from a sense of workload.

How Much is Enough?

As you think about the many things you would like to learn from the students in a given course, it will be easy to get carried away. You could quickly find yourself with an evaluation instrument that would take students an hour to complete.

To get the rich, meaningful feedback you desire, you need to limit the questions that you ask. Focus your questions on the key elements of the course you want to learn more about. Consider using Likert-scale items for prompts like “Rate the instructor’s skill in…” and open-ended items for areas where you seek more detailed responses, such as “What helped you learn in this course?” Remember, a question is not worth asking if you won’t be able to take the time to review the responses carefully.
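To make that advice concrete, a short instrument mixing Likert-scale and open-ended items might be represented like this. The item texts come from the categories above; the data structure and field names are my own illustration, not any particular survey tool’s format:

```python
# A minimal sketch of a short course-feedback instrument: a handful of
# Likert-scale items plus one open-ended prompt, kept short enough that
# every response can actually be read.

LIKERT_SCALE = ["Poor", "Fair", "Good", "Very good", "Excellent"]

instrument = [
    {"type": "likert",
     "text": "Rate the clarity of the syllabus in stating course "
             "objectives, course outline, and criteria for grades."},
    {"type": "likert",
     "text": "Rate the instructor's skill in making class materials "
             "intellectually stimulating."},
    {"type": "open",
     "text": "What helped you learn in this course?"},
]

def summarize(instrument):
    """Count item types as a sanity check on instrument length."""
    likert = sum(1 for item in instrument if item["type"] == "likert")
    open_ended = sum(1 for item in instrument if item["type"] == "open")
    return likert, open_ended

print(summarize(instrument))  # (2, 1)
```

Keeping a ratio like this (a few closed items, one or two open prompts) keeps completion time down and ensures every open-ended answer gets read.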



While end-of-course surveys can provide instructors with helpful information that can lead to improved teaching in future offerings, they fail to give direct benefit to the students who complete them. Students, therefore, have a hard time taking them seriously, since they do not see themselves as the beneficiaries of their efforts.

Faculty members can increase their response rates by demonstrating that they genuinely care about, and listen to, the feedback they receive from their students. This is especially important when these surveys are conducted in online courses, where research has shown completion rates tend to be slightly lower than those for surveys given in a face-to-face classroom (Nulty, 2008). One way to demonstrate that level of commitment effectively is to create a culture of feedback in the course by soliciting student feedback long before the end of the course.

Get Formative: The SCARF Model

It is a best practice to design formative evaluation processes to capture student feedback throughout a course—and then make changes that benefit students right away.

Formative student feedback differs markedly from the typical end-of-semester student rating scheme. Design your instruments to ask for student opinions throughout the course period about course pace, instructor presence and communication, and issues that are confusing or unclear to the learners.

Formative feedback is aimed at the improvement of teaching. Because it is not tied to employment decisions like hiring, retention, tenure, and promotion, go ahead and try out different approaches: create broad goal statements; experiment with various instrument designs and methods. To create formative processes, use the SCARF design model (Solicit, Collect data, Adjust, and Return Feedback) for student-feedback systems.
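The SCARF cycle described above can be sketched as a simple loop. The function names here are illustrative placeholders for whatever an instructor actually does at each step, not part of any real tool:

```python
def scarf_cycle(solicit, collect, adjust, return_feedback, checkpoints):
    """Run one Solicit / Collect data / Adjust / Return Feedback pass
    per in-course checkpoint (e.g., week numbers).

    Each argument is a callable supplied by the instructor; the model
    itself only fixes the order of the four steps.
    """
    log = []
    for week in checkpoints:
        prompts = solicit(week)          # ask students for input
        responses = collect(prompts)     # gather the data
        changes = adjust(responses)      # make in-course changes
        return_feedback(week, changes)   # tell students what changed
        log.append((week, changes))
    return log
```

The point of the sketch is the ordering: feedback is returned to the same students who gave it, within the same term, so they see direct benefit from responding.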

Open-Ended and Closed-Ended Feedback

Open-ended formative feedback is seldom shared with the institution, and almost never goes into the summative decision-making process. In online courses, it is easier to employ than in a face-to-face environment: most learning management systems have survey tools, so responses are typed and students cannot be identified by their handwriting.

The ease of collecting open-ended feedback in online courses may lead some end-of-semester student-rating instrument designers to include more open-ended questions in order to learn more about an institution’s overall online program. I strongly suggest using multiple-choice or other closed-ended questions for such purposes. Rosenbloom (2014) studied the psychological impact of open-ended feedback in end-of-semester ratings, and found, not surprisingly, that an unconscious bias exists to weight open-ended responses more heavily than other types of feedback.



Thought Exercises

1. At your own institution, what do you want to learn about your own or your colleagues’ teaching from student ratings of teaching effectiveness?

2. Does your campus use a single form for end-of-course student ratings (or perhaps a common mandated set of questions added to departmental or individual feedback)?

3. What questions might you ask students during courses to help faculty members to make responsive changes in-semester?

4. Students are best qualified to rate instructors’ facilitation of learning, communication of ideas, and respect and concern for students. How might survey instruments on your campus focus on these ratings areas?

5. What people or areas on your campus would need to be involved in order to test and then approve any proposed instruments or changes?

6. What conversations will you bring back to your campus after this conference?
