Pedagogical and Technological Interventions in Peer Review (Shivers-McNair) #cwcon #i2

TRANSCRIPT

Pedagogical and Technological Interventions in Peer Review

Ann Shivers-McNair, University of Washington

Who

Me: a PhD student in language and rhetoric at the University of Washington, and former writing instructor and basic writing coordinator at the University of Southern Mississippi (@a_shiversmcnair)

Eli Review: teacher-researchers at Michigan State University who designed a web app to support peer learning through feedback and revision (@elireview)

Collaboration as intervention

A Janus-faced approach: researchers should “attend to both user and researcher needs simultaneously in the design, development, and analysis” of instruments and data. Furthermore, a “Janus-faced approach also advocates for researchers’ collaboration with programmers and statisticians who can help facilitate the kind of broad, detailed data collection that will ultimately benefit the field and better legitimize our work to those outside of writing studies” (Lauer, Blythe, & McLeod, 2013).

Collaboration process

Identifying shared goals

User research

Feedback

New features and new questions

Shared goals

My goals: 1) Moving past peer review “teacher lore,” 2) understanding commenting practices on levels of discourse and genre, 3) finding “big data” ways to study peer review

Eli Review’s goals: 1) Improving their ability to surface useful formative data based on student participation in writing, review, and revision tasks, 2) learning how to deliver raw performance data to researchers interested in studying their own practice.

Phase 1: Experimenting with raw data

Phase 2: Mockups and feedback

Outcome 1: New research questions

Quantifying “Helpful”

What is the relationship between comment word counts and comment helpfulness ratings? Are longer comments more or less likely to be given a “5” for helpfulness?

This is a rough mock-up of a statistical test comparing the distribution of helpfulness ratings in “short” comments (defined here as fewer than 35 words) and “long” comments (35 words or more). Because the sample is small and the ratings are ordinal, a nonparametric Wilcoxon rank-sum test was used.
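For readers who want to try a comparison like this on their own data, here is a minimal sketch in Python. The word counts and ratings are simulated placeholders (real input would come from an Eli Review raw-data export), and SciPy’s mannwhitneyu is used because the Mann-Whitney U test is the Wilcoxon rank-sum test for two independent samples.

```python
# Minimal sketch of the statistical mock-up above: compare the
# distribution of helpfulness ratings for "short" (< 35 words) vs.
# "long" (>= 35 words) comments. The data below are simulated
# placeholders so the script runs end to end; real input would be
# (word count, rating) pairs from an Eli Review export.
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum for two independent samples

rng = np.random.default_rng(0)

word_counts = rng.integers(5, 80, size=40)   # hypothetical comment lengths
ratings = rng.integers(1, 6, size=40)        # 1-5 helpfulness scale

CUTOFF = 35  # word-count threshold from the mock-up

short = ratings[word_counts < CUTOFF]
long_ = ratings[word_counts >= CUTOFF]

# Nonparametric test: suited to small samples of ordinal ratings.
stat, p = mannwhitneyu(short, long_, alternative="two-sided")
print(f"short n={short.size}, long n={long_.size}, U={stat}, p={p:.3f}")
```

With real data, the word counts would come from the comment text itself, e.g. len(comment_text.split()).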

Outcome 1: New research questions

Qualitative Understandings of “Helpful”:

What are some common discourse features of contextual comments that are highly rated (5) for helpfulness? What are some common features of end comments that are highly rated? Are those features specific to reviews, or are there more generic features that characterize highly-rated comments? To what extent do highly rated comments take up discursive features of teacher feedback?
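One hypothetical computational starting point for these questions: pull out the comments rated 5 for helpfulness and tally their most frequent terms as candidate discourse features for closer qualitative coding. The comments below are invented placeholders; a real study would read from an Eli Review export and pair this kind of tally with hand-coding.

```python
# Hypothetical first pass at the qualitative questions above: surface
# the most frequent terms in comments rated 5 for helpfulness as
# candidate discourse features to examine by hand. Placeholder data.
from collections import Counter
import re

comments = [
    ("Consider narrowing your thesis to a single claim.", 5),
    ("I like this paragraph.", 3),
    ("Maybe add an example here to support the point?", 5),
]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

top_rated = Counter()
for text, rating in comments:
    if rating == 5:
        top_rated.update(tokens(text))

print(top_rated.most_common(10))
```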

Outcome 2: New app features

(Three slides; the visuals showing the new features are not captured in the transcript.)

Implications

Reciprocity: Eli Review got my feedback and use case as they worked to build features for a larger audience, and I played an active role in designing a feature to help me and other teacher-researchers.

Reflection: We both strongly believe in advocating for reflective teaching and design practice, and our collaboration shows the possibilities for integrating those practices rather than viewing them as separate.