Using Assessment to Improve Peer Review Feedback

Audience Level: 
All
Session Time Slot(s): 
Institutional Level: 
Higher Ed
Streamed: 
Streamed
Special Session: 
Research
Abstract: 

As teachers, we rely on peer review to help students improve their writing. But how do we ensure peer comments are actually helpful? This study provides detailed guidelines for how to make shorter comments (21-40 words) more productive.

Extended Abstract: 

Students need high-quality feedback to improve their writing (e.g., Reid, 2014; Vardi, 2008). Peer feedback is a scalable way to provide it (Nicol et al., 2014).

To coach multiple peer feedback activities in accelerated online writing courses, instructors need a clear framework. First, instructors and students benefit from guidelines for assessing whether students are contributing too little to help their peers or themselves. They also need a more robust set of models for recognizing when reviewers’ comments discuss criteria that lead to improved writing. Online technologies promise to capture all the texts generated during students’ peer review; the peril lies in making sense of that data.

In this study, we relied on research data analytics from Eli Review (elireview.com), a peer learning and revision app developed by writing professors at Michigan State University. Seven sections of ENGL101 and 24 sections of ENGL102 used Eli’s online platform to complete three projects each during 2019-2021 at a large, research-intensive state university in the Southwest. During the 7.5-week, fully online terms, students completed four formative feedback and revision activities per course.

We used quantitative and qualitative methods to analyze the 13,717 comments exchanged during peer learning. Our quantitative analysis describes peer norms in comments based on word count. Word count is a blunt measure that indicates how likely a comment is to describe a problem, evaluate it, and offer a suggestion (Hart-Davidson & Meeks, 2020). Prior research establishes that comments of 20 words or fewer tend to be praise or corrections, while comments of 41 words or more tend to have enough information to persuade writers to make global revisions. Comments of 21-40 words fall in the “messy middle.” This program-wide corpus of peer feedback has the following distribution by comment length (a brief classification sketch follows the list):

  • 27% of comments were 20 words or fewer (likely praise or correction)
  • 39% of comments had 21-40 words (the “messy middle”)
  • 24% of comments had 41 or more words (likely long enough to persuade a writer to revise)
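
To make the word-count heuristic concrete, the following is a minimal Python sketch of how comments might be binned by length. The thresholds come from the study; the function names and the whitespace tokenization are our illustrative assumptions, not Eli Review’s actual analytics.

    from collections import Counter

    def classify_comment(comment: str) -> str:
        """Return the length band for a single peer review comment."""
        # Whitespace tokenization is an assumption; Eli Review may
        # count words differently.
        n_words = len(comment.split())
        if n_words <= 20:
            return "short (<= 20 words: likely praise or correction)"
        if n_words <= 40:
            return "messy middle (21-40 words)"
        return "long (41+ words: likely enough detail to prompt revision)"

    def length_distribution(comments):
        """Percentage of comments falling in each length band."""
        counts = Counter(classify_comment(c) for c in comments)
        return {band: 100 * n / len(comments) for band, n in counts.items()}

    # Toy usage: one stand-in comment per band.
    sample = ["Great thesis!",
              " ".join(["word"] * 30),
              " ".join(["word"] * 50)]
    print(length_distribution(sample))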

To better understand this “messy middle,” we conducted two additional analyses: a descriptive analysis of comment-length norms across assignments and a qualitative analysis of more than 4,000 student peer comments of 21-40 words. Our aim was fourfold:

  1. gain more confidence in the quality of comments of this length;
  2. curate comment models that reflect student language;
  3. reflect on how review task design influences comment length, and modify assignments accordingly; and
  4. establish word-count indicators for each assignment, based on program norms, that can guide instructors’ interventions in future terms (see the sketch after this list).
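
As one hypothetical way to operationalize the fourth aim, per-assignment word-count norms could be derived in a few lines of pandas. The file name and column names (“assignment”, “comment”) below are assumptions for illustration, not Eli Review’s actual export schema.

    import pandas as pd

    # One row per peer comment; column names are assumed for illustration.
    comments = pd.read_csv("peer_comments.csv")
    comments["word_count"] = comments["comment"].str.split().str.len()

    # Quartiles per assignment give a program-normed benchmark instructors
    # can use to flag sections whose comments run unusually short.
    norms = (comments.groupby("assignment")["word_count"]
                     .describe(percentiles=[0.25, 0.5, 0.75]))
    print(norms[["25%", "50%", "75%"]])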

Shorter peer review comments (21-40 words) may be useful for the reviewee; this study provides detailed guidelines for making them more productive.

Interactivity 

Besides providing a classic IMRAD-style presentation (Introduction, Methods, Results, and Discussion), we will prompt attendees to interact with our information and data; specifically, attendees will:

  1. Provide feedback on a document;
  2. Code their feedback using some of our coding schemas; and
  3. Outline peer review assignment prompts they might use in future classes.

Takeaways 

By the end of the session, attendees will have access to:

  • A reference list about peer review;
  • Peer review assignment prompt guidelines; and
  • Outlines of peer review prompts for future classes.

References

Hart-Davidson, B., & Meeks, M. G. (2020, forthcoming). Feedback analytics for peer learning: Indicators of writing improvement in digital environments. In N. Elliot & D. Kelly-Riley (Eds.), Improving outcomes: Disciplinary writing, local assessment, and the aim of fairness. Modern Language Association.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122. https://doi.org/10.1080/02602938.2013.795518

Reid, E. S. (2014). Peer review for peer review’s sake: Resituating peer review pedagogy. In S. J. Corbett, M. LaFrance, & T. E. Decker (Eds.), Peer pressure, peer power: Theory and practice in peer review and response for the writing classroom (pp. 217-231). Fountainhead Press.

Vardi, I. (2008). The relationship between feedback and change in tertiary student writing in the disciplines. International Journal of Teaching and Learning in Higher Education, 20(3), 350-361.

 
Conference Session: 
Concurrent Session 5
Conference Track: 
Research: Designs, Methods, and Findings
Session Type: 
Education Session
Intended Audience: 
Administrators
Faculty
Instructional Support
Training Professionals
Technologists
Researchers