Bridging the Gap in Internal Quality Matters Course Reviews: Interpreting Trends to Inform Practices

Audience Level: 
All
Institutional Level: 
Higher Ed
Streamed: 
Streamed
Special Session: 
Research
Abstract: 

Many institutions employ the Quality Matters peer review process for online course evaluation. Although the review process itself is standardized, its implementation and results vary with institutional context. The current study explores the relationship between scoring trends and the culture and practices at one public land-grant university.

Extended Abstract: 

At many higher education institutions, professional development for online instructors includes a routine course evaluation examining how effective teaching methods are supported by the course design. Online course evaluation has in fact become a benchmark for quality assurance in online learning, and Quality Matters™ (QM) is one of the few nationally recognized providers of research-based institutional support in the form of faculty-centered peer reviews. QM’s empirically developed rubrics encourage consistency and prompt discussion of the underlying principles of course design and instructional practices for online learning (Baldwin, Chin, & Hsu, 2017). Although QM provides a structured process for course reviews, each institution is ultimately responsible for implementing that process and for the results of its reviews. Given this, the literature’s lack of attention to the role that institutional context plays in scoring trends for QM course evaluations is notable.

The proposed session addresses this gap by exploring the scoring trends for internal QM evaluations over the past five years at one public land-grant university in the southwestern United States and connecting these trends to local course evaluation and instructor support procedures. Data from internal QM reviews conducted from 2015 to 2019 were analyzed in this pilot study, and trends emerged in which standards were and were not met. These findings were triangulated with institutional records to chart the development of the review process and assess how internal factors shaped the outcomes of the course evaluations. Preliminary findings will be shared during the presentation, along with suggestions for implementing a review process and instructor support procedures that facilitate successful outcomes in QM-based internal reviews.

Research to date on the scoring of QM evaluations encompasses large-scale studies of multiple institutions (McMahon, 2016; Zimmerman, 2010, 2013), small-scale course comparisons within a single university department (Little, 2009), and comparisons between QM evaluations and student evaluations in online graduate programs (Kwon, 2017). However, most of these studies focus only on the standards that were met or not met, learner responses to particular course design features, or revisions based on course evaluation feedback. The influence of the institutional contexts in which the online courses were developed and evaluated is largely absent from these studies. By highlighting this relationship, the current study shows how even standardized practices can vary in response to local ecologies.

The session will share findings and engage attendees by posing questions, answered via Slido, about online course evaluation practices and challenges at their home institutions. Small-group reflection will be structured around a “Now what?” question, challenging attendees to consider how the findings might apply to their own contexts and goals. A ten-minute share-out and Q&A session will follow, offering a forum for opportunities and challenges related to online course evaluation and instructor support. Each audience member will leave the presentation with insight into the role of institutional context in the implementation of a standardized course review procedure. They will also deepen their perspectives on the issue by reflecting on their own institutional practices and comparing them with the presenters’ and other attendees’ experiences.

Conference Session: 
Concurrent Session 6
Conference Track: 
Research: Designs, Methods, and Findings
Session Type: 
Present and Reflect Session
Intended Audience: 
Administrators
Faculty
Instructional Support
Training Professionals
Technologists
Researchers