Using “Little Data” to Improve the Quality of Online Courses

Audience Level: 
Intermediate
Session Time Slot(s): 
Abstract: 

I conceptualized and pilot-tested methods for using a small database of course metadata to improve the quality of online courses: identifying courses most in need of editing, analyzing faculty contact hours, and refining and standardizing build and support processes. I used free tools and have had early success with designers, administrators, and faculty.

Extended Abstract: 

While there is currently much discussion in education about “big data,” in which relationships between learning management and student information systems are analyzed, less attention has been paid to “little data”—metadata about courses, which can be easily collected and analyzed by a small department or by individuals. Over the last two years, I used a small database of course metadata to pilot test three approaches to improving the quality of our online courses: identifying which courses are most in need of editing, analyzing faculty contact hours, and standardizing instructional design and support tasks across courses.

Our institution’s process is unconventional—while faculty create course content and serve as the subject matter experts, their courses are actually constructed by instructional designers, along with a team of videographers, editors, and proctored-exam coordinators. Furthermore, during course runtime, both students and faculty are supported by dedicated staff. I realize that few institutions have such resources, but the data analyses described below can be performed by anyone.

The first example of using metadata to improve course quality is a report that ranks how much courses have been revised since they were last edited. At our institution, all written course content is reviewed by an editor, working together with an instructional designer and the faculty member, when a course is first created or extensively revised. However, subsequent iterations that receive only minor updates or revisions (defined as less than 20% of the course) do not trigger an editorial review. These changes accumulate over time, though, and eventually we have courses running with substantial unedited content: misspellings, typos, and other problems. Students expect online course material to be textbook quality, and failing to meet that standard can result in complaints and poor course evaluations.

I used the database to track each course iteration's revision percentage. Using information we already had from planning instructional designer workloads, I wrote a query that summed the percentage of revisions for each course since it was last edited. Examining more than two years of course content—1,600 iterations of more than 330 courses—produced a ranked list of the courses most in need of editorial review and also gave us insight into how thoroughly our courses are revised over time. We caught dozens of courses that had received many small updates but no editorial review for several years. We have also used this process to ensure that other quality-control steps, such as usability testing, video closed-captioning, and WCAG accessibility checks, have been performed.
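
To give a sense of the kind of query involved, here is a minimal sketch in Python with SQLite. The iterations table and its course_id, term_start, revision_pct, and was_edited columns are hypothetical names chosen for illustration, not our actual schema; the logic, summing revision percentages for every iteration after a course's most recent edited iteration, is the same.

    import sqlite3

    # Hypothetical schema: iterations(course_id, term_start, revision_pct, was_edited),
    # where was_edited = 1 marks an iteration that received a full editorial review.
    conn = sqlite3.connect("course_metadata.db")

    query = """
    SELECT i.course_id,
           SUM(i.revision_pct) AS unedited_revision_pct
    FROM iterations AS i
    WHERE i.term_start > COALESCE(
            (SELECT MAX(e.term_start)
             FROM iterations AS e
             WHERE e.course_id = i.course_id
               AND e.was_edited = 1),
            '')                      -- never-edited courses: include all iterations
    GROUP BY i.course_id
    ORDER BY unedited_revision_pct DESC;
    """

    # Rank courses by how much unedited revision has accumulated.
    for course_id, pct in conn.execute(query):
        print(f"{course_id}: {pct:.0f}% revised since last editorial review")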

The second example of using metadata to improve course quality is an analysis of faculty contact hours. I used this data set to track how closely courses met requirements by counting the asynchronous elements of each course (e.g., discussion questions, quizzes, written words, images, and minutes of video) as well as scheduled synchronous activities such as review sessions and project presentations. Using constants to convert elements such as word counts and images into time, I then calculated asynchronous and synchronous contact hours according to our university policy.
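
The conversion itself is straightforward arithmetic. The sketch below illustrates the idea in Python; the constants and element types are placeholders chosen for illustration, not the actual values defined in our university's policy.

    # Illustrative conversion constants (placeholders, not actual policy values).
    WORDS_PER_HOUR = 9000        # assumed reading pace for written content
    IMAGES_PER_HOUR = 120        # assumed time to study figures and diagrams
    MINUTES_PER_DISCUSSION = 30  # posting and replying to one discussion question
    MINUTES_PER_QUIZ = 20

    def asynchronous_hours(words, images, video_minutes, discussions, quizzes):
        """Convert counted asynchronous course elements into estimated contact hours."""
        return (words / WORDS_PER_HOUR
                + images / IMAGES_PER_HOUR
                + video_minutes / 60
                + discussions * MINUTES_PER_DISCUSSION / 60
                + quizzes * MINUTES_PER_QUIZ / 60)

    # Synchronous activities (review sessions, project presentations) count at
    # their scheduled length, so total contact hours are simply the sum.
    def total_contact_hours(async_hours, sync_hours):
        return async_hours + sync_hours

    # Example: 45,000 words, 60 images, 90 minutes of video, 12 discussions, 6 quizzes
    print(round(asynchronous_hours(45_000, 60, 90, 12, 6), 1))  # 15.0 hours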

I was able to compare contact hours, and the elements that make them up, across courses and programs. I used data visualization software to create scatterplots showing the ratio of synchronous to asynchronous hours, time spent on course content versus formative and summative evaluation, and hours spent in active versus passive activities. This provided a never-before-seen view of the courses and helped us work with program administrators and faculty to focus our revision efforts. The initial results of this approach have been promising, but this is an ongoing project whose findings still need to be correlated with student satisfaction and success rates. I hope that further analyses will let us move from descriptive information about courses to prescriptive suggestions for improving them.
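
As an example of the visualization step, a scatterplot of synchronous versus asynchronous hours can be produced with a few lines of matplotlib; the course IDs and hour values below are invented for illustration.

    import matplotlib.pyplot as plt

    # Invented per-course data: (course_id, asynchronous hours, synchronous hours)
    courses = [("BIO101", 38.0, 6.0), ("HIS210", 30.5, 12.0), ("CHM305", 42.0, 3.0)]
    labels, async_hrs, sync_hrs = zip(*courses)

    fig, ax = plt.subplots()
    ax.scatter(async_hrs, sync_hrs)
    for label, x, y in zip(labels, async_hrs, sync_hrs):
        ax.annotate(label, (x, y), textcoords="offset points", xytext=(4, 4))
    ax.set_xlabel("Asynchronous contact hours")
    ax.set_ylabel("Synchronous contact hours")
    ax.set_title("Synchronous vs. asynchronous contact hours by course")
    plt.show()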

The third example of using course metadata is the standardization of instructional design and support tasks across courses in our department. Years ago, instructional designers and support staff each maintained their own lists of tasks for preparing courses to run or for supporting them during runtime. Unfortunately, this resulted in inconsistent course design and quality. In addition, tasks based on personal knowledge, such as remembering that Professor X prefers to have her synchronous discussion area labeled “Live Classroom with Professor X” instead of the usual “Live Classroom,” were often not recorded, so with staff turnover that knowledge was lost and had to be relearned.

To address this issue, I added tables of tasks to our database, giving each task a name and description. In addition, special tasks can be assigned by college, course, instructor, program, or term. Tasks can also be hidden by course; for example, courses that do not have exams can exclude exam-related tasks.
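
A minimal sketch of what such tables might look like appears below, again with hypothetical table and column names rather than our actual schema: a master list of tasks, an assignment table that scopes special tasks to a college, program, course, instructor, or term, and an exclusion table that hides tasks for courses where they do not apply.

    import sqlite3

    conn = sqlite3.connect("course_metadata.db")

    # Hypothetical task tables (names chosen for illustration).
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS tasks (
        task_id     INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        description TEXT,
        role        TEXT            -- e.g., 'instructional designer', 'support staff'
    );

    CREATE TABLE IF NOT EXISTS task_assignments (
        task_id     INTEGER REFERENCES tasks(task_id),
        scope_type  TEXT CHECK (scope_type IN
                     ('college', 'program', 'course', 'instructor', 'term')),
        scope_value TEXT NOT NULL   -- e.g., a course number or an instructor name
    );

    CREATE TABLE IF NOT EXISTS task_exclusions (
        task_id   INTEGER REFERENCES tasks(task_id),
        course_id TEXT NOT NULL     -- hide this task for this course (e.g., no exams)
    );
    """)
    conn.commit()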

These additional metadata have made the database more powerful and useful. Institutional memory is now retained and applied consistently, and setting up lists of course tasks for each role is much faster. We also save time and avoid confusion by not listing irrelevant tasks, such as setting up Turnitin submission areas for courses that have no assignments. Finally, the very process of defining and compiling a single list of departmental course production and support tasks helped us develop consensus on best practices, ensured that we codified tasks that must be handed off from instructional designers to support staff, and decreased training time for new employees.

These three examples demonstrate how a small database of course metadata—little data—can be an innovative and valuable tool for improving online courses. While this approach is still in its infancy, it has already provided useful information, and I expect it to yield even more valuable results in the future.

In this presentation, participants will discuss these methodologies and share ideas and suggestions for other ways to use such data to improve courses and procedures at their own institutions. Session outcomes will include new ways to look at, and make effective use of, the course metadata participants already have in their own programs. Takeaways will include handouts (or downloadable files) of the database schema, queries for generating reports and performing their own assessments, and a methodology for assigning tasks to courses.

Five Elements of Effective Practice

Innovation: Evaluating courses using metadata is an innovation that provides new insight into courses. Using the database allows flexibility in what is analyzed or displayed, scalability in the amount of information that can be examined, and automation in generating reports for different audiences.

Replicability: Little data can be used to analyze any sort of course, in any size program or institution. Anyone with rudimentary programming and database skills can set up the database and these tools using free, open-source software. Once the tools are set up, anyone with a web browser can add to the database, enter new data, or run reports.

Impact: So far, the results of this pilot program have improved our monitoring and use of editorial and quality-assurance resources, our recommendations to professors for activities that improve contact hours, and our departmental efficiency. They have helped us streamline our course development and support procedures and make them more consistent, allowing us to handle more online courses with the same staff.

Evidence: We have seen proof of the effectiveness of these procedures in the editorial revision reports over time, as well as in external measures such as improved faculty contact hours and the decreased time it takes to prepare a course for launch.

Scope: The advantage of analyzing metadata is that it is inexpensive and easily available, while still allowing many different ways to examine courses or procedures, depending on the variables collected and how they are analyzed. The basic database schemas work for instructional designers, faculty, and support staff, at institutions of any size.

Connections to OLC’s Five Pillars of Quality Online Education

Learning effectiveness - This approach helps ensure that editorial resources are applied to online course content where they are most needed, that faculty contact hour standards are met, and that our processes for online course development and support are streamlined and consistent, thus leading to the highest possible quality.

Scale - The methodologies described above allow different views of online courses that can be used by and be useful to both small and large institutions, as they are based on easily available, inexpensive “little data.”

Access - The metadata analyses described above help to ensure universal student access to courses. This includes strengthening our systematic development process to ensure consistent navigation, which assists accessibility, as well as double-checking that all videos are closed-captioned and WCAG standards are met.

Faculty satisfaction - The data on contact hours and the relative effectiveness of different techniques are useful to faculty in course design, and improved editorial tracking is also directly helpful to them.

Student satisfaction - Although students do not participate in any of these processes, they benefit from the resulting course improvements.

Position: 
9
Conference Session: 
Concurrent Session 8
Conference Track: 
Research: Designs, Methods, and Findings
Session Type: 
Emerging Ideas Session
Intended Audience: 
Administrators
Design Thinkers
Faculty
Instructional Support
Researchers