LMS Course Design as Learning Analytics Variable

Audience Level: 
Intermediate
Session Time Slot(s): 
Institutional Level: 
Higher Ed
Strands: 
Abstract: 

To better understand instructor impact on student engagement and academic performance, I'll present a plausible approach to operationalizing existing definitions of LMS course design. I'll share statistical findings from applying this approach, discuss related issues and opportunities, and describe next steps, including reverse engineering effective course redesign practices.

Extended Abstract: 

Note: My proposed presentation is based on part of my recently defended dissertation, "Using Analytics to Encourage Student Responsibility for Learning and Identify Effective Course Designs That Help" (Fritz, 2016). I distilled and presented this work as a five-page pre-conference workshop short paper at the 2016 Learning Analytics and Knowledge (LAK) conference in Edinburgh, Scotland, available in the pre-conference workshop proceedings at http://ceur-ws.org/Vol-1590/paper-03.pdf. My short paper (without references) is excerpted below:

Introduction of Problem

Given widespread use of the learning management system (LMS) in higher education, it is not surprising that this form of instructional technology has frequently been the object of learning analytics studies. While methods and results have been mixed in terms of predicting student success, let alone leading to actual, effective and scalable interventions, one potential LMS analytics variable has received comparatively little attention: the role of course design.

Part of the problem is how to operationalize something as theoretical, subjective or varied as instructor pedagogy. Indeed, Macfadyen and Dawson cited variations in "pedagogical intention" as a reason why the LMS could never serve as a "one size fits all" dashboard to predict student success across an institution. Similarly, Barber and Sharkey eliminated theoretical student engagement factors such as self-discipline, motivation, locus of control and self-efficacy because they were "not available" (i.e., quantifiable) in the LMS data set, which was their primary object of analysis. Basically, how does one quantify course design, which seems qualitatively different from usage log data like logins?

Despite these operational challenges, some of the most frequently cited LMS analytics studies referenced above actually provide a surprisingly uniform characterization of course design, one that can be roughly broken down into three broad but distinct categories:

1. User & Content Management (e.g., enrollment, notes, syllabi, handouts, presentations)

2. Interactive tools (e.g., forums, chats, blogs, wikis, announcements)

3. Assessment (e.g., practice quizzes, exams, electronic assignments, grade center use)

If we are willing to accept LMS course design as an aspect of instructor pedagogy, and student LMS activity as a proxy for attention, if not engagement, then it may be possible to use one to inform the other. Specifically, patterns of student LMS behavior around tools or functions could retroactively shed light on implemented course design choices that align with the broad, research-based LMS course design types described above.

For example, if students in one course appear to use the online discussion board more than students in another course, could one reasonably assume that the instructors of the two courses differed at least in how they conceptually valued and effectively used this interactive tool? Perhaps this is evident in how the instructors weight or reward the discussion board's use in the course's grading scheme, model and facilitate its use, or simply enable it as a tool in the LMS course's configuration. Admittedly, the challenge is determining how much variance in student LMS course usage is statistically significant and attributable to, or indicative of, instructor course design. For assessment purposes, though, these three broad LMS course design types (content, interaction and assessment) provide at least a theoretical way to operationalize variability in faculty LMS course design and usage.

While there may be a default institutional LMS course configuration that most instructors blindly accept, it seems odd, when trying to explain why students use one tool or function more in one course than in another, that we should not be able to consider the instructor's pedagogical design choices as an environmental factor that may impact student awareness, activity and engagement. True, this may also reflect an instructor's capability or capacity to effectively express his or her pedagogy in the LMS, but simply ignoring the possible impact of course design on student engagement seems unnecessary and disingenuous if we want to use learning analytics to predict and, hopefully, intervene with struggling students. If students who perform well use the LMS more, do we not want to know what tools, functions and pedagogical practices may facilitate this dynamic?

Solution & Method

Despite the striking similarity in how several LMS-based analytics studies have categorized LMS course design practices (if not pedagogical intent), a plausible, systematic approach to operationalizing these common definitions is still needed.

Conveniently, Blackboard used these same research-based definitions of course design for its Analytics for Learn (A4L) product. Specifically, A4L's "course design summary" statistically compares a Bb course's relative, weighted item count with that of all courses in its department and across the institution, based on the three major item types found in the LMS analytics literature. Essentially, all items in any Bb course, such as documents or files, discussions or chats, and assignments or quizzes, are grouped into 1) content, 2) interactive tools or 3) assessments. Then, A4L's course design summary uses a simple algorithm to categorize all courses into department and institutional statistical quartiles through the following process (a minimal code sketch follows the list):

1. Sum all course items by primary item type (e.g., content, interactive tools, assessments).

2. Multiply each group total by a weighting factor (wf): content (wf = 1), interactive tools (wf = 2) and assessments (wf = 2).

3. Statistically compare each course to all other courses in the department and all other courses across the entire institution.

4. Tag each course with a quartile designation for both the department and institution dimensions.
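
To make the weighting and quartile-tagging steps concrete, here is a minimal sketch in Python of how such a summary could be computed. It is not A4L's actual implementation; the table, column names and percentile-based quartile mapping are illustrative assumptions.

    import numpy as np
    import pandas as pd

    # Hypothetical course-level item counts; column names and data are
    # illustrative assumptions, not A4L's actual schema.
    courses = pd.DataFrame({
        "course_id":   ["ENGL101-01", "ENGL101-02", "BIOL141-01", "BIOL141-02"],
        "department":  ["ENGL", "ENGL", "BIOL", "BIOL"],
        "content":     [42, 25, 18, 60],   # documents, files, syllabi, handouts
        "interactive": [5, 3, 12, 0],      # forums, chats, blogs, announcements
        "assessments": [8, 15, 20, 4],     # quizzes, exams, electronic assignments
    })

    # Step 2: apply the weighting factors (content wf = 1, interactive wf = 2, assessments wf = 2).
    WEIGHTS = {"content": 1, "interactive": 2, "assessments": 2}
    courses["design_score"] = sum(courses[col] * wf for col, wf in WEIGHTS.items())

    # Steps 3-4: tag each course with a quartile (1 = lowest, 4 = highest) relative to
    # its department and to the institution, here via a simple percentile-rank mapping.
    def quartile(scores: pd.Series) -> pd.Series:
        return np.ceil(scores.rank(pct=True) * 4).clip(1, 4).astype(int)

    courses["inst_quartile"] = quartile(courses["design_score"])
    courses["dept_quartile"] = courses.groupby("department")["design_score"].transform(quartile)

    print(courses[["course_id", "design_score", "dept_quartile", "inst_quartile"]])

In practice the comparison would run over all courses in a term, and A4L presumably performs its own statistical comparison rather than this simple percentile-rank mapping.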

Again, the "course design summary" is already provided in A4L and is really just a way of categorizing how a course is constructed, compared to all courses in the department and across the institution, not necessarily whether and how it is actually used by students. To understand and relate student activity to course design, we need to calculate a similar summary of student activity from existing A4L measures.

Student Activity Summary

Blackboard Analytics for Learn (A4L) contains several student activity measures, including the following:

  • Course accesses after initially logging into the LMS;
  • Interactions with any part of the course itself, equivalent to “hits” or “clicks”;
  • Minutes using a particular course (duration tracking ends after 5 minutes of inactivity);
  • Submission of assignments, if the instructor uses assignments;
  • Discussion forum postings, if the instructor uses discussions.

However, to calculate the companion student activity summary that will be correlated with A4L's course design summary, I used only the first three measures (accesses, interactions and minutes), because all courses generate this kind of student activity regardless of design type. Not all instructors use electronic assignments or discussion forums, but short of simply dropping a course, all students generate at least some activity that can be measured as logins, clicks or hits, and duration.

To calculate the student activity summary, we must first convert each raw activity measure to a standardized Z-score, which indicates how many standard deviations, and in which direction, a particular raw score lies from the mean of that measure in a normal distribution of cases. Because the scale of each activity varies greatly during a semester (e.g., accesses or logins could be under one hundred, interactions or hits could be in the hundreds, and duration or minutes could be in the thousands), converting these variables to Z-scores allows us to compare and summarize them across measures more efficiently. It also allows us to identify and remove outliers, defined for this purpose as scores greater than three (3) standard deviations from the mean.
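
As a rough illustration of this standardization step, the following sketch assumes a table of per-student activity totals and, as a further assumption, uses the mean of the three Z-scores as the summary; the column names and data are made up for the example.

    import pandas as pd

    # Hypothetical per-student activity totals for one semester; column names
    # are illustrative, not A4L's actual measure names.
    activity = pd.DataFrame({
        "student_id":   [101, 102, 103, 104, 105],
        "accesses":     [35, 80, 12, 55, 61],            # course accesses after LMS login
        "interactions": [420, 900, 150, 610, 700],       # clicks or hits within the course
        "minutes":      [1300, 2600, 450, 1800, 2100],   # duration (5-minute inactivity timeout)
    })
    measures = ["accesses", "interactions", "minutes"]

    # Convert each raw measure to a Z-score: (value - mean) / standard deviation.
    zscores = (activity[measures] - activity[measures].mean()) / activity[measures].std()

    # Remove outliers: any student more than 3 standard deviations from the mean on any measure.
    keep = zscores.abs().le(3).all(axis=1)

    # Summarize the standardized measures into one score per student
    # (assumed here to be the mean of the three Z-scores).
    activity_summary = zscores[keep].mean(axis=1)
    print(activity_summary)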

Findings 

The participants for my study were all first-time, full-time, degree-seeking, undergraduate freshmen or transfer students starting their enrollment in Fall 2013. According to the UMBC Office of Institutional Research and Decision Support (IRADS), this included 2,696 distinct students (1,650 freshmen and 1,046 transfers) or 24.48% of all 11,012 degree-seeking undergraduates.

Generally, students who performed well academically, both in individual courses and in a given term overall, showed higher, statistically significant (p < .001) use of Bb compared to peers who did not perform as well. Specifically, using logistic regression to control for other factors such as gender, race, age, Pell eligibility, academic preparation and admit type, students with higher Bb use were 1.5 to 2 times more likely to earn a C or better in Fall 2013 and Spring 2014, respectively. Similarly, they were 2.4 to 2.8 times more likely to earn a term GPA of 2.0 or higher in Fall 2013 and Spring 2014, respectively.
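
For readers unfamiliar with how such odds ratios are obtained, the sketch below shows the general form of this kind of analysis; the variable names, simulated data and model specification are illustrative assumptions, not the study's actual dataset or model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical student records; in the study, the outcome was earning a C or
    # better and predictors included LMS use plus demographic controls.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "earned_c_or_better": rng.binomial(1, 0.7, n),
        "bb_use_above_avg":   rng.binomial(1, 0.5, n),   # above-average Bb activity (assumed coding)
        "female":             rng.binomial(1, 0.55, n),
        "pell_eligible":      rng.binomial(1, 0.3, n),
        "transfer":           rng.binomial(1, 0.4, n),
        "age":                rng.integers(17, 40, n),
    })

    # Logistic regression of the outcome on LMS use, controlling for other factors.
    model = smf.logit(
        "earned_c_or_better ~ bb_use_above_avg + female + pell_eligible + transfer + age",
        data=df,
    ).fit(disp=False)

    # Exponentiated coefficients are odds ratios, i.e., the "1.5 to 2 times more likely" figures.
    print(np.exp(model.params))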

(Please see my LAK16 short paper, referenced above and available at http://ceur-ws.org/Vol-1590/paper-03.pdf, for further details about the impact of course design on student LMS activity and academic performance.)

Conference Session: 
Concurrent Session 12
Session Type: 
Education Session - Research Highlights