We extracted five variables related to students’ engagement in a non-proctored online assessment platform to classify students. Clusters of students differed in their performance on examinations, which provides evidence for an association between learners’ behavior and performance. The results may help instructors provide personalized feedback to learners and reflect on the design of their quizzes.
Background
Summative evaluation in education considers only the final learning outcomes, without considering learners’ learning processes, while formative assessments help promote further learning in students (i.e., formative assessments are used to advance learning rather than to test student performance). For Blended Learning (BL) courses, formative assessments can be effective in improving the quality of the course, since these assessments provide instructors with timely and ongoing information about student learning.
It is important that instructors understand how students engage and perform in formative assessments. This information can not only let instructors know whether students are doing the assigned task, but it can also help them understand whether students are using the materials to optimize their learning. Using this information, instructors could provide students with feedback to help them study more productively and optimize their learning. However, when students work in electronic environments, informal monitoring by an instructor is usually not possible. Instructors are often left to intuition regarding students’ engagement (including study behaviors and patterns of self-regulation) and are therefore unaware of how students are engaged in studying and using the course materials. When teaching and learning happened in conventional classrooms, before technology-enhanced media such as the learning management system (LMS), learners’ perceptions and attitudes were collected via face-to-face observations or through questionnaires. We propose that these observations can be amplified by learning analytics data collected through online learning platforms such as the LMS. We believe a thoughtful LMS design and subsequent analysis of the available trace data is important for the following three reasons.
- Students are primarily responsible for their learning in environments such as those created within an LMS. For example, although most LMSs have a quizzing tool, when faculty develop and place quizzes in the LMS, students take those quizzes at their own discretion. Learners who lack metacognitive and self-regulatory skills may not learn much from formative assessments implemented in open-ended learning environments without scaffolds that guide them in self-regulating their learning process (Azevedo, 2005). Present-day LMSs can support learner-centered activities, personalized instruction, and immediate feedback to learners (Zhang, Zhao, Zhou, & Nunamaker, 2004). Despite this potential, many instructors currently use LMSs simply as a delivery mechanism for course materials. We believe that teachers should rethink how they use their LMSs, considering how they design online learning to shape student learning behaviors and to collect meaningful behavioral data related to student engagement. These data can be used to identify patterns related to successful or less-successful behavior and to provide formative feedback to learners. This will also aid in implementing effective teaching and learning interventions aimed at pedagogical improvement (Clow, 2012; Wise, 2014).
- Researchers like Winne (2005) have pointed out the inadequacy of using self-reports in studying Self-Regulated Learning (SRL) and the need to track data from empirical activities to model SRL. This is because learners, who are not very reliable observers of their own behavior, overestimate the learning strategies they implement as they engage with tasks (Jamieson-Noel & Winne, 2002; Winne, Jamieson-Noel, & Muis, 2002).
- Existing learning analytics (LA) studies have mostly relied upon data-driven techniques to extract useful patterns and information from large-scale educational datasets (Siemens & Baker, 2012). Many researchers argue that the lack of theoretical grounding in data-driven approaches may make them inadequate for providing insights into the development of educational theory and practice (Choi et al., 2016; Rogers, Gašević, & Dawson, 2016; Wise, 2014; Wise & Shaffer, 2015). They advocate that, to advance research and practice in LA, there is a strong need to interlink analytics with learning theory, research, and practice. One criticism of simple behavioral metrics, such as the number of clicks, time spent online, or number of files viewed, is that these features lack the power to contribute to an understanding of student learning in higher education (Lodge & Lewis, 2012). Existing LA tools might not be able to differentiate students who engage with study material and content in an in-depth and critical way from students who engage only superficially. Stated simply, evidence does not support the conclusion that merely engaging more with an LMS improves learning outcomes. Furthermore, if feedback based solely on such metrics prompts students to use the learning platform more, the intervention may even slow down students’ learning.
Purpose
The purpose of this study is to explore the quiz-logs to gather data-derived evidence related to learning behavior and to examine the relationship between learning practices and performance on subsequent exams.
Methods
Question 1: What features (data-derived evidence of learning behaviors) can be gathered from LMS quiz-logs that can provide insights into student quiz use? How can students be classified based on their learning behavior with the quizzing platform?
To answer the first question, we began by exploring the LMS quiz-logs and created five features related to students’ critical engagement with the quizzing system. These features, explained in detail below, were designed based on learning theory and results from previously published studies.
Spacing
Previous laboratory and field experiments have demonstrated that dividing study time into many sessions is often superior to massing study time into few sessions (McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013). This feature captures the number of days across which the student distributed their practice.
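A minimal sketch of how such a spacing feature could be computed from timestamped quiz-log records is given below; the column names ('student_id', 'timestamp') are illustrative assumptions rather than the actual export schema.

```python
import pandas as pd

def spacing_feature(quiz_log: pd.DataFrame) -> pd.Series:
    """Number of distinct calendar days on which each student practiced.

    Assumes one row per quiz event with columns 'student_id' and
    'timestamp' (illustrative names, not the actual export schema).
    """
    log = quiz_log.copy()
    log["day"] = pd.to_datetime(log["timestamp"]).dt.date
    # Count the unique practice days per student
    return log.groupby("student_id")["day"].nunique().rename("spacing")
```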
Closeness to due date
There is evidence from both class-based environments and Computer-Based Learning Environments (CBLEs) that procrastination is a behavior that has negative effects on academic achievement (Cerezo, Esteban, Sánchez-Santillán, & Núñez, 2017). The ‘closeness to due date’ feature captures how early the student attempted the quiz relative to the deadline.
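One way this feature could be derived is sketched below, assuming each quiz has a known due date; the dataframe layout and names are again illustrative assumptions.

```python
import pandas as pd

def closeness_to_due_date(quiz_log: pd.DataFrame, due_dates: pd.Series) -> pd.Series:
    """Average number of hours before the deadline at which each student
    first attempted a quiz (larger values = earlier start).

    Assumes quiz_log has 'student_id', 'quiz_id', and 'timestamp' columns
    and due_dates is indexed by quiz_id; all names are illustrative.
    """
    log = quiz_log.copy()
    log["timestamp"] = pd.to_datetime(log["timestamp"])
    first_attempt = (
        log.groupby(["student_id", "quiz_id"])["timestamp"].min().reset_index()
    )
    first_attempt["due"] = first_attempt["quiz_id"].map(pd.to_datetime(due_dates))
    # Hours between the deadline and the first attempt
    hours_early = (first_attempt["due"] - first_attempt["timestamp"]).dt.total_seconds() / 3600
    first_attempt["hours_before_due"] = hours_early
    # Average earliness across a student's quizzes
    return first_attempt.groupby("student_id")["hours_before_due"].mean()
```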
Time-On-Task and Off-Task Behavior
According to Carroll’s Time-On-Task hypothesis (Carroll, 1989), a student who spends more time engaging with the learning materials has more opportunities to learn. This implies that a student who spends a greater fraction of time off-task (engaged in behaviors where learning from the material is not the primary goal) will spend less time on-task and learn less. Off-task behavior is captured by “page blur” events recorded within Canvas. A page blur occurs when the page of focus during the quizzing activity becomes inactive for a long duration, for example when the student has left the quizzing system or is browsing other tabs. A feature called ‘page blur frequency’ was constructed as the total number of page blurs divided by the total time a student spends on the quizzing system.
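The sketch below illustrates one possible computation of this rate from an event-level log; the event label 'page_blur' mirrors the Canvas quiz-log entry, but the column names are assumptions for illustration.

```python
import pandas as pd

def page_blur_frequency(events: pd.DataFrame) -> pd.Series:
    """Page blurs per minute spent on the quizzing system, per student.

    Assumes an event-level log with 'student_id', 'event_type', and
    'duration_min' columns; all names here are illustrative.
    """
    is_blur = events["event_type"] == "page_blur"
    blur_counts = is_blur.groupby(events["student_id"]).sum()
    total_time = events.groupby("student_id")["duration_min"].sum()
    # Frequency = total page blurs / total time on the quizzing system
    return (blur_counts / total_time).rename("page_blur_frequency")
```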
Number of Attempts
Previous studies show that students’ course performance increased significantly with increased practice attempts (mean: first attempt – 58.3%; high attempt – 89.6%; p < 0.0001) in courses that used optional learning modules to encourage test-enhanced learning (Horn & Hernick, 2015). This feature captures the number of attempts a student made on each quiz. Unsupervised clustering (k-means) was then performed on the identified features, which captured and explained the most about group behaviors. k was set to 3, as higher values did not yield meaningful clusters with the available sample size.
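A minimal sketch of this clustering step, using scikit-learn’s KMeans with k = 3 as described above, is shown below; the standardization step is our assumption about reasonable preprocessing, not a detail reported here.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_students(feature_matrix):
    """Assign each student to one of three engagement clusters.

    feature_matrix is a students x features array (e.g., spacing, closeness
    to due date, page blur frequency, number of attempts).
    """
    scaled = StandardScaler().fit_transform(feature_matrix)  # put features on a common scale
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)  # k = 3, as in the study
    return kmeans.fit_predict(scaled)  # one cluster label per student
```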
Question 2: How do instructors respond to findings related to students’ learning behaviors?
This is ongoing work. We intend to present data-derived behaviors from the LMS to the instructors and to interview them about how they would use these insights to inform students or improve their pedagogy.
Results
We found significant correlations between quiz-usage behaviors and both short-term learning (the midterm exam that immediately followed the quiz) and long-term learning (the final exam).
- short-term learning and off-task behavior: r(133) = -.243, p < .01
- long-term learning and off-task behavior: r(133) = -.261, p < .01
- short-term learning and early start of the quizzes: r(133) = .283, p < .01
- long-term learning and early start of the quizzes: r(133) = .276, p < .01
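For illustration, the sketch below shows how Pearson correlations of this form could be computed; the dataframe columns are hypothetical stand-ins for the study’s variables, not the actual names.

```python
import pandas as pd
from scipy.stats import pearsonr

def report_correlations(df: pd.DataFrame) -> None:
    """Print Pearson correlations between engagement features and exam scores.

    Column names ('page_blur_frequency', 'hours_before_due', 'midterm_score',
    'final_score') are hypothetical stand-ins for the study's variables.
    """
    pairs = [
        ("page_blur_frequency", "midterm_score"),  # off-task vs. short-term learning
        ("page_blur_frequency", "final_score"),    # off-task vs. long-term learning
        ("hours_before_due", "midterm_score"),     # early start vs. short-term learning
        ("hours_before_due", "final_score"),       # early start vs. long-term learning
    ]
    for x, y in pairs:
        r, p = pearsonr(df[x], df[y])
        # Degrees of freedom for Pearson's r are n - 2
        print(f"{x} vs. {y}: r({len(df) - 2}) = {r:.3f}, p = {p:.4f}")
```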
Three clusters were identified (using k-means clustering):
- high engagement (minimum off-task behavior and early start of the quiz)
- medium engagement (mixed behavior)
- low engagement (maximum off-task behavior and a start close to or after the deadline).
Exam scores within these clusters trended in the expected direction. Across sections and quizzes, the high engagement group scored highest on the midterm and final exams, and the low engagement group scored lowest.
Discussion
Previous studies show that learner self-reflection is associated with increased learning (Merceron & Yacef, 2005). Our study provides learners with an opportunity to reflect on their engagement with online formative assessments. Additionally, students may need constant external feedback from instructors to successfully engage in formative assessments. We theorize that, since the independent variables chosen in this study are guided by learning theory, the results may be more informative and actionable, meaning educators can intervene and recommend changes in learners’ behavior. Feedback from assessments may be used by learners and teachers to address shortfalls in learning, teaching, and curriculum. Previous studies show that learning design influences academic performance (Rienties, Toetenel, & Bryan, 2015). We expect that instructors may use these results to examine the current design and implementation of their quizzes and/or to consider alternate designs.