Leaderboards and Points: A Tale of Two Social Learning Platforms

Abstract: 

This study examines how leaderboards and points are used in two different social learning platforms: Curatr and Yellowdig. We posited that the instructors who most struggled to build an effective points system did so because they viewed points as a control mechanism rather than as an engagement tool.

Extended Abstract: 

Game mechanics are being touted as an effective way to provide feedback and increase student engagement. As we look at ways to implement game mechanics, we must remember that these are tools that must be deployed strategically in order to be effective. In this exploratory study we examine two learning platforms that incorporate game mechanics in their design. Both platforms are used at FHSU, and although each is promoted as a social learning platform, they function quite differently.

            Curatr is designed using a MOOC-like structure. Learning modules are delivered sequentially, and the primary content and expectations are highly structured. A module typically begins by requiring a student to interact with some form of content: an article, a video, or an interactive activity. Once the initial interaction is complete, the student is asked to respond to a ‘prompt’ related to the learning object. These responses are public and shared with other members of the learning community. Students are expected to comment on, reply to, or upvote others’ comments, and to encourage this behavior Curatr automatically awards points for each of the expected activities.

            Yellowdig is also a social learning platform that promotes collaboration, idea sharing, and peer learning. Students earn points for activities such as posting, commenting, liking, and tagging, as well as for the feedback their posts receive; these points can be synced to the Grade Center in the learning management system (LMS). The difference between the platforms lies in their structure and primary focus: in Curatr the focus is on instructor-generated content, while in Yellowdig the focus is on student-created or student-curated content. This simple difference also shifts the locus of power.

            One of the main ways this difference manifests is in how faculty design the distribution of points. The process in Curatr is fairly straightforward. Points are primarily awarded when students interact in the expected way with the learning objects the instructor creates. For instance, if a student watches an assigned video, they earn a predetermined number of points. If they then respond to the prompt, they earn an additional predetermined number of points. They can earn still more points when other students comment on, or upvote, their response to a prompt.
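As an illustration only (not Curatr's actual implementation), the fixed-total scheme described above can be sketched as a simple tally in which every point value is predetermined by the instructor; all activity names and point values here are hypothetical:

```python
# Hypothetical sketch of a Curatr-style fixed point scheme.
# Point values are set in advance by the instructor when each
# learning object is created; the student's total is just the
# sum of predetermined awards for logged activities.
POINTS = {
    "view_content": 10,      # e.g., watching an assigned video
    "respond_to_prompt": 15, # replying to the instructor's prompt
    "receive_comment": 5,    # another student comments on the response
    "receive_upvote": 2,     # another student upvotes the response
}

def award(activity_log):
    """Sum the predetermined points for each logged activity."""
    return sum(POINTS[activity] for activity in activity_log)

log = ["view_content", "respond_to_prompt", "receive_upvote", "receive_upvote"]
print(award(log))  # 10 + 15 + 2 + 2 = 29
```

Because every point value is fixed up front, the instructor can compute the maximum possible score for a module before any student participates, which is the certainty the Yellowdig model lacks.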

            The process in Yellowdig is different. A Yellowdig board provides students a more open environment. There is no fixed set of learning objects laid out for students to interact with, and students do not need to rely on the instructor to guide them through the content. They can create any number of posts related to the topic of the course. As in Curatr, they can earn points for comments they make as well as comments they receive, and they can also earn points when other students upvote their posts or comments.

            This level of autonomy creates a degree of uncertainty for the instructor. In the Curatr model, the instructor knows the total number of points available to a student because the points are built into the learning objects they have created. In Yellowdig, students who choose to participate heavily can rack up a large number of points. This possibility has many faculty looking for ways to control the point distribution in a Yellowdig board.

            The current study looks at how the more open-ended knowledge-building process afforded by the Yellowdig platform leads many faculty to use points to exert control rather than to provide encouragement through formative social feedback. In this study we engaged with faculty as they attempted to build a point award system for their Yellowdig courses. We posited that the instructors who most struggled to build a satisfactory points system did so because they viewed points as a control mechanism rather than as an engagement tool.

Design Frameworks

Leaderboards and points are widely used elements of gamification strategies. Gamification is defined as the use of game elements in non-game environments to encourage users to behave in a certain way (Deterding et al., 2011). Most gamified activities include three basic parts: “goal-focused activity, reward mechanisms, and progress tracking” (Glover, 2013, p. 2000). Educational gamification applies game elements to learning environments, and this approach has been shown to have a positive effect on student motivation in both K-12 and higher education (Erenli, 2013; Hamari et al., 2014; Jensen, 2012; Nah et al., 2014).

Two research questions were used to guide this study:

  1. How are game mechanics being deployed in social learning environments?
  2. Are experience points (XPs) and grades being conflated in the design process?

Methods

This study used both a case study and semi-structured interviews to collect data, given the exploratory nature of the research. The Curatr platform was used to develop a gamified online training course for new faculty orientation (NFO), designed to help new faculty members build connections with one another through open discussions and interactive activities on Curatr. A user satisfaction survey (a 5-point agree/disagree Likert scale) was distributed to the faculty to assess the usefulness of this social learning platform and the influence of the points and leaderboard.

The Yellowdig platform was piloted by a number of faculty seeking to create more social and interactive blended and online course environments. Two instructional designers and ten faculty members were interviewed. The interview questions were open-ended, and each interview lasted approximately half an hour. We also reviewed the Yellowdig boards these faculty created to see how points were deployed and to compare student interactions across boards.

Results

The results gathered from the participating faculty members in the NFO Curatr course indicated that they enjoyed the gamified experience and would consider using the platform in their future teaching. However, faculty rated their experience with the Curatr leaderboard lowest.

The interview results showed that faculty used Yellowdig for three purposes: as an alternative to the LMS’s built-in discussion board; as a question-and-answer forum; and as a social platform where students share extracurricular learning resources and thoughts with their peers (to earn bonus points in the course).

Most faculty members set a weekly maximum number of points as well as a cumulative semester score, but they varied in how they set up the grading system in Yellowdig. For those using Yellowdig as an alternative discussion board, the grading rubrics were very strict; for the Q&A forum and bonus-point uses, the rubrics were less stringent.
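The weekly-cap arrangement most faculty adopted can be sketched as follows; this is a hypothetical illustration of the logic described above, not Yellowdig's actual settings or code, and the cap and point values are invented:

```python
# Hypothetical sketch of a weekly point cap with a cumulative
# semester score, as most faculty in this study configured it.
WEEKLY_MAX = 20  # illustrative cap, not an actual Yellowdig default

def weekly_score(raw_points):
    """Cap a student's raw weekly points at the weekly maximum."""
    return min(raw_points, WEEKLY_MAX)

def semester_score(weekly_raw_points):
    """Cumulative semester score: the sum of capped weekly scores."""
    return sum(weekly_score(p) for p in weekly_raw_points)

# A highly active student's surplus in week 1 does not carry over.
print(semester_score([35, 12, 20, 8]))  # 20 + 12 + 20 + 8 = 60
```

The cap is one way faculty limited the open-ended point accumulation noted earlier: however many posts a student makes, only a bounded number of points per week reaches the gradebook.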

The results of this study also showed that the most difficult part of using Yellowdig was the perceived need, expressed by the majority of faculty, to have the points coincide with graded formative assessments. Faculty were concerned about the proportion of Yellowdig scores in students’ overall grades, and most transferred the exact score a student earned in Yellowdig into the LMS gradebook. Most faculty gave little thought to the leaderboard function when they designed their courses, although, on reflecting on their Yellowdig courses, they considered it for future designs.

Discussion

In the current literature on educational gamification, the effectiveness of leaderboards is contested. Some research suggests that this game element motivates learners the most (O’Donovan et al., 2013), while other research suggests that leaderboards may benefit only aggressive, hardcore player types and may harm learners who are less competitive and more status-seeking (Glover, 2013). The faculty’s feedback on the Curatr leaderboard is consistent with this research.

Betts et al. (2013) studied an online Curatr course and found that the learners who earned the highest XPs were not the ones who earned the highest grades on the assignments, although the learners who earned the lowest XPs were the ones who did the worst on their assignments. Their study shows that learners’ quality and performance are not fully reflected by their XPs. The “points” that learners earn in Yellowdig should therefore be treated as XPs rather than as “grades” in the LMS gradebook.

XPs in gamified learning activities are an indicator that lets learners track their progress through the levels; they belong to the progress-tracking part of gamification (Glover, 2013). The points used in a grading scale, on the other hand, are final results that tell learners where they stand. According to Glover (2013), points and leaderboards in gamified learning environments should be kept separate from formal assessment, because gamification should be an approach to increasing learner motivation, not another mechanism for grading learners. By understanding the different functions of XPs (points) and grades, future practitioners and designers can stop mapping the point systems of gamified activities onto the grading rubrics used in traditional learning environments.

Notes: 

Duplicate submission. Withdrawn 11/3/16.

Conference Track: 
Pedagogical Innovation
Session Type: 
Education Session
Intended Audience: 
Design Thinkers
Faculty
Instructional Support