Extending the Definition of Learning Experience Design (LXD) Through a Study of a Virtual Mentoring Training for Ethnically and Racially Diverse Women
Abstract
Examining learner experiences is an important step in design; accordingly, this presentation reviews the findings of a learning experience design (LXD) study that examined how content (i.e., instructional design) and user experience (UX) supported the achievement of identified outcomes in a virtual STEM peer mentoring training for ethnically and racially diverse women. A discussion of how the findings expand the constructs of LXD follows.
A disparity exists in science, technology, engineering, and mathematics (STEM) degrees and careers among genders and racial populations (National Science Foundation [NSF], 2019). Mentoring has become an intervention to promote STEM engagement, matriculation, and persistence for these underrepresented populations (Carlone & Johnson, 2007; Hill et al., 2010; National Academies of Sciences, Engineering, and Medicine [NASEM], 2019), with virtual peer mentoring emerging as an approach that offers women and ethnic and racial minority (ERM) students the opportunity to participate in mentoring (Zambrana et al., 2015). Virtual peer mentoring gives women and ERM students access to mentors who match their demographic characteristics when such mentors are otherwise inaccessible in their locations. Virtual mentoring opportunities also provide the flexibility and convenience these underrepresented populations often need to access such programs. They are also a way that higher education institutions in the United States, which often do not have a positive history of accounting for the needs (e.g., busy schedules related to caregiving) of women and of racial and ethnic minority populations, can provide better access to an activity shown to improve educational success and persistence (NASEM, 2019).
Virtual peer mentoring programs differ significantly from face-to-face programs, particularly in terms of the user interface and learner experience. Moreover, users interact with peer mentoring programs on a variety of smartphones and other Internet-connected devices. Thus, when developing a virtual program, it is commonly considered a best practice to conduct a usability or learner experience study prior to the launch of the program.
Examining learner experiences is an important step in the design and development process. Learning experience design (LXD), while having no definitive definition, focuses on the creation of a learning experience that supports the achievement of learning outcomes (Tawfik et al., 2020). By combining the examination of content (i.e., instructional design) and user experience (UX), LXD assists designers and educators in understanding how best to design a learning space and environment, often applying the UX design principles of usefulness, usability, and desirability. Thus, LXD studies are often situated in the Technology Acceptance Model (TAM; Davis et al., 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003) frameworks. In the original TAM model, researchers purported that perceived usefulness (e.g., utility) and perceived ease of use (e.g., how easy a system is to use) are determinants of attitude toward, and intention to use, a technology, or in this case a learning environment (Davis et al., 1989). UTAUT aims to explain user intentions to use an information system and subsequent usage behavior (Venkatesh et al., 2003). While these models describe how the utility and ease of use of a learning environment are vital to how a learner interacts with that environment, they fall short as frameworks for LXD because they do not fully describe how a learner's interaction with the environment results in his or her ability to achieve identified learning outcomes. Therefore, researchers have proposed that there is a need to develop a standard LXD definition and to identify specific constructs that frame LXD. Tawfik et al. (2020) thus proposed a framework and nine LXD constructs based on an LXD study that used a cognitive think-aloud method to examine STEM students' (mainly White males') interaction with, and ability to complete, predetermined tasks on the website ElectronixTutor.
Thus, this study sought to examine these codes and their usage with another population and learning environment in order to engage in the conversation about developing culturally and gender-relevant constructs that describe LXD.
The study also sought to examine how content (i.e., instructional design) and user experience (UX) support the achievement of identified outcomes in a virtual STEM peer mentoring training program. This training is to be implemented as part of a mentoring program for ERM women in STEM programs at two Historically Black Colleges and Universities (HBCUs). The study aimed to answer the following questions: How does the user describe her interaction with the virtual peer mentoring training interface? How does interaction with the virtual peer mentoring training interface facilitate building self-efficacy and mentoring competencies (e.g., learning experience design)? And, within the peer mentoring training, what constructs emerge to define the learning experience design?
For this study, a snowball sampling method was used to recruit a sample of seven ERM women who were enrolled in, or had recently graduated from, a STEM degree program. All participants were women; five identified as Black or African American, one as Hispanic or Latino, and one as White and Asian.
The researchers employed a remote synchronous usability testing protocol (Fan, 2020), which allowed us to test the training modules on participants' native devices (e.g., personal computers) while in a laboratory-type setting (e.g., Zoom meetings). Participants used their own computers for the study without any adjustment to their existing operating preferences, similar to field testing methodology. However, we used a controlled environment in order to facilitate the test sessions and create recordings for data analysis. This was an ideal protocol, as the training being tested was ultimately intended to be self-paced and remote (Zhang & Adipat, 2005). In addition, the global pandemic caused by COVID-19 created barriers to conducting the usability testing in a laboratory setting, where physical proximity could increase health risks. Andreasen et al. (2007) found, however, that remote testing in real time with a test monitor or researcher produced nearly equivalent results to traditional lab-based testing.
The testing was divided into two portions. First, we guided participants through a concurrent think-aloud protocol (Ericsson & Simon, 1993) with two training modules, asking them to verbalize their experience so that the researcher could simultaneously see their movement on the screen and hear their observations or feelings about what they encountered (Ericsson & Simon, 1993; Fan et al., 2019). In this manner, what the researcher saw and heard could be triangulated to strengthen the final data analysis (Fan et al., 2020).
Second, semi-structured interviews were used to determine the extent to which participants' engagement with module tasks and activities promoted development of self-efficacy in STEM, as well as competencies as either a mentor or mentee. All testing sessions, including both the think-aloud and the follow-up interview, were completed within 60-105 minutes. Both parts of the study were recorded with Zoom and transcribed for analysis.
Transcripts were examined in light of previously identified codes for learning experience design and via grounded theory coding procedures. Three hundred twenty-nine comments were initially coded by three of the researchers using the extant Interaction within the Learning Environment and Interaction within the Learning Space codes (nine codes total; Tawfik et al., 2020, pp. 14-15) for defining learning experience design. The researchers recognized that the codes did not always apply to the current data set and that additional codes, or revisions of existing codes, were needed. Therefore, all 329 comments were coded a second time using open coding for any reference to the efficacy, satisfaction, and effectiveness (e.g., competency or self-efficacy development) of the learning experience design. These open codes were discussed and aggregated by the researchers into 10 axial codes. The themes were compared with the nine codes that defined learning experience design according to the earlier Tawfik et al. (2020) study. Through a series of discussions, seven primary codes were agreed upon to describe the learning experience design of users in this study. After the seven codes were identified, the researchers conducted a third and final session of blind coding, in which three of the researchers individually coded each comment. Interrater reliability for this coding was calculated using Cohen's kappa and found to be good, with inter-rater agreement above 85%.
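To illustrate the reliability check described above, Cohen's kappa for a pair of raters can be computed from observed and chance-expected agreement. The following is a minimal Python sketch; the code labels and rater assignments are hypothetical examples, not the study's actual data (the study used three raters, for whom kappa would typically be computed pairwise and averaged, or replaced by a multi-rater statistic such as Fleiss' kappa).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to ten comments.
a = ["usability", "utility", "usability", "emotion", "utility",
     "usability", "emotion", "utility", "usability", "emotion"]
b = ["usability", "utility", "usability", "emotion", "usability",
     "usability", "emotion", "utility", "usability", "emotion"]
kappa = cohens_kappa(a, b)  # p_o = 0.9, p_e = 0.35, kappa ~= 0.846
```

Kappa discounts the agreement two raters would reach by guessing from their own code frequencies, which is why it is preferred over raw percent agreement for qualitative coding.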
We will discuss this study and its results, and how they can inform designers, faculty members, and others about ERM women's experience of interacting with a learning space and within a learning environment. We will also review how the results extend previous research (Tawfik et al., 2020) on the constructs identified to define learning experience design (see Table 1 below), along with how the data were analyzed and the codes developed. A discussion will follow on defining and developing constructs for learning experience design studies, especially in virtual STEM learning environments.
Table 1. Findings
(see https://www.dropbox.com/s/h6y2nlbi8s5lcqr/Table%201.png?dl=0)