Best Practices and Challenges of In-Person and Online Student Data Collection

Audience Level: 
All
Institutional Level: 
Higher Ed
Abstract: 

Are you interested in how student feedback differs between traditional and online courses? Drawing on over 800 mid-semester assessment program (MAP) assessments, our research compares the nature and quality of student feedback gathered via in-person classroom visits (77%) and online surveys (23%), using traditional content analysis and computerized text analysis (CTA).

Extended Abstract: 

In 2014, our teaching center at X University (XU) began offering a mid-semester assessment program (MAP) for instructors, with the goal of supporting them by collecting formative feedback about their teaching while there was still time to course-correct before the term ended. MAPs have been shown to improve student perceptions, motivation, and satisfaction regarding teaching and learning, and instructors find them a credible and useful improvement process (Clark & Redmond, 1982; Dangel & Lindsay, 2014; Diamond, 2004; Finelli, Pinder-Grover, & Wright, 2011; Sozer, Zeybekoglu, & Kaya, 2019). Typically, MAPs incorporate an in-person classroom visit by a facilitator who collects feedback about the course from students (Clark & Redmond, 1982; Dangel & Lindsay, 2014; Diamond, 2004; Finelli et al., 2011). However, distance-learning (online/off-campus), large-lecture, and evening classes present challenges for in-person data collection. Our university uses online surveys to collect MAP data in these contexts, which allows us to offer the program to all instructors regardless of teaching modality, class size, classroom location, or time of day. However, few research studies examine methodological differences in mid-semester assessment processes, including data collection. Addressing this gap, our research explores differences in the nature and qualities of the feedback collected by these two methods and the implications for decision-making around MAPs.

Our center has completed 809 MAP assessments over 10 semesters, 23% of which were conducted via online surveys (n = 189). Our research compares student feedback collected by the two methods in terms of its nature (which aspects of teaching or learning are visible in the feedback) and qualities (number of comments, number of words, and semantic content). Our data analysis methods include content analysis carried out by the research team as well as Linguistic Inquiry and Word Count (LIWC), a validated computerized text analysis (CTA) tool, which helps identify semantic qualities related to cognition, emotion, motivation, and perception in the vocabulary of the feedback (Pennebaker, Boyd, Jordan, & Blackburn, 2015).
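To make the CTA approach concrete: LIWC-style tools score a text by counting how many of its words fall into psychological category dictionaries and reporting each category as a share of total words. The minimal Python sketch below illustrates only that counting principle; the category word lists and the sample comment are hypothetical stand-ins, not LIWC's proprietary dictionaries or interface.

# Minimal sketch of dictionary-based computerized text analysis (CTA),
# illustrating the word-category counting principle behind tools like LIWC.
# The category word lists below are hypothetical examples only.
import re
from collections import Counter

CATEGORIES = {
    "cognition": {"think", "know", "because", "understand", "reason"},
    "emotion": {"happy", "frustrated", "enjoy", "worried", "love"},
}

def category_percentages(text):
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, vocab in CATEGORIES.items():
            if word in vocab:
                counts[category] += 1
    total = len(words) or 1  # guard against empty comments
    return {cat: round(100 * counts[cat] / total, 2) for cat in CATEGORIES}

comment = "I enjoy the group work because it helps me understand the material."
print(category_percentages(comment))
# -> {'cognition': 16.67, 'emotion': 8.33}  (2 and 1 hits out of 12 words)

In practice, LIWC2015 reports scores across dozens of validated categories and subcategories; the sketch shows only the underlying percentage-of-total-words mechanic.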

Participants will understand the affordances and limitations of both data collection methods for MAPs and will be able to use this knowledge to support decision-making appropriate to their context and resources while promoting inclusive participation in MAPs. They will also gain exposure to the use of CTA in Scholarship of Teaching and Learning (SoTL) research and insight into how CTA can benefit researchers by reducing the time needed to analyze data and by detecting patterns of thought or meaning that might otherwise go unnoticed.

After general introductions and an informal survey of participants' experiences with MAPs, our presentation will introduce XU's existing MAP model, followed by a discussion of our research findings. We will then lead an interactive discussion about how these findings can inform evidence-based decisions around data collection for MAPs and how participants can apply similar approaches on their campuses or within their departments to close the distance between students and instructors.

Clark, D., & Redmond, M. (1982). Small group instructional diagnosis (ERIC Document Reproduction Service No. ED217954).

Dangel, H., & Lindsay, P. (2014). What are our students (really) telling us? The Journal of Faculty Development, 28(2), 27-33.

Diamond, M. R. (2004). The usefulness of structured mid-term feedback as a catalyst for change in higher education classes. Active Learning in Higher Education, 5(3), 217-231.

Finelli, C. J., Pinder-Grover, T., & Wright, M. C. (2011). Consultations on teaching: Using student feedback for instructional improvement. In Advancing the culture of teaching at a research university: How a teaching center can make a difference (pp. 65-79).

Pennebaker, J. W., Boyd, R. L., Jordan, K., & Blackburn, K. (2015). The development and psychometric properties of LIWC2015. Austin, TX: University of Texas at Austin.

Sozer, E. M., Zeybekoglu, Z., & Kaya, M. (2019). Using mid-semester course evaluation as a feedback tool for improving learning and teaching in higher education. Assessment & Evaluation in Higher Education, 1-14.

Conference Session: 
Concurrent Session 9
Conference Track: 
Teaching and Learning Effectiveness
Session Type: 
Education Session
Intended Audience: 
Faculty
Training Professionals