Finding the sweet spot when blending online and face-to-face learning

Audience Level: 
Intermediate
Institutional Level: 
Higher Ed
Strands (Select 1 top-level strand. Then select as many tags within your strand as apply.): 
Abstract: 

In this session I will address the question of whether there is an optimal mix of online and face-to-face time that results in the highest average course performance. To address the question, the literature on student performance in blended learning overall is reviewed, followed by an analysis of the literature on performance under different blends of online and face-to-face time. The session will be of interest to researchers as well as those involved in teaching or designing blended courses.

Extended Abstract: 

Over the past several years a consensus has emerged that students, on average, perform better in blended courses than in fully online or face-to-face courses. Stated differently, students in courses with no online time, and those with 100% online time, do not perform as well as those in blended courses with a mix somewhere between these two extremes. This finding raises the central question explored in this paper: is there an optimal mix of online and face-to-face time that results in the highest average course performance? To address the question, the literature on student performance in blended learning overall is reviewed, followed by an analysis of the literature on performance under different proportions of online time.

Evidence of the effectiveness of blended learning overall comes from the evaluation of the performance of a large number of students over an extended period at the University of Central Florida (UCF) and from four major meta-analyses. At UCF, Moskal, Dziuban, and Hartman (2013) report that, in a sample of 913,688 students, 91% of students in blended courses across campus in a variety of subject areas received a grade of “C” or higher, whereas the success rate for both fully online and face-to-face courses was approximately 88%.

The most widely cited meta-analysis was carried out by Means, Toyama, Murphy, and Baki (2013). They found that performance in blended and fully online classes together was significantly higher than in face-to-face classes (g+ = .20, p < .001). When effect sizes were calculated for blended and fully online courses separately, blended course performance was higher than face-to-face (g+ = .35, p = .001), while the difference between fully online and face-to-face was not significant. A similar effect size was found by Bernard, Borokhovski, Schmid, Tamim, and Abrami (2014), who found that performance in blended courses was significantly higher than in face-to-face courses (g+ = .33, p = .001). Spanjers, Könings, Leppink, Verstegen, de Jong, Czabanowska, and van Merriënboer (2015) compared blended and traditional learning in courses that used objective or subjective measures of performance. Their findings favoured blended learning overall, and revealed an effect size for objective measures that was slightly higher than for subjective measures (g+ = .34, p < .05 versus g+ = .27, p < .05). The fourth meta-analysis of interest was carried out by Vo, Zhu, and Diep (2017), who compared student performance in blended versus traditional courses in STEM and non-STEM fields. The researchers found an overall effect size in favour of blended learning over traditional instruction (g+ = .385, p < .001); however, a larger effect was found for STEM courses (g+ = .496) than for non-STEM courses (g+ = .210). Thus, these four meta-analyses together found an average effect size of approximately 0.35 for blended learning compared to face-to-face instruction. Assuming normally distributed scores, this effect size suggests that the average student in the blended learning group exceeds roughly 64% of students in the face-to-face group, a difference that most faculty would consider very meaningful.
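The percentile interpretation of an effect size used above corresponds to Cohen's U3 statistic, which maps a standardized mean difference to the proportion of the comparison group scoring below the average member of the treatment group via the standard normal CDF. A minimal sketch of the calculation, using Python's standard library (the helper name is mine, not drawn from any of the cited meta-analyses):

```python
from statistics import NormalDist

def u3(effect_size: float) -> float:
    """Cohen's U3: the fraction of the comparison group that the average
    treated student exceeds, assuming normal score distributions with
    equal variance in both groups."""
    return NormalDist().cdf(effect_size)

# Average effect size across the four meta-analyses (.35, .33, ~.30, .385)
g = 0.35
print(f"U3 for g+ = {g}: {u3(g):.0%}")  # works out to roughly 64%
```

By this conversion, an effect size of zero corresponds to U3 = 50% (no advantage), and larger blended-learning effects such as Vo et al.'s g+ = .496 for STEM courses approach 69%.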

Very little research has examined how the proportion of time spent online in a blended course relates to performance. Zhao, Lei, Yan, Lai, and Tan (2005), in a meta-analysis of 51 studies comparing online and face-to-face courses, were the first to raise the issue. The researchers found no significant difference between the two modes of learning; however, they also coded the studies in the sample on several additional variables, one of which was “media involvement” (p. 1848). The coding ranged from 1 (no technology used) to 10 (instruction delivered completely with technology). By media involvement the researchers meant the amount of time devoted to online activities in a regular face-to-face course. They found that performance in studies classified as having “medium” media involvement (i.e., coded 6 to 8) was significantly higher than in face-to-face instruction (d = .50, p < .001); studies with “high” media involvement (i.e., coded 9 to 10) had a smaller yet still significant effect size (d = .07, p < .001) (p. 1860). An implication of Zhao et al.’s study is that performance in courses with between 60% and 80% of time online is higher than in courses where more time is spent online.

Two of the meta-analyses cited above also addressed the amount of time spent online in blended courses. Means et al. (2013) coded time-on-task into two categories in their meta-analysis: equal amounts of time spent online and face-to-face (i.e., 50% online), and more time online than face-to-face (i.e., >50% online). Their findings approached significance (Q = 3.62, p = .06), favouring more time online. Bernard et al. (2014) also examined the proportion of time spent online as a moderating variable in their study of 117 effect sizes. They considered two categories, up to 30% of course time and 30% to 50% of course time (they did not consider over 50%). They found a “definite trend” suggesting that more online course time results in higher achievement, but it was not significant (Q = .47, p = .49) (p. 112). The researchers, looking at both their own data and the data from Means et al., found the difference promising enough to recommend further studies examining the effects of the amount of time spent online in blended courses.

Following Bernard et al.’s (2014) recommendation, Owston and York (2017) compared the performance of students in four different blends: Low (27% to 30% online), Medium (36% to 40% online), High (50% online), and Supplemental blends (100% face-to-face plus weekly online tutorial sessions). Using a sample of 2106 students across 20 undergraduate social sciences and humanities courses, they found that those in the High and Medium blends performed significantly better than their peers in the other two blends, but no difference was found between the High and Medium blends.

No other studies were identified that compared student performance under different proportions of online and face-to-face time; however, there is some evidence about the proportion of time that students prefer. Owston and York (2017) found a modest but significant relationship between blend proportion and student perceptions: students in the Medium and High blends tended to have more favourable perceptions of blended learning than their peers in the Low and Supplemental blends. Asarta and Schmidt (2015) studied an experimental blended version of a university calculus course in which all lectures were available online but students remained free to attend in-class lectures as well; they reported that students chose to attend an average of 50% of the face-to-face classes.

In summary, research suggests that proportions of online time between 36% (Owston & York, 2017) and 80% (Zhao et al., 2005) appear to result in the highest performance in blended courses. Both Means et al. (2013) and Bernard et al. (2014) suggest that at least a 50% blend is necessary, although their findings only approached significance. Therefore, on the basis of existing performance evidence as well as student preferences, a mix of roughly 50% online and 50% face-to-face time would appear to be most appropriate for enhancing student performance. More research is needed to test this hypothesis; ideally, such research should examine the full range of blends from 36% to 80%.

In the full version of this paper more details on existing research will be provided. The conclusions of this study have direct implications for practice: they will provide faculty and course designers with guidance on how much time might be devoted to online activities in a blended course. This guidance will serve as a point of departure for the design of blended courses, from which modifications can be made depending on a variety of local circumstances, including subject area, instructor preferences, and student needs.

Throughout the presentation I will interact with the audience by asking them to post comments and questions on http://todaysmeet.com and pausing periodically to respond and discuss. I will also welcome oral questions from the audience and will encourage discussion with participants about their experiences using different blends of online and face-to-face instruction.

References

Asarta, C. J., & Schmidt, J. R. (2015). The choice of reduced seat time in a blended course. Internet and Higher Education, 27, 24-31. http://dx.doi.org/10.1016/j.iheduc.2015.04.006

Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: From the general to the applied. Journal of Computing in Higher Education, 26(1), 87-122. http://dx.doi.org/10.1007/s12528-013-9077-3

Means, B., Toyama, Y., Murphy, R. F., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47. Retrieved from http://www.tcrecord.org/library/content.asp?contentid=16882

Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended learning: A dangerous idea? Internet and Higher Education, 18, 15-23. http://dx.doi.org/10.1016/j.iheduc.2012.12.001

Owston, R. D., & York, D. (2017). The nagging question when designing blended courses: How much time should be devoted to online activities? Manuscript submitted for publication.

Owston, R. D., York, D., & Murtha, S. (2013). Student perceptions and achievement in a university blended learning strategic initiative. Internet and Higher Education, 18, 38–46. http://dx.doi.org/10.1016/j.iheduc.2012.12.003

Spanjers, I. A. E., Könings, K. D., Leppink, J., Verstegen, D. M. L., de Jong, N., Czabanowska, K., & van Merriënboer, J. J. G. (2015). The promised land of blended learning: Quizzes as a moderator. Educational Research Review, 15, 59-74. http://dx.doi.org/10.1016/j.edurev.2015.05.001

Vo, M. H., Zhu, C., & Diep, A. N. (2017). The effect of blended learning on student performance at course-level in higher education: A meta-analysis. Studies in Educational Evaluation, 53, 17-28. http://dx.doi.org/10.1016/j.stueduc.2017.01.002

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107(8), 1836–1884.


Session Type: 
Education Session