Research on learning technology efficacy lags far behind the explosive growth of available products. Rapid cycle technology evaluation is an emerging approach to generating timely evidence on the effectiveness of learning technologies for specific contexts and learners. Presenters will describe this approach and discuss possible applications in postsecondary education.
Faculty and administrators can choose from a growing number of educational technologies, ranging from comprehensive courseware, to applications targeting specific student skills or content knowledge, to tools aimed at supporting faculty productivity and new teaching practices. Many of these technologies, especially newer ones, lack robust evidence to support their claims of effectiveness, and when evidence does exist, it may not be relevant for your institution or your students. As a result, adoption decisions are often based on marketing materials, peer recommendations, and instinct.
Moreover, few institutions have supports in place to systematically evaluate the effectiveness of learning technologies. It is typically left to individual faculty members to gauge whether a new resource “worked” as intended, or whether different approaches to implementation might have produced better results. Faculty members and departmental committees may compare aggregate course outcomes before and after the introduction of a new technology, but these evaluations rarely control for students’ backgrounds or analyze impacts on subgroups of students, such as underrepresented minorities or students with lower (or higher) incoming GPAs. Consequently, decision makers may have a general sense of whether course outcomes rose or fell with the introduction of the technology, but not enough information to determine why or how best to move forward.
Rapid cycle technology evaluation is an emerging approach to generating timely evidence on the effectiveness of learning technologies, and of strategies for implementing them, in an institution's specific context and with its students. This approach has been championed by researchers involved with the Brookings Institution’s Hamilton Project across a variety of social science research domains, including education. Rapid cycle evaluations have also been embraced by the U.S. Department of Education as a way to address the dearth of rigorous evidence supporting learning technology products in K-12 education.
The core concept of rapid cycle evaluation is to design studies that address narrow research questions and can be implemented over short time frames and at low cost. These evaluations aim to answer specific questions about the impact of a technology (or of an implementation approach) for a particular set of users in a particular educational context. They are more likely to point to the next set of questions than to provide conclusive answers, and they can play a key role in continuous improvement processes.
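To make the idea concrete, the sketch below shows what a minimal rapid cycle comparison might look like in Python. The scenario, file name, and column names are hypothetical and are not drawn from the studies described here: pilot sections use the new courseware, comparison sections do not, and the model controls for incoming GPA while asking whether the estimated effect differs for first-generation students.

```python
# A minimal sketch of a rapid cycle comparison, under assumed (hypothetical) data:
# each student record carries a treatment flag, a final exam score, an incoming
# GPA, and a first-generation indicator.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file; column names are illustrative, not from any real system.
df = pd.read_csv("course_outcomes.csv")  # used_tool, exam_score, incoming_gpa, first_gen

# Covariate-adjusted comparison: controls for incoming GPA rather than relying on
# a raw before/after average, and the interaction term estimates whether the
# effect differs for first-generation students.
model = smf.ols("exam_score ~ used_tool * first_gen + incoming_gpa", data=df).fit()
print(model.summary())
```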
Data for rapid cycle evaluations can come from administrative systems and course assessments. Usage logs from the learning technologies themselves can also be a valuable source of outcome data, and they can show how students and other users interact with a technology, revealing patterns of growth, engagement, and learning behavior.
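As an illustration of how these sources might be combined, the sketch below aggregates hypothetical usage logs to one row per student and joins them to administrative records; the file and column names are assumptions made for the example, not references to any real system.

```python
# A sketch of combining usage logs with administrative data, assuming the two
# sources share a student identifier; all names here are hypothetical.
import pandas as pd

logs = pd.read_csv("tool_usage_log.csv")      # student_id, session_minutes, items_attempted
admin = pd.read_csv("registrar_extract.csv")  # student_id, incoming_gpa, final_grade

# Summarize engagement per student from the raw usage events.
engagement = (
    logs.groupby("student_id")
        .agg(total_minutes=("session_minutes", "sum"),
             items_attempted=("items_attempted", "sum"))
        .reset_index()
)

# Join engagement measures to administrative outcomes; a left join keeps students
# with no log activity so that non-use remains visible in the analysis.
analysis = admin.merge(engagement, on="student_id", how="left").fillna(
    {"total_minutes": 0, "items_attempted": 0}
)
print(analysis.head())
```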
How might these methods be used by postsecondary faculty, departments, and institutional researchers to inform decisions about technology adoption and to enable systematic improvement in implementation? MJ Bishop and Rebecca Griffiths will describe the tools and models being developed through the Department of Education initiative, as well as through an effort sponsored by the Bill & Melinda Gates Foundation to create tools that postsecondary institutions can use to evaluate courseware adoptions.
The presenters bring two different perspectives. Griffiths serves as SRI’s principal investigator for the Department of Education initiative and has worked with colleges and universities to conduct numerous effectiveness studies of learning technology. Bishop leads a prominent university system center for academic innovation; in this capacity she works closely with faculty and staff across a diverse system of public universities and colleges and has a deep understanding of prevailing practices for evaluating the effectiveness of learning technologies.
The presenters are eager to hear from audience members about the challenges various constituents face in evaluating technology products and how new tools and resources could help close these gaps. They will spend a significant portion of the session inviting audience members to share their experiences and discussing ways that rapid cycle evaluation might enable more evidence-based decision making on their campuses.
Some specific topics include:
- How are rapid cycle evaluations different from traditional efficacy studies?
- What are some applications in postsecondary education?
- Who within postsecondary education could take advantage of this approach?
- What kinds of analytical tools would be most useful to these constituents?
- How can system data from technology providers be used in rapid cycle evaluations?