Betting on Science: How to Identify the Scientific Evidence Worthy of Investment in Initiatives and Products

Audience Level: 
All
Special Session: 
Research
Leadership
Abstract: 

Learn to skeptically evaluate the science of “evidence based” practices and technologies.  “Bet” on studies in a stock exchange and let the market predict which evidence is likely to hold true and merit organizational investment in new initiatives and technologies.  Visit edu.tier1evidence.com to vote for the studies to be traded.

Extended Abstract: 

Much time, effort, and money can be saved by a knowing and skeptical evaluation of the scientific evidence used to support the development of, and investment in, new “evidence-based” initiatives, programs, products, and technologies. This session delivers the knowledge and skills stakeholders and researchers need to identify the characteristics of strong and weak scientific evidence in the social sciences. It is appropriate for anyone who stands to benefit from making evidence-based decisions in an educational setting, including educators, administrators, researchers, data scientists, accountability professionals, and technology purchasers and developers.


The foundation of science is replication. In fact, any “evidence-based” school initiative or learning technology is itself a replication of the evidence upon which it is based. The more closely the initiative or technology matches the methods and circumstances of the original work, the more likely it is to reproduce those desirable original results. This makes an evidence-based approach seem like a pretty good bet. Indeed, non-evidence-based initiatives and technologies carry a mere 8% chance of success.


As the only research method that can speak directly to causality, the experiment (randomized controlled trial) is the gold standard. Yet even within this top tier of evidence, some studies are better bets than others. In 2012, a group of social scientists conducted a mega-study in which they attempted to replicate 100 social science experiments published in top peer-reviewed journals. Over half of this top-tier evidence turned out to be fool’s gold, failing to show the same results the second time around.


In 2015, another group repeated this effort, replicating 21 more top-tier experiments. This time they also ran a “prediction market”: a stock exchange in which volunteer scientists could buy or sell “shares” in each study based on how reproducible it seemed. Of these studies, 62% were successfully replicated, and the activity of the scientist-traders produced a market that anticipated this outcome with uncanny precision: the market predicted that 63% of the studies would replicate.
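
The mechanics of such a market can be sketched with a logarithmic market scoring rule (LMSR), one standard automated market maker for prediction markets. This is an illustrative toy, not the exchange used in the study above; the liquidity parameter and the trades shown are hypothetical.

```python
import math

class LMSRMarket:
    """Toy binary prediction market (outcome: 'study replicates') run by a
    logarithmic market scoring rule. All parameters are illustrative."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity   # higher liquidity = prices move more slowly
        self.q_yes = 0.0     # outstanding YES shares ("it will replicate")
        self.q_no = 0.0      # outstanding NO shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * log(exp(q_yes/b) + exp(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        # Instantaneous YES price = the market's replication probability
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Buy shares of 'yes' or 'no'; returns the cost the trader pays."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket(liquidity=100.0)
market.buy("yes", 60)   # a trader who believes the study will replicate
market.buy("no", 20)    # a skeptic sells the finding short
print(f"implied replication probability: {market.price_yes():.2f}")
```

Prices start at 0.50 and drift toward whichever side traders buy, so the current YES price can be read directly as the crowd's probability that the study will replicate.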


Publishers, editors, peer reviewers, and scientists are all motivated to publish research that others will want to read. Yet, setting aside that excitement and the trends it produces, the replication prediction market showed that practitioners of science can recognize the characteristics of good studies, and so can you.

During the initial presentation, Dr. Talevich will discuss the characteristics of strong and weak scientific evidence to help you discern the difference. She will cover tiers of evidence, biases common in learning science and educational decision-making, and statistical tomfoolery such as p-hacking.
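
One flavor of p-hacking, measuring many outcomes and reporting only the one that “worked,” is easy to demonstrate with a simulation. The sample sizes and outcome counts below are arbitrary illustrations; every effect is null by construction, so any “significant” result is a false positive.

```python
import random
from statistics import NormalDist

random.seed(1)

def z_test_p(sample, n):
    """Two-sided p-value for the mean of n standard-normal draws (null is true)."""
    z = sum(sample) / n ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

def run_study(n_subjects=30, n_outcomes=20):
    """Measure several null outcomes, but report only the best p-value (the 'hack')."""
    p_values = [
        z_test_p([random.gauss(0, 1) for _ in range(n_subjects)], n_subjects)
        for _ in range(n_outcomes)
    ]
    return min(p_values)

trials = 2000
honest = sum(run_study(n_outcomes=1) < 0.05 for _ in range(trials)) / trials
hacked = sum(run_study(n_outcomes=20) < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate, one outcome: {honest:.0%}")  # close to the nominal 5%
print(f"false-positive rate, best of 20:  {hacked:.0%}")  # about 1 - 0.95**20 ≈ 64% in expectation
```

The honest study produces a spurious “finding” at roughly the advertised 5% rate, while cherry-picking the best of twenty outcomes does so most of the time, even though nothing real is being measured.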


Interactivity: During the individual reflection period, session participants will be given several learning science studies to evaluate. Participants will then trade shares in these studies in a prediction-market session, pricing each according to how reproducible they judge its findings to be.


Session Goals: At a minimum, participants will emerge with the ability to evaluate the science underlying their “evidence-based” organizational decisions. Furthermore, prediction markets have been shown to be remarkably accurate in predicting the reproducibility of research. Thus, the results of our trading session have the potential to identify which learning science concepts currently popular in education are reliable enough to merit organizational investment, and which, trendy as they may be, are not.


Before the conference, participants can vote on the education-relevant social science studies (e.g., learning science, psychology) they would like to trade in the market by visiting edu.tier1evidence.com. During the talk, participants will be given a secret code; after the conference, they can use it to view the results of their prediction market.

Conference Session: 
Concurrent Session 10
Conference Track: 
Research: Designs, Methods, and Findings
Session Type: 
Present and Reflect Session
Intended Audience: 
All Attendees