Watch 📺 Fiona Fidler gives RIOT Science talk on all things repliCATS
Earlier this month, Fiona Fidler gave a talk about the repliCATS project and some of our plans for its future. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can watch it (or re-watch it!) via this link (YouTube)!
"If we could predict replicability acurately, this would be really useful considering how labour-intensive actual replication studies themselves are"
If you missed @fidlerfm's talk on @replicats, don't fret. You can watch it (or rewatch it!) here:
📺 https://t.co/f8XzKl3f7o
— RIOT Science Club (@riotscienceclub), July 27, 2020
Talk abstract
The repliCATS project evaluates published scientific research. As the acronym (Collaborative Assessments for Trustworthy Science) suggests, repliCATS is a group activity, centred on assessing the trustworthiness of research claims. Reviewers first make private individual assessments of a research claim, judging its comprehensibility, the prior plausibility of the underlying effect, and its likely replicability. Reviewers then share their judgements and reasoning with group members, providing both new information and the opportunity for feedback and calibration. The group interrogates differences in opinion and explores counterfactuals. After discussion, there is a final opportunity to privately update individual judgements. Importantly, the repliCATS process is not consensus-driven: reviewers can disagree, and their ratings and probability judgements are mathematically aggregated into a final assessment. At the moment, the repliCATS platform exists primarily to predict replicability. Launched in January 2019 as part of the DARPA SCORE program, the project elicited group assessments, and captured the associated reasoning and discussion, for 3,000 published social science research claims across 8 disciplines (Business, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology) over 18 months. The repliCATS team are now working to extend the platform beyond merely predicting replicability, to deliver a more comprehensive peer review protocol. Anticipated advantages of a repliCATS process over traditional peer review include: inbuilt training and calibration; feedback that is intrinsically rewarding; an inherently interactive process, but one which does not implicitly rely on 'consensus by fatigue'; and a process that actively encourages interrogation. This talk will present some preliminary findings and discuss the future of the platform.
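For readers curious about what "mathematically aggregated" means in the final step of the process, here is a minimal sketch in Python. The abstract does not specify the aggregation function, so the simple arithmetic mean used here, along with the function name and the example values, is an illustrative assumption only, not the project's actual method.

```python
# Minimal sketch of the final aggregation step described above.
# ASSUMPTION: the abstract only says judgements are "mathematically
# aggregated"; the arithmetic mean used here is a stand-in for
# whatever aggregation rule the project actually applies.

from statistics import mean


def aggregate_replicability(post_discussion_probs: list[float]) -> float:
    """Combine reviewers' final (post-discussion) probability judgements
    that a claim would replicate into a single group estimate."""
    if not all(0.0 <= p <= 1.0 for p in post_discussion_probs):
        raise ValueError("probabilities must lie in [0, 1]")
    return mean(post_discussion_probs)


# Hypothetical example: five reviewers privately update their
# probability judgements after the group discussion round.
final_judgements = [0.55, 0.60, 0.40, 0.72, 0.65]
print(f"Group estimate: {aggregate_replicability(final_judgements):.2f}")
```

Note that because aggregation is applied to the reviewers' final private judgements rather than to a negotiated group number, reviewers never have to converge on a single answer, which is what lets the process avoid "consensus by fatigue".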