Home page
The repliCATS project aims to crowdsource evaluations of the credibility of published research in eight social science fields: business research, criminology, economics, education, political science, psychology, public administration, and sociology.
In phase 2, the repliCATS project is about reimagining peer review as a structured deliberation process.
In 2020, we completed assessing the replicability of 3000 published claims from eight social & behavioural science fields. Our participant groups achieved 73% classification accuracy for replicated claims (an AUC > 0.7). This was phase 1.
In 2021, we will begin phase 2 of the SCORE program. In this next phase of research we will focus on evaluating a broader set of “credibility signals”—from transparency and replicability, to robustness and generalisability. Find out how you can get involved in our workshops: expressions of interest are now open!
➖
Why is it important to gather predictions about the credibility of published research?
Over the last decade several replication projects in the social and behavioural sciences have raised concerns over the reliability of the published scientific evidence base in those fields. Those replication efforts, which can include hundreds of researchers re-running entire experiments, are illuminating but highly resource intensive and difficult to scale.
Elicitation methods that result in accurate evaluations of the replicability—or more generally, credibility—of research can alleviate some of this burden, and help evaluate a larger proportion of the published evidence base. Once tested and calibrated, these elicitation methods could themselves be incorporated into peer review systems to improve evaluation before publication.
“If we can accurately predict credible research, our project could transform how end-users – from academics to policy makers – can assess the reliability of social scientific research.”
–– Prof Fiona Fidler, chief investigator of the repliCATS project
The repliCATS project is part of a research program called SCORE, funded by DARPA, that eventually aims to build automated tools that can rapidly and reliably assign confidence scores to social science research claims.
Our approach
The “CATS” in repliCATS stands for Collaborative Assessment for Trustworthy Science.
The repliCATS project uses a structured iterative approach for gathering evaluations of the credibility of research claims. The method we use is called the IDEA protocol, and we have a custom-built cloud-based platform we use to gather data.
For each claim being evaluated, four or more participants in a group first Investigate the claim and provide an initial set of private judgements, together with qualitative reasons behind those judgements. Group members then Discuss their answers, provide a second, private Estimate in light of the discussion, and the repliCATS team Aggregates the individual judgements using a diverse portfolio of mathematical methods, some of which incorporate characteristics of reasoning, engagement and uncertainty.
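As a rough illustration of this two-round flow, the sketch below (not the repliCATS codebase) records each participant's two rounds of probability judgements and aggregates the post-discussion estimates with a plain unweighted mean. The names, numbers and the Judgement structure are invented for the example, and the project's actual portfolio of aggregation methods is considerably richer than a simple mean.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Judgement:
    participant: str
    round1: float   # initial private probability that the claim will replicate
    round2: float   # revised private probability after the group discussion
    reasons: str    # qualitative reasoning recorded alongside the numbers

group = [
    Judgement("A", 0.40, 0.55, "small sample, but the effect is plausible"),
    Judgement("B", 0.70, 0.65, "pre-registered, large effect size"),
    Judgement("C", 0.60, 0.60, "similar claims have replicated before"),
    Judgement("D", 0.30, 0.50, "flexible analysis choices worry me"),
]

# Aggregate: a plain mean of the post-discussion estimates. Weighted or
# reasoning-informed rules would slot in at this step.
group_estimate = mean(j.round2 for j in group)
print(f"aggregated group probability of replication: {group_estimate:.2f}")
```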
Phase 1 results
In Phase 1, which ran from February 2019 to November 2020, we had over 550 participants evaluate 3000 claims on our repliCATS platform, in a series of workshops and monthly remote assessment rounds. Another SCORE team, the Center for Open Science, independently coordinated direct replications and data analytic reproductions for a subset of the 3000 claims.
As of February 2021, results for 60 replications and reproductions have been reported by the Center for Open Science. Our top two performing aggregation methods achieved an AUC > 0.75, or a classification accuracy of 73.77%. As more results come through, we will continue to update this figure.
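For readers unfamiliar with these two metrics, the toy example below shows how classification accuracy and AUC would be computed from aggregated group probabilities and binary replication outcomes. The outcome and prediction arrays, and the 0.5 decision threshold, are invented for illustration; they are not SCORE results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = the claim replicated, 0 = it did not (made-up outcomes)
outcomes = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
# aggregated group probabilities that each claim would replicate (made-up)
predictions = np.array([0.81, 0.64, 0.35, 0.72, 0.66, 0.22, 0.90, 0.41, 0.58, 0.76])

# Classification accuracy: call "will replicate" when the group probability exceeds 0.5
accuracy = np.mean((predictions > 0.5) == outcomes)

# AUC: the probability that a randomly chosen replicated claim is ranked above a
# randomly chosen non-replicated one
auc = roc_auc_score(outcomes, predictions)

print(f"classification accuracy: {accuracy:.2%}, AUC: {auc:.2f}")
```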
In September 2020, we also ran two week-long assessment workshops assessing 100 COVID-19 pre-prints. Each pre-print was independently assessed by three groups of varying experience. Once replication outcomes for these claims are available, we can conduct informative cross-group comparisons and explore differences in the accuracy of elicited predictions. We will share these results when we can.
Phase 2 – expanding our focus to consider a suite of “credibility signals”
In Phase 1 our focus was on gathering judgements of replicability for a single published claim in a paper. This is going to change for phase 2.
In this next phase of research, we will be evaluating two hundred papers in their entirety. In phase 1 terms that equates to roughly 2000 claims. When evaluating each paper, our IDEA groups will be asked to consider what we are calling a set of “credibility signals” which includes transparency, validity, robustness, replicability and generalisability.
We will be running a series of workshops in 2021 to evaluate these 200 papers, beginning in June 2021. To find out more and to express interest in workshops, visit our “Get Involved” page.
the repliCATS project news feed
First 2021 workshop announced!
repliCATS workshops will be back in June! We are kicking off our phase 2 research with a pre-SIPS repliCATS virtual workshop from 15-22 June 2021. What's new? In 2021, the repliCATS workshops will focus on evaluating the credibility of entire published research papers (or what we are calling "bushel papers") in the social and behavioural sciences literature. For the pre-SIPS workshop, the …
10 February, 2021
AIMOS2020 – repliCATS session, Fri 4 Dec @ 15.30 AEDT
Hi folks, we'll be running a session at AIMOS2020 on repliCATS phase 2: Beyond replication. This workshop introduces the repliCATS platform for structured deliberation and evaluation of research articles. In small groups, we will work through an example research claim, evaluating its comprehensibility, prior plausibility and likely replicability. We will also use this workshop as an opportunity to introduce planned developments for …
1 December, 2020
SCORE program renewed for phase 2
We’re excited to announce that DARPA has renewed the SCORE program—and the repliCATS project—for a second phase! What does that mean? It means your predictions met accuracy thresholds! And we’ll return in 2021. In phase 2, we’ll evaluate a fresh set of claims for likely replicability as well as ask new questions, for example, about validity and generalisability. We’ll continue our work developing …
8 October, 2020
Watch 📺 Metascience should not be defined by its methods
Join our CI Fiona Fidler & philosopher of science Rachel Brown for a webinar on "Metascience should not be defined by its methods". Metascience, or metaresearch, is a field of research that has grown out of the replication crisis. Amongst other things, metascience evaluates and monitors open science initiatives and other interventions to improve scientific practices and cultures. It studies the …
4 September, 2020
Closed: Express interest in assessing COVID-19 research claims
The repliCATS project — and the wider SCORE program — has been expanded by DARPA to include 100 COVID-19 research claims from the social and behavioural sciences. Governments around the world may rely on social and behavioural science research to understand crises and inform policies. Inclusion of COVID claims will ensure that the repliCATS project is tested on a range of …
10 August, 2020
National Science Week: Assessing the reliability of COVID research – a repliCATS webinar
As part of National Science Week, join the repliCATS project team for a peek into how and why we're assessing COVID-19 claims. Responding to crises like the COVID-19 pandemic requires a strong evidence base. But which scientific results are trustworthy? During this crisis there have been high-profile cases of scientific misconduct and retractions and more general concerns about the quality …
4 August, 2020
Reimagining trust in science panel (19 Aug), feat Fiona Fidler
Recent crises have prompted discussion on ‘Trust in Science’. This raises the question: is science the kind of thing that can be trusted? Is ‘Trust in Science’ the same thing as trusting individual scientists or research organisations, or is it something else? How do we maintain ‘Trust in Science’ while holding a reasonable level of uncertainty about individual research results? An understanding …
4 August, 2020
Watch 📺 Fiona Fidler gives RIOT Science talk on all things repliCATS
Earlier this month, Fiona Fidler gave a talk about the repliCATS project, and some of our plans for the future of the project. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can re-watch it via this link (YouTube)! Talk abstract The repliCATS project evaluates published scientific research. As the acronym—Collaborative Assessments for Trustworthy Science—suggests, …
28 July, 2020
About us
The repliCATS project is led by Prof Fiona Fidler. We are a group of interdisciplinary researchers from the School of BioSciences, School of Historical and Philosophical Studies, and the Melbourne School of Engineering at the University of Melbourne, with collaboration from the Centre for Environmental Policy at Imperial College London.
To meet the team, check out “our team” page.
The repliCATS project team currently have a number of publications in progress and under review. As the pre-prints or final published versions of these papers become available, we will update them here.
–
List of papers:
- Fraser et al., “Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science)”
- Hanea et al., “Mathematically aggregating experts’ predictions of possible futures”
- Pearson et al (2021), “Eliciting group judgements about replicability: a technical implementation of the IDEA Protocol”
–
Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science)
Pre-print: https://osf.io/preprints/metaarxiv/2pczv/
Authors
Hannah Fraser, Martin Bush, Bonnie Wintle, Fallon Mody, Eden Smith, Anca Hanea, Elliot Gould, Victoria Hemming, Daniel Hamilton, Libby Rumpff, David Wilkinson, Ross Pearson, Felix Singleton Thorn, Raquel Ashton, Aaron Willcox, Charles Gray, Andrew Head, Melissa Ross, Rebecca Groenewegen, Alexandru Marcoci, Ans Vercammen, Timothy Parker, Rink Hoekstra, Shinichi Nakagawa, David Mandel, Don van Ravenzwaaij, Marissa McBride, Richard O. Sinnott, Peter Vesk, Mark Burgman, Fiona Fidler
Abstract
Replication is a hallmark of scientific research. As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce a new technique for evaluating replicability, the repliCATS (Collaborative Assessments for Trustworthy Science) process, a structured expert elicitation approach based on the IDEA protocol. The repliCATS process is delivered through an underpinning online platform and applied to the evaluation of research claims in social and behavioural sciences. This process can be deployed for both rapid assessment of small numbers of claims, and assessment of high volumes of claims over an extended period. Pilot data suggests that the accuracy of the repliCATS process meets or exceeds that of other techniques used to predict replicability. An important advantage of the repliCATS process is that it collects qualitative data that has the potential to assist with problems like understanding the limits of generalizability of scientific claims. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.
–
Mathematically aggregating experts’ predictions of possible futures
Pre-print: https://osf.io/preprints/metaarxiv/rxmh7/
Authors
Anca Hanea, David Wilkinson, Marissa McBride, Aidan Lyon, Don van Ravenzwaaij, Felix Singleton Thorn, Charles Gray, David Mandel, Aaron Willcox, Elliot Gould, Eden Smith, Fallon Mody, Martin Bush, Fiona Fidler, Hannah Fraser, Bonnie Wintle
Abstract
Experts are often asked to represent their uncertainty as a subjective probability. Structured protocols offer a transparent and systematic way to elicit and combine probability judgements from multiple experts. As part of this process, experts are asked to individually estimate a probability (e.g., of a future event) which needs to be combined/aggregated into a final group prediction. The experts’ judgements can be aggregated behaviourally (by striving for consensus), or mathematically (by using a mathematical rule to combine individual estimates). Mathematical rules (e.g., weighted linear combinations of judgments) provide an objective approach to aggregation. However, the choice of a rule is not straightforward, and the aggregated group probability judgement’s quality depends on it. The quality of an aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and help decide on which aggregation produces the “best” final prediction. In the ideal case, individual experts’ performance (as probability assessors) is scored, these scores are translated into performance-based weights, and a performance-based weighted aggregation is used. When this is not possible though, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. We use several data sets to investigate the relative performance of multiple aggregation methods informed by previous experience and the available literature. Even though the accuracy, calibration, and informativeness of the majority of methods are very similar, two of the aggregation methods distinguish themselves as the best and worst.
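To give a flavour of the comparison described in this abstract, the sketch below contrasts an unweighted linear pool with a performance-weighted linear pool of expert probabilities, scored with the Brier score as a simple accuracy measure. All judgements, outcomes and weights are invented for illustration; they are not the paper's data or its specific aggregation rules.

```python
import numpy as np

# each row = one expert's probability judgements for five events (made up)
judgements = np.array([
    [0.70, 0.20, 0.90, 0.40, 0.60],
    [0.55, 0.35, 0.80, 0.50, 0.45],
    [0.85, 0.10, 0.95, 0.30, 0.70],
])
outcomes = np.array([1, 0, 1, 0, 1])   # what actually happened (made up)

# proxy performance weights, e.g. derived from scoring each expert on earlier,
# already-resolved questions (here simply assumed, normalised to sum to 1)
weights = np.array([0.5, 0.2, 0.3])

equal_pool = judgements.mean(axis=0)   # unweighted linear pool
weighted_pool = weights @ judgements   # performance-weighted linear pool

def brier(p, y):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return float(np.mean((p - y) ** 2))

print("Brier score, equal weights:      ", round(brier(equal_pool, outcomes), 3))
print("Brier score, performance weights:", round(brier(weighted_pool, outcomes), 3))
```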
–
Eliciting group judgements about replicability: a technical implementation of the IDEA Protocol
Link to PDF: http://hdl.handle.net/10125/70666 or https://scholarspace.manoa.hawaii.edu/handle/10125/70666
Citation: E. R. Pearson et al. “Eliciting group judgements about replicability: a technical implementation of the IDEA Protocol.” In Proceedings of the 54th Hawaii International Conference on System Sciences, (2021): 461-470.
Authors
E. Ross Pearson, Hannah Fraser, Martin Bush, Fallon Mody, Ivo Widjaja, Andy Head, David P. Wilkinson, Bonnie Wintle, Richard Sinnott, Peter Vesk, Mark Burgman, Fiona Fidler
Abstract
In recent years there has been increased interest in replicating prior research. One of the biggest challenges to assessing replicability is the cost in resources and time that it takes to repeat studies. Thus there is an impetus to develop rapid elicitation protocols that can, in a practical manner, estimate the likelihood that research findings will successfully replicate. We employ a novel implementation of the IDEA (‘Investigate’, ‘Discuss’, ‘Estimate’ and ‘Aggregate’) protocol, realised through the repliCATS platform. The repliCATS platform is designed to scalably elicit expert opinion about replicability of social and behavioural science research. The IDEA protocol provides a structured methodology for eliciting judgements and reasoning from groups. This paper describes the repliCATS platform as a multi-user cloud-based software platform featuring (1) a technical implementation of the IDEA protocol for eliciting expert opinion on research replicability, (2) capture of consent and demographic data, (3) on-line training on replication concepts, and (4) exporting of completed judgements. The platform has, to date, evaluated 3432 social and behavioural science research claims from 637 participants.
–
We’re an interdisciplinary team, based predominantly at the University of Melbourne, but we also have colleagues from Germany, the Netherlands, the UK and the USA working on our project. The repliCATS project is part of Prof Fiona Fidler’s & Prof Simine Vazire’s joint research group, MetaMelb.
Meet the repliCATS project team
Fiona Fidler is a professor at the University of Melbourne, with a joint appointment in the Schools of BioSciences and History and Philosophy of Science. She is broadly interested in how experts, including scientists, make decisions and change their minds. Her past research has examined how methodological change occurs in different disciplines, including psychology, medicine and ecology, and developed methods for eliciting reliable expert judgements to improve decision making. She originally trained as a psychologist, and maintains a strong interest in psychological methods. She also has an abiding interest in statistical controversies, for example, the ongoing debate over Null Hypothesis Significance Testing. She is a current Australian Research Council Future Fellow, leads the University of Melbourne’s Interdisciplinary MetaResearch Group (IMeRG), and is the lead PI of the repliCATS project.
Bonnie Wintle is a research fellow in the School of Biosciences at the University of Melbourne, and a senior researcher in the Interdisciplinary MetaResearch Group (now MetaMelb). She develops structured methods for eliciting and aggregating quantitative and qualitative judgements from groups of experts, to support better decision and policy making. She has pioneered empirical research on the best ways to obtain more accurate group estimates of fact, and applied protocols for eliciting quantitative, probabilistic and qualitative judgements from expert groups that have informed real-world decisions (e.g., to underpin surveillance systems used in industry). She has a background in environmental science and ecology, a history of working closely with philosophers, mathematicians and psychologists, and extensive experience managing interdisciplinary expert groups. Bonnie is a PI on the repliCATS project, leading the Elicitation & aggregation team.
Hannah Fraser is a research fellow at the University of Melbourne working in Fiona Fidler’s meta-research lab, MetaMelb. She is lead author of Questionable Research Practices in Ecology and Evolution (Fraser et al. 2018), which has received widespread attention (preprint downloaded 679 times). During her PhD, Hannah also gained expert elicitation experience. In 2020, Hannah was president of the Association of Interdisciplinary Meta-research & Open Science, an association she helped found. Hannah was the research coordinator for the repliCATS project in Phase 1, and will be remaining on the project in phase 2 in an advisory capacity.
Mark Burgman is the editor of two books and the author of seven, including Risks and Decisions for Conservation and Environmental Management (Cambridge University Press, 2005) and Trusting judgements: How to get the best out of experts (Cambridge University Press, 2015). In addition, he has published over 250 refereed papers and more than 70 reviewed reports and commentaries. His book on Risks and Decisions outlined the foundations for a range of methods relevant to decision making under uncertainty and foreshadowed the importance of expert judgement and elicitation in empirical studies. In the 1990s, he was one of the early figures in the development of methods for dealing with the human dimensions of environmental management. From 2006, he led the Australian Centre of Excellence for Risk Analysis and then the Centre of Excellence for Biosecurity Risk Analysis at the University of Melbourne. In 2016, he took up the position of Director of the Centre for Environmental Policy at Imperial College London. He has been the editor-in-chief of the journal Conservation Biology since 2013. His h-index (Google Scholar) is 65 and his work has been cited more than 16,000 times. Mark is a PI on the repliCATS project.
Peter Vesk is an Associate Professor and Reader in the School of BioSciences at the University of Melbourne. He has a long history of working on generalization and reliability of scientific knowledge, starting in plant ecology, before it was known as reproducibility. He is an Associate Editor at Journal of Ecology, the most highly ranked journal in plant ecology. As a founding editor of Hot Topics in Ecology, designed to provide evidence-based statements on ecological topics relevant to policy and management, he is keenly interested in participatory methods of providing reliable scientific knowledge. He has more than 100 journal articles, with more than 4700 citations and an h-index of 36 (Scopus). Vesk’s research focus is gathering, formalizing and generalizing knowledge for management. This entails attention to the methodology of data collection, use and model evaluation. Working on legacy and citizen science data has driven his attention to robustness of inference and methodology. Pete is a PI on the repliCATS project.
Simine Vazire is a professor of psychology at the University of Melbourne, and a member of the Ethics & Wellbeing Hub. She studies meta-science and research methods/practices, as well as personality psychology and self-knowledge. Her research interests on the meta-science side include assessing the quality and integrity of scientific studies, the peer review process, and the scientific community at large. She is interested in how transparency and criticism are (or aren’t) used to make science more self-correcting. Her training is in social and personality psychology, and her interests in scientific practices and norms stem largely from her experiences in that field, particularly the so-called replication crisis. She has been an editor at several psychology journals, and co-founded the Society for the Improvement of Psychological Science (SIPS) with Brian Nosek in 2016. Simine will join as a PI on the repliCATS project for phase 2.
Community & engagement team
Fallon Mody is a research fellow in Fiona Fidler’s meta-research group at the University of Melbourne. Her expertise is in science communication, qualitative analysis, and history and philosophy of science. Fallon has worked in science communication and qualitative research roles for the Faculty of Science and the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne, and the Royal College of Paediatrics & Child Health in London. Fallon will undertake the research engagement activity for this project, and assist in the qualitative analysis of expert reasoning that the project will undertake. Fallon’s PhD research developed and explored a prosopography of European medical migrants in mid-twentieth century Australia, using their lives to understand the ways in which local and national domains of medical practice develop and are sustained. Fallon will be leading the community & engagement team in Phase 2.
Mel Ross originally trained as a physiotherapist and has had a varied career working in hospital rehabilitation for over 17 years. More recently she has moved away from health care and has been working in Business Development and Sales. She joined the team in Phase 1 to assist with administration of the project, which includes helping with workshop coordination and communications for the project.
Data management & analysis team
David Wilkinson is a PhD student in the School of BioSciences at the University of Melbourne. David has a background in quantitative ecology. His PhD focuses on the computational, inferential, and predictive performance of joint species distribution models. David will be leading the data management & analysis team in Phase 2.
Aaron Wilkinson has spent the last three years shifting gears from the technical production industry into a psychological science degree, transferring his project management skills into research in data and neuroscience. He recently returned from studying abroad at Maastricht University in the Netherlands, where he also attended the repliCATS workshop and SIPS conference in Rotterdam. Aaron also works as a research assistant at Deakin University under Emma Sciberras. He is an R acolyte and an open science advocate, and will be continuing his studies into fourth year and beyond.
Elliot Gould is a PhD student at the School of BioSciences, University of Melbourne, with a background in applied ecology. Elliot is investigating the transparency and reproducibility of ecological models in conservation decision-making and ecological management.
Rebecca Groenewegen (bio & pic coming)
Rose O’Dea is joining repliCATS in Phase 2 to explore data on the comprehensibility of scientific claims. Previously she worked in the Evolution & Ecology Research Centre at UNSW as a behavioural ecologist, using zebrafish and meta-analyses to broadly study phenotypic variability. Rose is interested in how academic science could become more meaningful, and is a founding member of the Society for Open, Reliable, and Transparent Ecology and Evolutionary Biology.
Elicitation & aggregation team
This team is led by Bonnie Wintle.
Anca Hanea is a Senior Research Fellow based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. Her background is in mathematics and risk and environmental modelling. She has a PhD in Applied Probability from the Technical University of Delft (TU Delft). She was instrumental in building a COST European network for structured expert judgement elicitation and aggregation, and related standards for the European Food Safety Authority (EFSA).
Libby Rumpff is Deputy Director of the Centre for Economic and Environmental Research at the University of Melbourne. Her work focuses on applying participatory approaches to different decision-making contexts. She brings together skills in decision theory, risk assessment, expert elicitation, facilitation, and model development. She is a highly experienced facilitator, and will guide workshop design on the current project.
Reasoning team
Martin Bush is a research fellow in the School of Historical and Philosophical Studies at the University of Melbourne with expertise in the cultural history of popular science and professional experience in science communication and the museum sector. Particular interests include planetariums, public reasoning practices, the science communication work of the Ngarrindjeri Australian David Unaipon and popular astronomy in Australia in the era of the lantern slide. His recent PhD from Swinburne University examined this last topic, and his essay from the thesis on the Proctor-Parkes affair was a joint winner of the 2016 Mike Smith Student Prize for History of Australian Science. Martin leads the reasoning team.
Alex Marcoci is a Teaching Assistant Professor of Philosophy at the University of North Carolina at Chapel Hill, a core faculty member in the UNC-Duke Philosophy, Politics and Economics Program and a Visiting Researcher in the Centre for Environmental Policy at Imperial College London. He works at the intersection of formal and applied issues in rationality, decision theory and public policy. For Alex’s full bio, visit: http://personal.lse.ac.uk/marcoci/
Eden Smith is a research fellow in Fiona Fidler’s meta-research group at the University of Melbourne. In this project, Eden will focus on investigating the reasoning involved in expert assessments of the replicability, reproducibility, and robustness of scientific claims, as well how concepts such as replicability are used within open-science communities. Eden is also collaborating on a digital-ethnography project exploring the sociotechnical dynamics involved in the open-source development of decentralised technologies by distributed communities. These projects build on Eden’s PhD (2018) research on the historical interdependence of two scientific concepts and their current uses as independent tools in neuroscience experiments.
Technical team
Ross Pearson is a digital supply chain transformation leader who has been the technical lead for telecommunications and mining transformations. As a delivery specialist, Ross ensures that large projects and transformation implementations are realised. In addition to his supply chain experience, Ross has worked on numerous university research projects. In 2019, Ross completed an honours in Computer Science with a focus on Artificial Intelligence, and in 2020 will begin a PhD at Monash University. Ross is the technical liaison manager for the repliCATS project.
Fazil Hassan (bio & pic coming)
The team who develop the repliCATS platform are led by Professor Richard Sinnott, and are part of the University of Melbourne’s eResearch Group: https://www.eresearch.unimelb.edu.au
Research support & admin team
Andy Head is a research assistant within IMeRG at the University of Melbourne. He has recently completed a Graduate Diploma of Psychology at Deakin University and is intending to commence a PhD in 2020. Andrew’s research interests include the history and philosophy of science, improving science practices, and improving the quality of public engagement with science.
Cassie Watts has ten years’ experience as a business manager at the University of Melbourne and joins the repliCATS project team as Finance Manager with a wealth of experience managing small and large research grants and consultancies.
Daniel Hamilton originally trained as a radiation therapist at Epworth hospital in Melbourne, working both clinically and in a research support role between 2012 and 2017. Following his position at Epworth hospital Daniel worked as a research coordinator at Peter MacCallum Cancer Centre managing a large portfolio of national and international radiation oncology clinical trials. He is the lead author on multiple papers investigating novel radiotherapy treatment techniques for prostate and breast cancer, as well as papers examining ethical issues in scientific publishing. Currently he is completing a PhD investigating the quality and integrity of published radiation oncology and medical physics research within A/Prof Fiona Fidler’s Interdisciplinary Meta-Research Group (IMeRG) at the University of Melbourne.
repliCATS alum
Aidan Lyon is CEO and co-founder of DelphiCloud and Research Associate in the Institute for Logic, Language and Computation at the University of Amsterdam. He completed his PhD at the Australian National University on the philosophical foundations of probability and has degrees in mathematics and philosophy from the University of Queensland. He has held academic positions at the University of Maryland, the Munich Center for Mathematical Philosophy, the Tilburg Center for Ethics and Philosophy of Science, the University of Vienna, the University of Melbourne, the University of Sydney, and the Australian National University. In addition to being an academic, he has operated as a risk management consultant for the Australian Government and other clients since 2011. Aidan’s research is primarily on the philosophical foundations of uncertainty, philosophical psychology, and social epistemology — with a particular focus on the so-called wisdom of crowds.
David Mandel (bio coming soon).
Felix Singleton Thorn is a PhD student in the School of Psychological Sciences, University of Melbourne, with a background in quantitative psychology and research methods. Felix’s research examines how people plan, report and interpret the results of experiments.
Mathew Goodwin is a founding and key faculty member of a new doctoral program in Personal Health Informatics (PHI) and Director of the Computational Behavioral Science Laboratory (CBSL) at Northeastern University. He is also a visiting associate professor in the Department of Biomedical Informatics at Harvard Medical School (2018-2020), the former director of Clinical Research at the MIT Media Lab (2008-2011), and adjunct associate research scientist in the Department of Psychiatry & Human Behavior at Brown University. Mathew has 20 years of research and clinical experience working with children and adults on the autism spectrum and developing and evaluating innovative technologies for behavioral assessment and intervention, including naturalistic video and audio capture, telemetric physiological monitors, wireless accelerometry sensors, and digital video/facial recognition systems.
Nicholas Dempsey is a graphic designer, and he has designed all the badges participants are awarded on our research platform. Nick graduated with a Digital Media Design degree from Swinburne University in 2019. Nick’s interests in design revolve around visual communication, typography, motion graphics, video and working with brands. In his spare time, he is an avid collector of vinyl records and loves photography and technology.
Raquel Ashton is a qualified wildlife veterinarian, expert elicitor and shadow editor for the journal, Biological Conservation. She is currently monitoring the health of repliCATS as the IDEA workflow co-ordinator.
Victoria Hemming was our first local workshop coordinator, and was instrumental in running our first workshop in Rotterdam, where we assessed 575 claims. Victoria is currently completing a postdoc in Canada. While working on her PhD, she was a Research Associate at the Centre of Environmental and Economic Research (CEER) at the University of Melbourne, and has 10 years’ experience as a consultant and project manager. She finished her PhD in structured expert judgement and decision making.
Participating in our project means making judgements about the credibility of a published research claim. Read on to find out how you can get involved!
–
What does “participating” mean exactly?
In Phase 1, we assessed the replicability of 3000 published research claims and 100 COVID-19 pre-prints in the following disciplines: criminology, economics, education, marketing, management, psychology, public administration, and sociology. The full list of journals the claims are drawn from is listed here.
In Phase 2, we’ll be expanding the scope to examine the whole paper, and to answer questions that cover other signals of credibility, including transparency, robustness, replicability, validity and generalisability.
What won’t change is our approach. That is, we don’t ask you to do this alone. Our method (the IDEA protocol) involves structured group discussion – each claim is assessed by 3-5 other people, and you get to see what others in your group say before submitting your final judgement.
For phase 2, we’ll run a series of workshops starting in June 2021.
- Participants will be eligible for US$200 assessment grants.
- Express interest in participating in phase 2, and we’ll let you know when we open sign-ups for workshops.
- Registrations for the pre-SIPS repliCATS workshop in June are now open.
- To express interest for upcoming workshops, use this form https://melbourneuni.au1.qualtrics.com/jfe/form/SV_2lE8eMubIAAS2i1 (links out to qualtrics)
–
Why get involved?
In Phase 1 we achieved something extraordinary! We had over 550 participants from around the world contribute to evaluating the 3000 claims. Be a part of the largest effort to evaluate reliability in the social & behavioural sciences!
You’ll also get to:
- improve your peer-review & error detection skills
- calibrate your judgements & reasoning against your peers
Express interest for phase 2 here: https://melbourneuni.au1.qualtrics.com/jfe/form/SV_2lE8eMubIAAS2i1
–
Who can assess claims? Every participant counts – don’t worry about being an expert, we need diverse views
Our method – the IDEA protocol – harnesses the power of structured group discussion in evaluating the credibility of published research. We have built a custom cloud-based platform to gather your evaluations. We ask you to evaluate the credibility of a claim: that is, to read a paper and evaluate a set of credibility signals for that paper, including transparency, validity, robustness and replicability.
Part of the scope of the repliCATS project, and indeed the wider SCORE program, is to examine the markers of expertise (e.g. education, experience, domain knowledge), and the role they may play in making good judgements about the likelihood that a research claim will replicate.
This means an eligible research participant for our project is someone who has completed or is completing a relevant undergraduate degree, is over 18 years of age, and, importantly, is interested in making judgements about replicability.
If you would like more information, you can:
- Watch this short video demo of the platform on our resources page.
- Check out what other participants have said about getting involved.
–
Just want to stay up-to-date on the project? Subscribe to our newsletter
We have a quarterly newsletter we send out about our project. By subscribing you’ll get a short, snappy newsletter letting you know what we’ve been up to, and what’s happening with the repliCATS project.
Just sign up using this form (we won’t spam you).
Privacy Collection Notice – the repliCATS project.
Human ethics application ID: 1853445.1
The information on this form is being collected by the repliCATS project, a research group at the University of Melbourne. You can contact us at repliCATS-contact@unimelb.edu.au.
The information you provide will be used to communicate with you about the repliCATS project. The information will be used by authorised staff for the purpose for which it was collected, and will be protected against unauthorised access and use.
You may access any personal information you have provided to the University by contacting us at repliCATS-contact@unimelb.edu.au. The University of Melbourne is committed to protecting personal information provided by you in accordance with the Privacy and Data Protection Act 2014 (Vic). All information collected by the University is governed by the University’s Privacy Policy. For further information about how the University deals with personal information, please refer to the University’s Privacy Policy or contact the University’s Privacy Officer at privacy-officer@unimelb.edu.au.
One of the most frequently asked questions is, “Do I need to be an expert in any individual field to participate in the repliCATS project?” The answer is: No! To participate, you need to be 18 years or older, have completed or be completing an undergraduate degree, and be interested in evaluating research claims in scope for our project.
–
Below are blocks of frequently asked questions which may help you better understand our project and how to evaluate claims:
- What claims are being assessed? How are they chosen?
- Help answering claims
- Troubleshooting the platform
- $1000 monthly prizes
- General questions about the repliCATS project & SCORE
If you still can’t find the answer to your question, you can contact us at repliCATS-contact@unimelb.edu.au.
–
About SCORE claims
- What is a research claim or claim?
In this project we use the word "research claim" or "claim" in a very specific way.
A research claim is a single major finding from a published study (for example, a journal article), as well as details of the methods and results that support this finding. A research claim is not equivalent to an entire article. Sometimes the claim as described in the abstract does not exactly match the claim that is tested. In this case, you should consider the research claim to be that which is described in the inferential test, as the next stage of SCORE will focus on testing the replicability of the test results only.
- How are the 3000 claims chosen?
The Center for Open Science (USA) are selecting the 3,000 research claims, as a subset of a larger set of 30,000 published papers in the social and behavioural sciences that are in scope for the SCORE program. These are:
- criminology
- economics
- education
- political science
- psychology
- public administration
- marketing, and
- sociology.
These claims will be drawn from the following journals.
Criminology
- Criminology
- Law and Human Behavior
Marketing/Organisational Behaviour
- Journal of Consumer Research
- Journal of Marketing
- Journal of Marketing Research
- Journal of Organizational Behavior
- Journal of the Academy of Marketing Science
- Organizational Behavior and Human Decision Processes
Economics
- American Economic Journal: Applied Economics
- American Economic Review
- Econometrica
- Experimental Economics
- Journal of Finance
- Journal of Financial Economics
- Journal of Labor Economics
- Quarterly Journal of Economics
- Review of Financial Studies
Political Science
- American Political Science Review
- British Journal of Political Science
- Comparative Political Studies
- Journal of Conflict Resolution
- Journal of Experimental Political Science
- Journal of Political Economy
- World Development
- World Politics
Education
- American Educational Research Journal
- Computers and Education
- Contemporary Educational Psychology
- Educational Researcher
- Exceptional Children
- Journal of Educational Psychology
- Learning and Instruction
Psychology
- Child Development
- Clinical Psychological Science
- Cognition
- European Journal of Personality
- Evolution and Human Behavior
- Journal of Applied Psychology
- Journal of Consulting and Clinical Psychology
- Journal of Environmental Psychology
- Journal of Experimental Psychology: General
- Journal of Experimental Social Psychology
- Journal of Personality and Social Psychology
- Psychological Science
Health related
- Health Psychology
- Psychological Medicine
- Social Science and Medicine
Public Administration
- Journal of Public Administration Research and Theory
- Public Administration Review
Management
- Academy of Management Journal
- Journal of Business Research
- Journal of Management
- Leadership Quarterly
- Management Science
- Organization Science
Sociology
- American Journal of Sociology
- American Sociological Review
- Demography
- European Sociological Review
- Journal of Marriage and Family
- Social Forces
- From which journals are the 3000 claims being chosen?
The Center for Open Science (USA) are selecting the 3,000 research claims from the journals listed in the previous answer.
Answering claims – help please!
- I want to assess claims. What do I need to do?
Great! You can create an account and log on to our platform by visiting: https://score.eresearch.unimelb.edu.au
The first step when you create an account will be a short survey, which includes a plain language statement, obtains your consent, and collects some demographic information about you.
- I'm on the platform, how do I assess a claim?
There is a load of information we've prepared to help you navigate the website, as well as get comfortable with answering claims.
Check out the resources page for videos, handy guides, and a whole bunch of additional information.
- How long should I spend evaluating a claim?
You can spend as much time as you want; however, we suggest spending no more than 30 minutes per claim in total, across both round one and round two.
During workshops, we use a model of 10 minutes for round one, 15 minutes for discussion and 5 minutes to update and submit your round two.
For virtual groups, the discussion is via commenting and up or down votes. So, if you are working solo or completely virtually (i.e. no real-time discussion), we suggest that you spend around 10-15 minutes to complete round one, which includes perusing the paper (there is a link in the lefthand side panel), and spending a bit of extra time writing down your reasoning. This will help you and the other participants in round two.
- There seem to be multiple claims in the paper. Which one do I evaluate?
Sometimes the claim text (in bold) indicates a claim different from that reported in the inferential test results. In this case, all your answers should relate to the inferential test results.
Also, some papers have very few claims and deploy very few tests, others have dozens or hundreds – evaluating ‘all the claims made’ would be incredibly unwieldy. Remember: you only need to evaluate the central claim listed in the claim panel on the right sidebar.
- Do you have guides or any resources I can access to help me answer claims?
Yes – check out the resources page for videos, handy guides, and a whole bunch of additional information.
- Do I need to read the whole paper that is linked to the claim?
Only if you feel like it. We think it is a good idea to look at the paper, and read as much of the paper as is sufficient to help you evaluate the replicability of the central claim presented in the platform.
- How do I approach answering the replicability question?
For each claim you evaluate we ask you to estimate the probability that direct replications of this study would find a statistically significant effect in the same direction as the original claim (0-100%). 0 means that you think that a direct replication would never succeed, even by chance. 100 means that you think that a direct replication would never fail, even by chance.
To answer this question, imagine 100 replications of the original study, combined to produce a single, overall replication estimate (e.g., a meta-analysis with no publication bias). How likely is it that the overall estimate will be similar to the original? Note that all replication studies are ‘direct’ replications, i.e., they constitute reasonable tests of the original claim, despite minor changes that may have occurred in methods or procedure. And all replication studies have high power (90% power to detect an effect 50-75% of the original effect size with alpha=0.05, two-sided).
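If it helps to make this framing concrete, here is a rough simulation sketch (in Python, not part of the repliCATS platform) of 100 direct replications powered roughly as described above. The original effect size, the assumed true effect, and the sample-size calculation are illustrative assumptions only; they are not SCORE's actual replication designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

d_original = 0.5              # effect size (Cohen's d) reported by the original study (assumed)
d_true = 0.3                  # the effect size you privately suspect is real (assumed)
alpha = 0.05                  # two-sided significance threshold used for replications
target_d = 0.75 * d_original  # replications are powered for 50-75% of the original effect

# Per-group sample size giving roughly 90% power for target_d in a two-sample
# t-test, via the standard normal-approximation power formula.
z_alpha = stats.norm.ppf(1 - alpha / 2)
z_power = stats.norm.ppf(0.90)
n_per_group = int(np.ceil(2 * ((z_alpha + z_power) / target_d) ** 2))

def one_replication():
    """Simulate one direct replication; report (significant, same direction as original)."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(d_true, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    return p < alpha, t > 0

results = [one_replication() for _ in range(100)]
successes = sum(sig and same_dir for sig, same_dir in results)
print(f"sample size per group for ~90% power: {n_per_group}")
print(f"{successes}/100 simulated replications significant in the original direction")
```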
In the text box, we also ask you to note what factors influenced your judgement about whether the claim would successfully replicate, or not. For each of the following, list some factors (dot points are fine):
- For your lower bound, think of factors that make successful replications unlikely
- For your upper bound, think of factors that make successful replications likely.
- For your best estimate, consider the balance of factors.
How will this research claim be replicated?
We cannot answer this question precisely. The selection and replication of claims for the SCORE program is being overseen by the Center for Open Science, independently of the repliCATS project. See here for more details about this part of the SCORE program.
For SCORE, the intent of a direct replication is to follow the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim being investigated. However, it is generally impossible to follow a study precisely, and the question as to which aspects matter is a judgement call.
Our best advice is to imagine what kinds of decisions you would face if you were asked to replicate this research claim, and then to consider the effects of making different choices for these decisions. This is one reason why we ask you to consider a set of 100 replications when making your assessment – even though they are 100 direct replications, each might be slightly different. You should consider the effect of these slight variations when making your estimate.
In some instances, a replication may not be able to collect new data, for example, if the claim relates to a specific historical event, like an election. In this case you should consider the different choices that could be made in analysing the data. Again, you should consider the effect of slight variations in these choices when making your estimate of replicability.
- I don't understand a term on the platform. Is there a glossary?
Yes, there is: see the repliCATS glossary.
If you think there's a term missing or defined incorrectly, e-mail us at: repliCATS-contact@unimelb.edu.au
Platform troubleshooting
- Can I use a tablet or handheld device?
No, sorry! The online platform works best on a laptop or PC.
- My browser doesn't seem to be working
We built the platform to be most compatible with Google Chrome. Safari seems to misbehave.
- Do I need to save as I go?
Only if you want to. Every question has a save function, so you can save as you go. This protects against losing information should your browser crash, and lets you return to a question to think about it before you submit it to us.
$1000 monthly prizes
- What are the rules for prizes?
For the full rules, see: https://replicats.research.unimelb.edu.au/2019/12/09/replicats-monthly-prizes-and-rules
- When does each month/round begin?
We release new claims on the 17th of every month, from 17 January 2020 to 17 June 2020.
- Who are the past winners?
For a list of winners, see: https://replicats.research.unimelb.edu.au/2019/12/09/replicats-monthly-prizes-and-rules
About the project & SCORE program
- What is “replication” as defined for this project?
Replication, like many other related terms such as reproducibility, is contested. That is, it has multiple meanings.
For this project, our working definition of a direct replication is a replication that follows the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim. The aim of a direct replication is to improve confidence in the reliability and validity of an experimental finding by starting to account for things such as sampling error, measurement artefacts, and questionable research practices.
- Does repliCATS stand for something?
Yes. The “CATS” in repliCATS is an acronym for Collaborative Assessment for Trustworthy Science.
- Who is part of your research team?
We are an interdisciplinary research team based predominantly at the University of Melbourne. You can meet the research team here.
- What are the aims of the repliCATS project?
We are developing and testing methods to elicit accurate predictions about the likely replicability of published research claims in the social sciences. As you may be aware, some large scale, crowdsourced replication projects have alerted us to the possibility that replication success rates may be lower than we once thought. Our project will assist with the development of efficient methods for critically evaluating the evidence base of social science research.
- What is the IDEA protocol?
The IDEA protocol is a structured protocol for eliciting expert judgments based on the Delphi process. IDEA stands for Investigate, Discuss, Estimate, Aggregate.
Applying the IDEA protocol involves recruiting a diverse group of experts to answer questions with probabilistic or quantitative responses. Experts first investigate the questions and clarify meanings of terms, reducing variation caused by linguistic ambiguity. They provide their private, individual estimate, using a 3- or 4-step method (highest, lowest, best guess). The group’s private estimates are revealed; group members can then see how their estimates sit in relation to others. The group discusses the results, shares information and cross-examines reasoning and evidence. Group members individually provide a second and final private estimate. These second-round estimates are then combined using mathematical aggregation.
The strengths of the IDEA protocol in eliciting predictions of the likely replicability of research claims lies in the stepped, structured nature of the approach. The feedback and discussion components of the IDEA protocol both function to reduce overconfidence in estimates, which is a known limitation of expert elicitation methods. The discussion component of the IDEA protocol also allows experts to account for private information which could substantially alter the likely replicability assessment of a research claim.
This protocol, developed at the University of Melbourne, has been found to improve judgements under uncertainty.
More information on the IDEA protocol can be found here (external link to: Methods Blog).
- Can I participate in this project?
Yes! We hope to collect judgements from a diverse range of participants in the following broad disciplines:
- business research
- criminology
- economics
- education
- political science
- psychology
- public administration
- marketing, and
- sociology.
If you are interested in participating, find out more about participating and signing up on our Get involved page, or contact us at repliCATS-project@unimelb.edu.au to ask us for more information.
- If I participate, what’s in it for me?
Your participation will help us to refine methods for predicting the replicability of social and behavioural science claims. Any data we collect could drastically change the way we think about published research evidence. For individual participants, it also provides the opportunity to develop your skills through peer interaction, and to become a more critical consumer of the research literature.
Our first workshop was held in July 2019 in Rotterdam, with over 200 participants over two days. Our participants reported that they found the experience valuable and enjoyed thinking about the replicability of published research evidence. Additionally, early career researchers said participating in the workshop improved their critical appraisal (or peer review) skills, and they enjoyed comparing their judgements against diverse individuals (across disciplines and career stages) in their group.
- How are the 3,000 research claims chosen?
The Center for Open Science (USA) are selecting the 3,000 research claims, as a subset of a larger set of 30,000 published papers in the social and behavioural sciences that are in scope for the SCORE program. These are:
- criminology
- economics
- education
- political science
- psychology
- public administration
- marketing, and
- sociology.
These claims will be drawn from the following journals.
Criminology
- Criminology
- Law and Human Behavior
Marketing/Organisational Behaviour
- Journal of Consumer Research
- Journal of Marketing
- Journal of Marketing Research
- Journal of Organizational Behavior
- Journal of the Academy of Marketing Science
- Organizational Behavior and Human Decision Processes
Economics
- American Economic Journal: Applied Economics
- American Economic Review
- Econometrica
- Experimental Economics
- Journal of Finance
- Journal of Financial Economics
- Journal of Labor Economics
- Quarterly Journal of Economics
- Review of Financial Studies
Political Science
- American Political Science Review
- British Journal of Political Science
- Comparative Political Studies
- Journal of Conflict Resolution
- Journal of Experimental Political Science
- Journal of Political Economy
- World Development
- World Politics
Education
- American Educational Research Journal
- Computers and Education
- Contemporary Educational Psychology
- Educational Researcher
- Exceptional Children
- Journal of Educational Psychology
- Learning and Instruction
Psychology
- Child Development
- Clinical Psychological Science
- Cognition
- European Journal of Personality
- Evolution and Human Behavior
- Journal of Applied Psychology
- Journal of Consulting and Clinical Psychology
- Journal of Environmental Psychology
- Journal of Experimental Psychology: General
- Journal of Experimental Social Psychology
- Journal of Personality and Social Psychology
- Psychological Science
Health related
- Health Psychology
- Psychological Medicine
- Social Science and Medicine
Public Administration
- Journal of Public Administration Research and Theory
- Public Administration Review
Management
- Academy of Management Journal
- Journal of Business Research
- Journal of Management
- Leadership Quarterly
- Management Science
- Organization Science
Sociology
- American Journal of Sociology
- American Sociological Review
- Demography
- European Sociological Review
- Journal of Marriage and Family
- Social Forces
- From which journals are the 3,000 research claims chosen?
The Center for Open Science (USA) is selecting the 3,000 research claims from the journals listed in the answer to the previous question.
- How can I get more information about this project?
You can express interest in assessing claims, or subscribe to our mailing list.
You can also follow us on Twitter, @replicats.
Or, you can send us an e-mail at repliCATS-contact@unimelb.edu.au.
-
First 2021 workshop announced!
repliCATS workshops will be back in June! We are kicking off our phase 2 research with a pre-SIPS repliCATS virtual workshop from 15-22 June 2021. What's new? In 2021, the repliCATS workshops will focus on evaluating the credibility of entire published research papers (or what we are calling "bushel papers") in the social and behavioural sciences literature. For the pre-SIPS workshop, the …
10 February, 2021 workshop, repli... -
Protected: AIMOS2020 – repliCATS session, Fri 4 Dec @ 15.30 AEDT
Hi folks, we'll be running a session at AIMOS2020 on repliCATS phase 2: Beyond replication. This workshop introduces the repliCATS platform for structured deliberation and evaluation of research articles. In small groups, we will work through an example research claim, evaluating its comprehensibility, prior plausibility and likely replicability. We will also use this workshop as an opportunity to introduce planned developments for …
1 December, 2020 workshop, repli... -
SCORE program renewed for phase 2
We’re excited to announce that DARPA has renewed the SCORE program—and the repliCATS project—for a second phase! What does that mean? It means your predictions met accuracy thresholds! And we’ll return in 2021. In phase 2, we’ll evaluate a fresh set of claims for likely replicability as well as ask new questions, for example, about validity and generalisability. We’ll continue our work developing …
8 October, 2020 Phase 2, Uncate... -
Protected: COVID-19 virtual workshop: closing ceremony video now up! 🔒
Hi folks, you can access this page because you’re part of the repliCATS COVID-19 virtual workshop from 2-8 September! Thanks for all your work over the last week! Here's a link to the closing ceremony video https://cloudstor.aarnet.edu.au/plus/s/2trpwE15e5uwbp9 & it is also uploaded on Slack. Here are links to all the information you'll need leading up to and during the week: About the virtual workshop …
8 September, 2020 workshop, repli... -
Watch 📺 Metascience should not be defined by its methods
Join our CI Fiona Fidler & philosopher of science Rachel Brown for a webinar on "Metascience should not be defined by its methods". Metascience, or metaresearch, is a field of research that has grown out of the replication crisis. Amongst other things, metascience evaluates and monitors open science initiatives and other interventions to improve scientific practices and cultures. It studies the …
4 September, 2020 metascience; me... -
Closed: Express interest in assessing COVID-19 research claims
The repliCATS project — and the wider SCORE program — has been expanded by DARPA to include 100 COVID-19 research claims from the social and behavioural sciences. Governments around the world may rely on social and behavioural science research to understand crises and inform policies. Inclusion of COVID claims will ensure that the repliCATS project is tested on a range of …
10 August, 2020 COVID-19, Media... -
National Science Week: Assessing the reliability of COVID research – a repliCATS webinar
As part of National Science Week, join the repliCATS project team for a peek into how and why we're assessing COVID-19 claims. Responding to crises like the COVID-19 pandemic requires a strong evidence base. But which scientific results are trustworthy? During this crisis there have been high-profile cases of scientific misconduct and retractions, and more general concerns about the quality …
4 August, 2020 Profile, News, ... -
Reimagining trust in science panel (19 Aug), feat Fiona Fidler
Recent crises have prompted discussion on ‘Trust in Science’. This raises the question: is science the kind of thing that can be trusted? Is ‘Trust in Science’ the same thing as trusting individual scientists or research organisations, or is it something else? How do we maintain ‘Trust in Science’ while holding a reasonable level of uncertainty about individual research results? An understanding …
4 August, 2020 Profile, Event,... -
Monthly winners!
FINAL round winners: 1 - 30 July. This was our final repliCATS remote round for phase 1. And at the end, we reached our goal of assessing 3000 claims! It was a phenomenal effort, with 1840 assessments made on 571 claims. This round returned to basics, awarding the three participants who assessed the most claims. Congratulations to: sugarglider_419, who assessed an …
4 August, 2020 Prizes, News, P... -
Hey PsyPAG2020 folks 👋🏻 have a great workshop!
If you fancy leaving us some feedback, we'd love to hear from you 🙂 You can e-mail repliCATS-contact@unimelb.edu.au or leave a reply using the reply function below (anonymously if you'd prefer).
31 July, 2020 workshop, psyPA... -
Watch 📺 Fiona Fidler gives RIOT Science talk on all things repliCATS
Earlier this month, Fiona Fidler gave a talk about the repliCATS project, and some of our plans for the future of the project. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can re-watch it via this link (YouTube)! Talk abstract The repliCATS project evaluates published scientific research. As the acronym—Collaborative Assessments for Trustworthy Science—suggests, …
28 July, 2020 Profile, News, ... -
💬 participant tweets we love
The @replicats project is phenomenal! I participated last week by assessing replicability of meta-analysis claims. Was fascinating to see how others reach judgements and learnt about interesting synthesis methods which I hadn't come across before https://t.co/SwJU9cSNDX — Matthew Page (@mjpages) July 28, 2020 me, trying to replicate the in-person @replicats experience: (sidenote: team box jelly #4 🏅🏆 wooo!!! congratulations, everyone! 🥳)#repliCATS2020 pic.twitter.com/Q44mWqL3sx — james …
28 June, 2020 Platform -
repliCATS monthly prizes and rules
Round 7 (final round!): 1 July - 30 July 2020. It is the final remote round for repliCATS in phase 1! So, we'll end how we started, by giving prizes to the three participants who assess the most claims in total in July 2020, as follows: US$500 - for the participant who assessed the most claims US$250 - for the …
28 June, 2020 Prizes, repliCA... -
repliCATS workshop went virtual – over 550 claims assessed in the week
Our last phase 1 workshop was originally scheduled to run before the 2020 Society for the Improvement of Psychological Science (SIPS) conference in Victoria, Canada. Then everything changed. The repliCATS team created our first virtual workshop, designed to accommodate all the original workshop participants, scattered across more than 10 time zones. What we ended up with was the repliCATS2020 workshop week, which ran from 12-19 May, …
23 June, 2020 repliCATS, work... -
Platform outage on 13 May. If you assessed claims between 12-13 May, please read this.
Hi repliCATS participants, routine system maintenance undertaken on the repliCATS platform on the afternoon of 13 May (Australian time) resulted in our database being wiped from the live platform. We identified the problem and restored the platform shortly after, but we have irretrievably lost all of the assessments you submitted on the platform on 12 May and the …
13 May, 2020 Platform -
Update: repliCATS workshops on 26+30 Mar postponed
Update #2 in light of COVID-19 (25 Mar) - These workshops on Thursday 26 March & Monday 30 March have been postponed indefinitely. This will probably not come as a surprise in light of the continually evolving response to the COVID-19 pandemic and, closer to home, how university staff are needing to respond rapidly to teaching & research commitments in …
13 March, 2020 workshop, Event...