Participating in our project means making judgements about the credibility of a published research claim. Read on to find out how you can get involved!
What does “participating” mean exactly?
In Phase 1, we assessed the replicability of 3000 research claims from published articles and 100 COVID-19 pre-prints in the following disciplines: criminology, economics, education, marketing, management, psychology, public administration, and sociology. The full list of journals from which the claims are drawn is available here.
In Phase 2, we’ll be expanding the scope to examine the whole paper, and to answer questions that cover other signals of credibility, including transparency, robustness, replicability, validity and generalisability.
What won’t change is our approach: we don’t ask you to do this alone. Our method (the IDEA protocol) involves structured group discussion – each claim is assessed by a group of 3-5 people, and you get to see what others in your group say before submitting your final judgement.
For Phase 2, we’ll run a series of workshops starting in June 2021.
- Participants will be eligible for US$200 assessment grants.
- Express interest in participating in Phase 2, and we’ll let you know when we open sign-ups for workshops.
- Registrations for the pre-SIPS repliCATS workshop in June are now open.
- To express interest in upcoming workshops, use this form: https://melbourneuni.au1.qualtrics.com/jfe/form/SV_2lE8eMubIAAS2i1
Why get involved?
In Phase 1 we achieved something extraordinary: over 550 participants from around the world contributed to evaluating the 3000 claims. Be a part of the largest effort to evaluate replicability in the social & behavioural sciences!
You’ll also get to:
- improve your peer-review & error detection skills
- calibrate your judgements & reasoning against your peers
Express interest in Phase 2 here: https://melbourneuni.au1.qualtrics.com/jfe/form/SV_2lE8eMubIAAS2i1
Who can assess claims? Every participant counts – don’t worry about being an expert; we need diverse views.
Our method – the IDEA protocol – harnesses the power of structured group discussion to evaluate the credibility of published research. We have built a custom cloud-based platform to gather your evaluations. We ask you to read a paper and evaluate a set of credibility signals for it, including transparency, validity, robustness and replicability.
Part of the scope of the repliCATS project, and indeed the wider SCORE program, is to examine markers of expertise (e.g. education, experience, domain knowledge) and the role they may play in making good judgements about the likelihood that a research claim will replicate.
This means an eligible research participant for our project is someone who has completed or is completing a relevant undergraduate degree, is over 18 years of age, and – importantly – is interested in making judgements about replicability.
If you would like more information, you can:
- Watch this short video demo of the platform on our resources page.
- Check out what other participants have said about getting involved.
Just want to stay up-to-date on the project? Subscribe to our newsletter
We send out a quarterly newsletter about our project. By subscribing you’ll get a short, snappy update letting you know what we’ve been up to and what’s happening with the repliCATS project.
Privacy Collection Notice – the repliCATS project.
Human ethics application ID: 1853445.1
The information on this form is being collected by the repliCATS project, a research group at the University of Melbourne. You can contact us at repliCATSfirstname.lastname@example.org.
The information you provide will be used to communicate with you about the repliCATS project. The information will be used by authorised staff for the purpose for which it was collected, and will be protected against unauthorised access and use.