SCORE program renewed for phase 2

We’re excited to announce that DARPA has renewed the SCORE program—and the repliCATS project—for a second phase!

What does that mean? It means your predictions met accuracy thresholds! And we'll return in 2021. In phase 2, we'll evaluate a fresh set of claims for likely replicability, as well as ask new questions, for example about validity and generalisability. We'll continue developing the repliCATS protocol as an alternative form of rapid peer review, along with peer review training that uses the opportunities for feedback and calibration our protocol provides.
All of our achievements to date are attributable to the 550+ participants who have collaborated with us so far. We hope you'll continue to join our workshops, earn our badges, and keep us accountable.

For more information about the wider SCORE program, see:

Watch 📺 Metascience should not be defined by its methods

Join our CI Fiona Fidler & philosopher of science Rachel Brown for a webinar on "Metascience should not be defined by its methods".

Metascience, or metaresearch, is a field of research that has grown out of the replication crisis. Amongst other things, metascience evaluates and monitors open science initiatives and other interventions to improve scientific practices and cultures. It studies the impact of institutional reward structures, performance metrics, incentives, and methods of scientific resource allocation. It also evaluates peer review practices—both current, and alternative future proposals. As a field, it is (unfortunately) often defined as “the scientific study of science itself” or “the science of doing science”. In this talk I’ll explain why metascience shouldn’t be defined by appeal to a monolithic scientific method, and discuss other ways of conceptualizing what this new epistemic community does.

The webinar is now available to watch here:

Closed: Express interest in assessing COVID-19 research claims

The repliCATS project — and the wider SCORE program — has been expanded by DARPA to include 100 COVID-19 research claims from the social and behavioural sciences.

Governments around the world may rely on social and behavioural science research to understand crises and inform policies. Inclusion of COVID claims will ensure that the repliCATS project is tested on a range of social and behavioural science (SBS) research areas, making it uniquely tailored for the critical evaluation of a broad spectrum of research claims.

The repliCATS project will run a virtual workshop from 2-8 September to assess these COVID-19 claims, and is looking for 100 participants.

Are you new to the repliCATS project? If so, you can watch our chief investigator, Prof Fiona Fidler introduce the project, or read more about what it means to assess research claims.

Express interest in workshop (closed) by Friday, 14 August

Expressions of interest in participating in our workshop are now open; places are capped at 100 participants.

**UPDATE: EOIs closed on 14 August. Thank you so much for expressing interest. You will have received an e-mail letting you know if you have been confirmed for the workshop or if you are on a waitlist. Rolling offers were made on a first-come basis. If you have any questions, please contact us on**

The key details again (in case you missed them):

  • There are 100 assessment grants of US$200 (AUD 280) available for eligible participants.
  • You need to be available between 2-8 September, for ~6 hours over the week.
  • Each participant will be asked to assess ten claims.
  • You can assess claims when it suits you.
  • You'll work in a group with 3-6 other participants, using our online platform.
  • If this is your first time assessing claims, don't worry. We'll run an intro session to help you get started, and be available to help you over the week.
  • If you have already expressed interest before 3 August, you do not need to re-apply. We will contact you shortly with additional details about the workshop, including assessment grant eligibility.

Form not working? Click here.

National Science Week: Assessing the reliability of COVID research – a repliCATS webinar

As part of National Science Week, join the repliCATS project team for a peek into how and why we're assessing COVID-19 claims.

Responding to crises like the COVID-19 pandemic requires a strong evidence base. But which scientific results are trustworthy? During this crisis there have been high-profile cases of scientific misconduct and retractions, as well as more general concerns about the quality of research being generated quickly. This is set against the backdrop of the ‘replication crisis’, which has revealed low rates of replicability for research claims in several scientific disciplines.


Date: Thursday 20 August 2020
Time: 12:00pm - 1:30pm
Location: Webinar
Cost: Free

NB: this webinar is separate to the virtual workshop that we will run between 2-8 September.

For more information, and to register:

Reimagining trust in science panel (19 Aug), feat Fiona Fidler

Recent crises have prompted discussion on ‘Trust in Science’. This raises the question: is science the kind of thing that can be trusted?

Is ‘Trust in Science’ the same thing as trusting individual scientists or research organisations, or is it something else? How do we maintain ‘Trust in Science’ while holding a reasonable level of uncertainty about individual research results?

An understanding of ‘Trust in Science’ needs to incorporate the emotional and political engagements revealed by public attitudes, the contested nature of professional identity and authority in contemporary society, and recent work on the ‘replication crisis’. This History and Philosophy of Science seminar will discuss recent work on the philosophy of trust in science.

Speakers will include:

For more information, visit:

Monthly winners!

FINAL round winners: 1 - 30 July

This was our final repliCATS remote round for phase 1. And at the end, we reached our goal of assessing 3000 claims! It was a phenomenal effort, with 1840 assessments made across 571 claims.

This round returned to basics, awarding the three participants who assessed the most claims.

Congratulations to:

  • sugarglider_419, who assessed an incredible 211 claims(!) to win US$500

as well as

  • crocodile_280, who assessed 102 claims, and
  • echidna_219, who assessed 55 claims, to win US$250 each.

Round 6 winners: 1 - 30 June

This round was all about business and economics claims! In total, participants submitted 362 assessments of eligible claims. (And another 798 assessments of claims from other disciplines.)

Congratulations to:

  • sugarglider_419, who assessed a whopping 77 business and economics claims and wins US$500

as well as

  • cockatoo_202, who assessed 41, and
  • dingo_268, who assessed 28 claims, to win US$250 each.

Round 5 winners: 22 April - 22 May

We intended to run a lottery for everyone who completed 10 or more claims outside our virtual workshop. However, with so many of our wonderful participants concentrating on our virtual workshop, no one was eligible for this month's prize.

Round 4 winners: 23 Mar - 22 Apr 2020

We ran a lottery, and had 17 participants who completed 10+ assessments over this period. However, for several reasons only two out of those 17 participants were eligible to win a prize.

A number of our participants fall under an ethics agreement which doesn't allow them to win these prizes, so they continue to win kudos and gratitude and this emoji: 😻!

Now to the winners -- it was a possum double act! Congrats to:

  • possum_396 and
  • possum_745

who both win US$250 for assessing 10+ claims in round 4!

Round 4 was a quieter round (which is to be expected given we had to reschedule so many planned workshops). We only had 39 participants who contributed assessments, but they made assessments across 601 claims!

Round 5 (23 Apr - 22 May) is another lottery! Click here for more information on this month's prizes!

Round 3 winners: 18 Feb - 22 Mar 2020

We ran a lottery, and had 23 participants who completed 10+ assessments over this period. However, for several reasons only two out of those 23 participants were eligible to win a prize.

A number of our participants fall under an ethics agreement which doesn't allow them to win these prizes, so instead they win kudos and gratitude and this emoji: 😻!

Now to the winners. Congrats to:

  • possum_396 and
  • goanna_685

who both win US$250 for assessing 10+ claims in round 3!

In total for round 3, we had 140 participants who contributed assessments across 471 claims!

Round 2 winners: 17 Jan - 18 Feb 2020

Congrats to:

  • numbat_339, who submitted 21 unique reasons to support their judgements over the round and wins US$500!
  • possum_396, for submitting the second-highest number of reasons, 15 (US$250)
  • and thornydevil_715, who takes third place with 12 (US$250).

Round 1 winners: 9 Dec - 17 Jan 2020

Our first round of remote claims assessment ended on 17 Jan 2020.

A huge congrats to:

  • goanna_685 for assessing a whopping 30 claims, and winning US$800!
  • pobblebonk_711 for assessing 19 claims (US$250)
  • and a tie-break gave pobblebonk_699 third place (US$250).

Round 3 is a lottery! And closes on 17 March 2020. Click here for more information on this month's prizes!


Watch 📺 Fiona Fidler gives RIOT Science talk on all things repliCATS

Earlier this month, Fiona Fidler gave a talk about the repliCATS project, and some of our plans for the future of the project. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can re-watch it via this link (YouTube)!

Talk abstract

The repliCATS project evaluates published scientific research. As the acronym—Collaborative Assessments for Trustworthy Science—suggests, repliCATS is a group activity, centred around assessing the trustworthiness of research claims. Reviewers first make private individual assessments about a research claim—judging its comprehensibility, the prior plausibility of the underlying effect, and its likely replicability. Reviewers then share their judgements and reasoning with group members, providing both new information and the opportunity for feedback and calibration. The group interrogates differences in opinion and explores counterfactuals. After discussion, there is a final opportunity for privately updating individual judgements. Importantly, the repliCATS process is not consensus-driven – reviewers can disagree, and their ratings and probability judgements are mathematically aggregated into a final assessment. At the moment, the repliCATS platform exists primarily to predict replicability. Launched in January 2019 as part of the DARPA SCORE program, over 18 months repliCATS elicited group assessments and captured associated reasoning and discussion, for 3,000 published social scientific research claims in 8 disciplines (Business, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology). The repliCATS team are now working to extend the platform beyond merely predicting replicability, to deliver a more comprehensive peer review protocol. Suspected advantages of a repliCATS process over traditional peer review include: inbuilt training and calibration; feedback that is intrinsically rewarding; an inherently interactive process, but one which does not implicitly rely on ‘consensus by fatigue’; and a process that actively encourages interrogation. This talk will present some preliminary findings, and discuss the future of the platform.
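The abstract notes that reviewers' probability judgements are "mathematically aggregated into a final assessment", without specifying the rule used. As a purely illustrative sketch (not the repliCATS team's actual method), one simple way to combine individual replicability probabilities is to average them in log-odds space, which tempers the pull of extreme individual estimates:

```python
import math

def aggregate_judgements(probabilities):
    """Combine individual probability judgements into one group estimate.

    Hypothetical illustration: average the judgements in log-odds space,
    then map the mean back to a probability. This is NOT necessarily the
    aggregation rule repliCATS uses; it is one common, simple choice.
    """
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))  # inverse logit

# Three reviewers' final (post-discussion) replicability judgements
group_estimate = aggregate_judgements([0.6, 0.75, 0.4])
```

Because reviewers update privately before aggregation, a rule like this can reflect persistent disagreement rather than forcing the group to converge on a single consensus number during discussion.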
