Watch 📺 Fiona Fidler gives RIOT Science talk on all things repliCATS

Earlier this month, Fiona Fidler gave a talk about the repliCATS project and some of our plans for its future. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can re-watch it via this link (YouTube)!

Talk abstract

The repliCATS project evaluates published scientific research. As the acronym—Collaborative Assessments for Trustworthy Science—suggests, repliCATS is a group activity, centred around assessing the trustworthiness of research claims. Reviewers first make private individual assessments about a research claim—judging its comprehensibility, the prior plausibility of the underlying effect, and its likely replicability. Reviewers then share their judgements and reasoning with group members, providing both new information and the opportunity for feedback and calibration. The group interrogates differences in opinion and explores counterfactuals. After discussion, there is a final opportunity for privately updating individual judgements. Importantly, the repliCATS process is not consensus-driven – reviewers can disagree, and their ratings and probability judgements are mathematically aggregated into a final assessment. At the moment, the repliCATS platform exists primarily to predict replicability. Launched in January 2019 as part of the DARPA SCORE program, over 18 months repliCATS elicited group assessments, and captured associated reasoning and discussion, for 3,000 published social scientific research claims in 8 disciplines (Business, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology). The repliCATS team are now working to extend the platform beyond merely predicting replicability, to deliver a more comprehensive peer review protocol. Suspected advantages of a repliCATS process over traditional peer review include: inbuilt training and calibration; feedback that is intrinsically rewarding; an inherently interactive process, but one which does not implicitly rely on ‘consensus by fatigue’; and a process that actively encourages interrogation. This talk will present some preliminary findings, and discuss the future of the platform.
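For readers curious what "mathematically aggregated" can look like in practice, here is a minimal sketch in Python. It assumes a simple unweighted average of individual replicability probabilities on the log-odds scale; the function name and the aggregation rule are illustrative only, and the actual methods used by the repliCATS team may differ.

```python
import math

def aggregate_replicability(probabilities):
    """Combine individual probability judgements into one group estimate.

    Hypothetical illustration: an unweighted mean on the log-odds scale.
    The actual repliCATS aggregation methods may differ.
    """
    # Clip away exact 0 and 1 values so the log-odds transform is defined.
    eps = 1e-6
    clipped = [min(max(p, eps), 1 - eps) for p in probabilities]

    # Transform to log-odds, average, then transform back to a probability.
    log_odds = [math.log(p / (1 - p)) for p in clipped]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Example: five reviewers' final (post-discussion) replicability judgements.
group_estimate = aggregate_replicability([0.35, 0.50, 0.60, 0.45, 0.40])
print(f"Aggregated replicability estimate: {group_estimate:.2f}")
```

Simple averaging of the raw probabilities would be another plausible choice; this is purely an illustration of the aggregation step, not the project's actual method.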


💬 participant tweets we love


repliCATS monthly prizes and rules

Round 7 (final round!): 1 July - 30 July 2020

It is the final remote round for repliCATS in phase 1! So, we'll end how we started, by giving prizes to the three participants who assess the most claims in July 2020, as follows:

  • US$500 - for the participant who assesses the most claims
  • US$250 - for each of the next two highest participants.

Round 6: 1 June - 30 June 2020 - assess the most business or economics claims!

In this round we'll calculate how many business or economics** claims people complete between 1 June and 30 June 2020 (midnight, AEST).

  • US$500 - for the person who completes the most business or economics claims*
  • US$250 - for the people who complete the 2nd and 3rd most business or economics claims*

*A completed claim counts as submitting a round 2 assessment.

** Claims are considered Business (marketing) or Economics claims if they are from the journals specified in the table in the FAQs under "From which journals are the 3000 claims being chosen?"

--


*NB: each month the prizes might change.

--

How it works (v1: 9 Dec 2019; v2: 5 April 2020)

  • Prizes are awarded monthly, and the terms of each month's prize will be announced when the prizes for the previous month are released
  • Prizes are based on participation in that month
  • Prizes are awarded to the top three participants based on that month's prize criteria
  • There will always be a winner. Tie-break rules: in the case of a tie, additional participation metrics (e.g. number of claims, number of unique comments, number of up-votes) will be used to break the tie
  • If tie-break rules are applied, the metrics used to determine winners will be announced
  • Winners will only be named by their platform avatar name (i.e. they will remain anonymous)
  • A list of winners each month will be published on the repliCATS website [link to website], and posted to the platform community page
  • The first time you win, you will be awarded a “cat-as-trophy” badge which will appear on your profile on the community page

Eligibility

  • User has not violated the repliCATS_CodeofConduct
  • All repliCATS workshop participants are eligible for the prizes in the month of the workshop
  • No repliCATS paid staff members are eligible
  • Participants can win multiple times, in multiple categories (if applicable)

Spirit of participation

  • Your judgements and reasoning will contribute to the data being collected for each of the 3,000 claims being assessed for the repliCATS project
  • We trust you won’t cheat, i.e. each user will have only one account and no one will share accounts
  • You will abide by the repliCATS_CodeofConduct

Payment

  • The repliCATS project will process winners’ payments in the next monthly financial cycle, after receiving the necessary documentation from the winner
    • If winners do not provide the necessary documentation within 2 months of notification or of the repliCATS project phase completing (whichever comes first), they are no longer eligible for the prize (added 05 April 2020)
  • The repliCATS project will not cover any recipient’s bank fees
  • Payments are made in USD and are subject to the applicable exchange rate

Past months' categories

December 2019 - US$1000 in prizes for assessing the most claims

In our first month of launching the remote group platform, we'll give prizes to the three participants who assess the most claims in the period 9 Dec 2019 - 9 Jan 2020, as follows:

  • US$800 - for the participant who assesses the most claims
  • US$250 - for each of the next two highest participants.

January 2020 - US$1000 for the number of unique reasons provided

In this round, from 17 Jan - 17 Feb 2020, we'll give prizes to the three participants who provide the most unique reasons, as follows:

  • US$500 - for the participant who provides the most unique reasons*
  • US$250 - for each of the next two highest participants.*

February 2020 - complete 10 claims, US$250 lottery draw for four participants

In this round, from 18 Feb - 22 Mar 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw four winners from that pool.

  • US$250 - for each person drawn this month*
    • A completed claim counts as submitting a round 2 assessment.

March 2020 - complete 10 or more claims, US$250 lottery for four participants

In this round, from 23 Mar - 22 Apr 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw four winners from that pool.

  • US$250 - for each person drawn this month*
    • A completed claim counts as submitting a round 2 assessment.

Round 5: 23 April - 22 May 2020 - complete 10 or more claims, US$250 lottery for up to four participants

In this round, from 23 Apr - 22 May 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw up to four winners from that pool.

  • US$250 - for each person drawn this month*
    • A completed claim counts as submitting a round 2 assessment.

repliCATS workshop went virtual – over 550 claims assessed in the week

Our last phase 1 workshop was originally scheduled to run before the 2020 Society for Improving Psychological Sciences (SIPS) conference in Victoria, Canada. Then everything changed.

The repliCATS team created our first virtual workshop, designed to accommodate all the original workshop participants, scattered across more than 10 time zones. What we ended up with was the repliCATS2020 workshop week, which ran from 12-19 May and saw over 150 participants and facilitators contribute 2,238 individual judgements, or the equivalent of 560 claims!

That was more than we completed at the last SIPS face-to-face workshop in 2019.

Of course, that didn't stop participants re-creating their favourite bits of our face-to-face workshops.

The repliCATS team had a few surprises for participants though: we reprised our very popular closing ceremony and awards virtually, including a musical finale that we cannot reproduce here due to copyright issues :D. You'll just have to take Julia and Dax's word for it.

The repliCATS team have learned a lot from running this first virtual workshop, which required some significant adjustments compared to its face-to-face equivalents. However, participant feedback has left us feeling that we responded successfully, and that we have a platform on which we can run future workshops virtually without compromising participant engagement. Of course, some of this success is thanks to our participants themselves, who engaged with this new format enthusiastically and with an open mind. So, all that is left to report is a big thank you to the SIPS community. We're humbled by the contribution this community has made across two huge workshops in 2019 and 2020, contributing to 1,100 of the 3,000 claims we're aiming to assess!

What's next for repliCATS?

While this might have been the last big workshop the repliCATS team will run in phase 1 of the SCORE program, we still have a little way to go. As of 22 June, we have assessments for over 2,775 of the 3,000 claims. We are hoping to complete assessments by early August 2020, and the repliCATS remote platform will remain live and operational for a bit longer.


Platform outage on 13 May. If you assessed claims between 12-13 May, please read this.

Hi repliCATS participants, routine system maintenance undertaken on the repliCATS platform on the afternoon of 13 May (Australian time) resulted in our database being wiped from the live platform.

We identified the problem and restored the platform shortly afterwards, but we have irretrievably lost all of the assessments you submitted on the platform on 12 May and the first part of 13 May (Australian time). Your private information and demographic information are stored separately and have not been compromised in any way, but we’ve lost all assessments submitted in a 24-36 hour period.

We're so sorry; we appreciate the work that goes into each assessment you make on the platform. We have never experienced a technical issue like this before and have no reason to expect a repeat occurrence.

You can re-do and re-submit assessments for the claims that you have already assessed, if only to re-enter the quantitative judgements for these lost claims.

If you have any questions or would like to discuss this further, please contact us: repliCATS-contact@unimelb.edu.au and we will get back to you as soon as possible.


Update: repliCATS workshops on 26+30 Mar postponed

**Update #2 in light of COVID-19 (25 Mar) - The workshops on Thursday 26 March & Monday 30 March have been postponed indefinitely.

This will probably not come as a surprise to you in light of the continually evolving response to the COVID-19 pandemic and, closer to home, how university staff are having to rapidly adapt their teaching & research commitments to a remote environment.

We are currently planning how we can support our participants in a fully virtual world, and we would like your thoughts and feedback on this. Please e-mail repliCATS-project@unimelb.edu.au**

--

**Update in light of COVID-19 (13 Mar) - we've been advised that we should not hold these workshops as face-to-face events. Therefore, we will now run two virtual workshops, on 26 March and 30 March, starting at 10am AEST.**

--
We're running two virtual repliCATS workshops on:

  • 26 March 2020 (via zoom, starting at 10am)
  • 30 March 2020 (via zoom, starting at 10am)

All workshops will be livestreamed from 10am - 11.30am.

For more info & to register via eventbrite: https://www.eventbrite.com.au/e/replicats-workshop-month-aus-tickets-94336327495

Agenda for the workshop

  • 10.00am: Responding to the Replication Crisis: the repliCATS project – talk by Prof Fiona Fidler & Dr Bonnie Wintle
  • 10.40am: Q&A + break
  • 11.00am: Getting started - we'll assess a claim together using the repliCATS platform, so you can predict the replicability of a claim as a group.
  • 11.30am: Wrap-up.

Who should take part?

You should attend if you are from, or familiar with, one of the disciplines listed below, and are:

  • open to learning more about replicability, open science and meta-research
  • keen to improve your peer review and error detection skills
  • interested in calibrating your judgements and reasoning against your peers
  • wanting to be part of one of the largest attempts to evaluate the reliability of the published evidence base in the social & behavioural sciences!

What are these workshops about?

We aim to estimate the replicability of published research claims in the social and behavioural sciences. Following an introduction to the replication crisis and our research project, you will read and evaluate published research claims from one of the following fields: Business research, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology.

Workshops are fun, and you'll learn lots about research methods, critical thinking, and effective peer review! Follow the convo @repliCATS #repliCATS.

Read more here.

https://twitter.com/cantabile/status/1191959379404349442?s=20

Read about other workshops we've run, including at SIPS2019 and AIMOS2019.


repliCATS workshop at SIPS2020

We're running a one-day workshop at SIPS2020 in Victoria, Canada.

Date: 20 June 2020
Time: All-day
Venue: In downtown Victoria (Canada)

We have 150 spots available, and up to 100 travel grants** worth US$550 each for participants travelling to the workshop (i.e. participants who live outside the immediate area).

For more details, and to apply for the repliCATS workshop: https://forms.gle/JhyHLmYyyg9uFxE4A (via Google form)

NB. If you intend to attend SIPS2020 (21-23 June) you will have to register separately.

**03 Mar 2020: please note that we have offered 100 travel grants, and all applications for grants are now on a waitlist.

--

For more information, you can contact us at repliCATS-project@unimelb.edu.au or DM us on Twitter, @replicats.


The new-look platform. What has changed?

Are you logging in to the repliCATS platform for the first time since a face-to-face workshop at SIPS, AIMOS or at your institution? You might notice a few things have changed (we hope for the better!):

  • Returning users will be prompted to update their password - we generated temporary passwords (for security reasons) and you should have an e-mail in your inbox with that temporary password
    • Can't find that e-mail? Send us an e-mail at repliCATS-contact@unimelb.edu.au
  • Pick claims you want to assess and get started - you will now go straight into round 2.
    **We encourage you to submit your round 2 assessment straight away, even if you are the first one to do a claim (you can re-submit it at any time)**

    • You'll get an e-mail prompt when the 5th person has completed round 2 of a claim you've assessed - this gives you a final 72-hour window to update your round 2 estimates. As you might remember, we use a structured elicitation method called the IDEA protocol to assess claims. We aim for each claim to be assessed by 5 people (round 1), who then update their assessments after seeing their group's assessments and comments (round 2).
  • You can also join a claim that already has assessments - this means you might end up in a virtual group of people, so we encourage you to comment
  • You can filter claims by their status, or discipline - just pick the status and disciplines you want and click on the filter button on the top of the home screen.

Screen shot of platform

  • You can also quickly see the status of a claim by hovering over the little pie symbol - the number of people you see in the first box just tells you the total number of participants with access to that claim. The other three boxes show you the actual number of participants who have assessed the claim.

  • Claims you've assessed go to the bottom of the page - you can use the filter function or scroll to the very bottom of the list to see those claims, which will have the "round 2" tag
  • There's a community page - this page includes some of your personal statistics (including badges earned and claims assessed). It also has links to our news posts, Twitter and Reddit feeds. Get involved!
  • Links to "resources" and "glossary" pages - click on each to go to the relevant content on our website, including guides and videos for navigating the platform, information on the IDEA protocol, and handy video guides to statistical concepts.
  • We'll add a mix of new claims every month - every 4 weeks we'll refresh the claims available so there will always be something new for you to assess - unless we run out of claims first!
  • New users can now create their own account! - you won't see the home page or be able to assess a claim until you have completed the consent and demographics survey, though.
