Update: repliCATS workshops on 26 & 30 Mar postponed

**Update #2 in light of COVID-19 (25 Mar) – the workshops on Thursday 26 March & Monday 30 March have been postponed indefinitely.**

**This will probably not come as a surprise in light of the continually evolving response to the COVID-19 pandemic and, closer to home, the way university staff are having to rapidly adapt their teaching & research commitments to a remote environment.**

**We are currently planning how we can support our participants in a fully virtual world, and we would like your thoughts and feedback. Please e-mail repliCATS-project@unimelb.edu.au**

**Update in light of COVID-19 (13 Mar) – we’ve been advised that we should not hold these workshops as face-to-face events. Therefore, we will now run two virtual workshops, on 26 March and 30 March, starting at 10am AEST.**


We’re running two virtual repliCATS workshops on:

  • 26 March 2020 (via Zoom, starting at 10am)
  • 30 March 2020 (via Zoom, starting at 10am)

Both workshops will be livestreamed from 10am to 11.30am.

For more info & to register via Eventbrite: https://www.eventbrite.com.au/e/replicats-workshop-month-aus-tickets-94336327495

Agenda for the workshop

  • 10.00am: Responding to the Replication Crisis: the repliCATS project – talk by Prof Fiona Fidler & Dr Bonnie Wintle
  • 10.40am: Q&A + break
  • 11.00am: Assess a claim using the repliCATS platform – getting started! We’ll work through a claim together, so you can try predicting the replicability of a claim as a group.
  • 11.30am: Wrap-up.

Who should take part?

You should attend if you are from, or familiar with, one of the disciplines listed below (see "What are these workshops about?"), and are:

  • open to learning more about replicability, open science and meta-research
  • keen to improve your peer review and error detection skills
  • interested in calibrating your judgements and reasoning against your peers
  • wanting to be part of one of the largest attempts to evaluate the reliability of the published evidence base in the social & behavioural sciences!

What are these workshops about?

We aim to estimate the replicability of published research claims in the social and behavioural sciences. Following an introduction to the replication crisis and our research project, you will read and evaluate published research claims in one of the following fields: Business research, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology.

Workshops are fun, and you learn lots about research methods, critical thinking, and effective peer review! Follow the convo @repliCATS #repliCATS.

Read more here.

Read about other workshops we’ve run, including at SIPS2019 and AIMOS2019.



repliCATS workshop at SIPS2020

We’re running a one-day workshop at SIPS2020 in Victoria, Canada.

Date: 20 June 2020
Time: All-day
Venue: In downtown Victoria (Canada)

We have 150 spots available, and up to 100 travel grants** worth US$550 each for participants travelling to the workshop (i.e. participants who live outside the immediate area).

For more details, and to apply for the repliCATS workshop: https://forms.gle/JhyHLmYyyg9uFxE4A (via Google form)

NB. If you intend to attend SIPS2020 (21-23 June) you will have to register separately.

**3 Mar 2020: please note that we have now offered all 100 travel grants; any further grant applications will be placed on a waitlist.

For more information, you can contact us at repliCATS-project@unimelb.edu.au or DM us on Twitter, @replicats.


The new-look platform. What has changed?

Are you logging in to the repliCATS platform for the first time since a face-to-face workshop at SIPS, AIMOS or at your institution? You might notice a few things have changed (we hope for the better!):

  • Returning users will be prompted to update their password – for security reasons we generated temporary passwords, and you should have an e-mail in your inbox with yours.
    • Can’t find that e-mail? Send us an e-mail at repliCATS-contact@unimelb.edu.au.
  • Pick claims you want to assess and get started – you will now go straight into round 2.
    **We encourage you to submit your round 2 assessment straight away, even if you are the first person to assess a claim (you can re-submit at any time)**

    • You’ll get an e-mail prompt when the 5th person has completed round 2 of a claim you’ve assessed – this opens a final 72-hour window in which you can update your round 2 estimates. As you might remember, we use a structured elicitation method called the IDEA protocol to assess claims. We aim for each claim to be assessed by 5 people (round 1), who then update their assessments after seeing their group’s estimates and comments (round 2) – see the sketch after this list.
  • You can also join a claim that already has assessments – this means you might end up in a virtual group of people, so we encourage you to comment.
  • You can filter claims by their status, or discipline – just pick the status and disciplines you want and click on the filter button at the top of the home screen.

[Screenshot of the platform]

  • You can also quickly see the status of a claim by hovering over the little pie symbol – the number of people you see in the first box just tells you the total number of participants with access to that claim. The other three boxes show you the actual number of participants who have assessed the claim.

  • Claims you’ve assessed go to the bottom of the page – use the filter function or scroll to the very bottom of the list to see those claims, which will have the "round 2" tag.
  • There’s a community page – this page includes some of your personal statistics (including badges earned and claims assessed). It also has links to our news posts, Twitter and Reddit feeds. Get involved!
  • Links to "resources" and "glossary" pages – click on each to go to the relevant content on our website, including guides and videos on navigating the platform, information on the IDEA protocol, and handy video guides to statistical concepts.
  • We’ll add a mix of new claims every month – every 4 weeks we’ll refresh the claims available so there will always be something new for you to assess – unless we run out of claims first!
  • New users can now create their own account! – you won’t see the home page or be able to assess a claim until you have completed the consent and demographics survey, though.
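
If you’re curious how those rounds fit together, below is a minimal, hypothetical sketch of the workflow in Python. It is not the repliCATS platform code: the `Claim` class, its method names and the simple mean aggregation are illustrative assumptions; only the group size of 5, the ability to re-submit round 2 estimates, and the 72-hour revision window come from the description above.

```python
# A minimal, hypothetical sketch of the round structure described above.
# NOT the repliCATS platform code: the class, method names and the simple
# mean aggregation are illustrative assumptions. Only the group size of 5,
# round-2 resubmission, and the 72-hour revision window come from the post.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from statistics import mean
from typing import Dict, Optional

GROUP_SIZE = 5                         # assessors per claim (from the post)
REVISION_WINDOW = timedelta(hours=72)  # final window to update estimates

@dataclass
class Claim:
    claim_id: str
    round2: Dict[str, float] = field(default_factory=dict)  # assessor -> estimate
    window_opens: Optional[datetime] = None

    def submit(self, assessor: str, estimate: float, now: datetime) -> None:
        """Submit (or re-submit) a round 2 replicability estimate."""
        if self.window_opens and now > self.window_opens + REVISION_WINDOW:
            raise ValueError("The 72-hour revision window has closed.")
        self.round2[assessor] = estimate
        # When the 5th assessor submits, the revision window opens
        # (on the real platform, this is when the e-mail prompt goes out).
        if len(self.round2) == GROUP_SIZE and self.window_opens is None:
            self.window_opens = now

    def group_estimate(self) -> float:
        """Illustrative aggregate only: a simple mean of the group's estimates."""
        return mean(self.round2.values())

# Usage: five assessors submit, the window opens, and one assessor revises.
claim = Claim("claim-001")
start = datetime(2020, 3, 26, 10, 0)
for i, p in enumerate([0.4, 0.55, 0.6, 0.35, 0.5]):
    claim.submit(f"assessor-{i + 1}", p, now=start + timedelta(days=i))
claim.submit("assessor-1", 0.45, now=start + timedelta(days=4, hours=12))
print(round(claim.group_estimate(), 2))  # -> 0.49
```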

Listen 📻 Fiona Fidler guests on Everything Hertz podcast

If you missed it, Dan Quintana and James Heathers, who run the very awesome (we think) Everything Hertz podcast, recently interviewed our chief investigator Fiona Fidler live. They talk about:

  • The story behind repliCATS
  • Australia’s best export, Tim Tams, and
  • The SCORE project organised by DARPA, amongst other things.


Episode citation and permanent link:
Quintana, D. S., & Heathers, J. A. J. (Hosts). (2019, October 21). “Predicting the replicability of research” [Audio podcast episode]. Everything Hertz. DOI: 10.17605/OSF.IO/KZPYG. Retrieved from https://osf.io/kzpyg/


New milestone: 800+ claims assessed!

The repliCATS team have had a very busy November 2019!

[Photo: AIMOS2019 repliCATS workshop]

First, we ran a 100-person workshop before the inaugural Association for Interdisciplinary Meta-research and Open Science (AIMOS) conference on 6 November 2019 at the University of Melbourne.

Of course, there were Tim Tams, prizes and a whole lot of energy at the pre-AIMOS workshop.

The following week, we were in Wellington (New Zealand), where we ran a smaller workshop with the lovely folk over in psychology!

What’s next for the repliCATS team?

We’re busy tweaking our platform, and in early December 2019 we plan to launch a fully remote platform! This means people can participate virtually from anywhere in the world – from individuals looking for claims they are interested in assessing, to journal clubs or discussion groups picking claims to work on together.

Want to find out more? Get in touch via repliCATS-project@unimelb.edu.au


Apply for a travel grant for the next repliCATS workshop

We will be running an all-day workshop on 6 November 2019 at the University of Melbourne.

There are 55 travel grants of US$400 available for participants from outside metropolitan Melbourne, and free registration to the first Association for Interdisciplinary Meta-research & Open Science (AIMOS) conference is included for all participants.

Apply now to attend this full-day workshop.

What will you do at the workshop?

The repliCATS project aims to estimate the replicability of published research claims in the social and behavioural sciences.

Working in small groups of 5-6, you will be asked to evaluate approximately a dozen social and behavioural science research claims. Each group will have its own facilitator, who will guide you through a structured elicitation protocol.

The workshop will be fully catered.

Why should you participate?

We recently ran a workshop in Rotterdam where we assessed 575 claims over two days! The 200+ workshop participants told us that they appreciated the opportunity to consider and evaluate many different scientific claims and to gain exposure to a variety of research designs and approaches. Our hope is that participating in the workshop will strengthen the way you think critically about the evidence you are evaluating.

More information about the travel grants

There are 55 travel grants available to participants who live outside of metropolitan Melbourne (including remote Victoria, interstate and overseas). The amount to be paid will be USD 400 (~AUD 580). The travel award is conditional on your participation prior to and during the workshop. These grants also include free registration for the AIMOS conference. Further details about the travel grants can be found here.

Have questions?

You can contact Raquel Ashton by e-mailing repliCATS-project@unimelb.edu.au


Meet the team: Hannah Fraser

Hannah Fraser is the repliCATS research coordinator. In a project about gathering experts and facilitating their ability to assess the replicability of research, she sees her role as facilitating the efforts of the repliCATS team. A meta-facilitator for meta-research.

Hannah completed her PhD in the Quantitative and Applied Ecology group at the University of Melbourne and is passionate about improving the rigour, generalisability and usefulness of ecological research. This began as an exploration of the way researchers study woodland birds (birds that live and/or breed in Australia’s vanishing woodland habitats) and, inspired by Fiona Fidler and the meta-researchers from psychology, medicine and economics, has now branched into studying reproducibility and replicability.

Fortunately, this makes her an excellent fit for the repliCATS project.

Aside from the small task of facilitating experts’ assessments of 3,000 research claims for the repliCATS project, Hannah’s research interests include:

  • Questionable research practices – identity, incidence, implications
  • Researchers’ views on replication studies – value, venues, vagaries
  • Many analysts, one ecology dataset, one environmental biology dataset – duplication, differences, divergence
  • Open science innovations – effectiveness, efficiency, endorsement.

Some of Hannah’s best thinking happens on long walks with colleagues, friends, family and dogs. To relax, she gets into home improvement (just don’t ask her to paint your house) and loves to play board games with other aficionados.

Her top 3 board games are:

  1. Wingspan (a board game about birds – enough said)
  2. Small World (berserker orcs, yes please!)
  3. Sushi Go Party (because it’s fast and silly)

Hannah has also featured on Simine Vazire’s blog, sometimes i’m wrong, discussing the challenges facing ecology: https://sometimesimwrong.typepad.com/wrong/2019/07/status-part-i.html

You can follow Hannah on Twitter: @HannahSFraser


Milestone: first 575 claims assessed!

The first repliCATS elicitation workshop was held in Rotterdam on 5-6 July 2019, before the annual Society for the Improvement of Psychological Science (SIPS) conference. We are so grateful to the SIPS2019 organisers for accommodating us!

This workshop was the first test of our custom platform, as well as the first deployment of the IDEA protocol to crowdsource assessments of the replicability of research claims in the social and behavioural sciences.

Here are some workshop statistics:

  • we had 156 participants and 30 facilitators working in groups of 5-6 people over the two days…
  • …at the end of which we had assessed 575 (out of 3,000) research claims in psychology, marketing and experimental economics…
  • …that amounted to over 27,000 quantitative data points and 12,072 justifications or pieces of qualitative reasoning,
  • and in doing so, we consumed 32 packets of Tim Tams.

There were also some prizes!

What’s next for the repliCATS project?

The next big workshop for the repliCATS project will be held at the University of Melbourne on 5 November 2019, before the inaugural conference for the Association for Interdisciplinary Meta-research & Open Science (AIMOS).

There are 55 travel grants of US$450 available to subsidise participants from outside metropolitan Melbourne, interstate or overseas.

To find out more, and to express interest, visit: https://www.aimos2019conference.com.


Melbourne awarded up to US$6.5m to evaluate experts’ IDEAs about reproducibility and replicability

Media release | Wednesday, 3 April 2019
In response to the ‘replication crisis’ in a number of scientific fields, a major international research program seeks to develop Artificial Intelligence to help evaluate the credibility of scientific evidence we use to make decisions.

The University of Melbourne is currently the only Australian team selected by the US Government’s Defense Advanced Research Projects Agency (DARPA) to work on the “Systematizing Confidence in Open Research and Evidence” or SCORE program.

With up to US$6.5 million in funding, the University’s team, Collaborative Assessments for Trustworthy Science or the repliCATS project, will assess the replicability of thousands of social and behavioural research claims.

This work will inform the AI component of SCORE. It will also help us understand how to recognise credible research.

The University’s team plans to crowdsource thousands of experts – working in psychology, sociology, criminology, economics, business, marketing, political science and education – to meet the SCORE challenge.

Associate Professor Fiona Fidler, a reproducibility and Open Science expert, will lead the University of Melbourne team. She said: “This is by far the most ambitious reproducibility project the social and behavioural sciences have seen.

“It will be a defining moment in how we understand the evidence base in the published literature of those fields.”

For the full release, visit: https://about.unimelb.edu.au/newsroom/home.

Media enquiries: Louise Bennet | 0412 975 350 | e.bennet@unimelb.edu.au

