FAQs

One of the most frequently asked questions is, “Do I need to be an expert in any individual field to participate in the repliCATS project?” The answer is: No! To participate, you need to be 18 years or older, have completed or be completing an undergraduate degree, and be interested in evaluating research claims in scope for our project.

Below are blocks of frequently asked questions that may help you understand our project better.

If you still can’t find the answer to your question, you can contact us at repliCATS-contact@unimelb.edu.au.

About SCORE claims

In this project we use the word "research claim" or "claim" in a very specific way.

A research claim is a single major finding from a published study (for example, a journal article), as well as details of the methods and results that support this finding. A research claim is not equivalent to an entire article. Sometimes the claim as described in the abstract does not exactly match the claim that is tested. In this case, you should consider the research claim to be that which is described in the inferential test.

Phase 1 of the SCORE program involved evaluating the replicability of a central claim in 3,000 articles.

Phase 2 of the SCORE program involved evaluating 200 papers holistically, with a focus on a suite of credibility signals. As part of this effort, ~2,000 claims or findings were also evaluated across these 200 papers.

The Center for Open Science (USA) is selecting the 3,000 research claims as a subset of a larger set of 30,000 published papers in the social and behavioural sciences that are in scope for the SCORE program. The in-scope disciplines are:

  • criminology
  • economics
  • education
  • political science
  • psychology
  • public administration
  • marketing, and
  • sociology.

These claims will be drawn from the following journals:

Criminology

  • Criminology
  • Law and Human Behavior

Marketing/Organisational Behaviour

  • Journal of Consumer Research
  • Journal of Marketing
  • Journal of Marketing Research
  • Journal of Organizational Behavior
  • Journal of the Academy of Marketing Science
  • Organizational Behavior and Human Decision Processes

Economics

  • American Economic Journal: Applied Economics
  • American Economic Review
  • Econometrica
  • Experimental Economics
  • Journal of Finance
  • Journal of Financial Economics
  • Journal of Labor Economics
  • Quarterly Journal of Economics
  • Review of Financial Studies

Political Science

  • American Political Science Review
  • British Journal of Political Science
  • Comparative Political Studies
  • Journal of Conflict Resolution
  • Journal of Experimental Political Science
  • Journal of Political Economy
  • World Development
  • World Politics

Education

  • American Educational Research Journal
  • Computers and Education
  • Contemporary Educational Psychology
  • Educational Researcher
  • Exceptional Children
  • Journal of Educational Psychology
  • Learning and Instruction

Psychology

  • Child Development
  • Clinical Psychological Science
  • Cognition
  • European Journal of Personality
  • Evolution and Human Behavior
  • Journal of Applied Psychology
  • Journal of Consulting and Clinical Psychology
  • Journal of Environmental Psychology
  • Journal of Experimental Psychology: General
  • Journal of Experimental Social Psychology
  • Journal of Personality and Social Psychology
  • Psychological Science

Health related

  • Health Psychology
  • Psychological Medicine
  • Social Science and Medicine

Public Administration

  • Journal of Public Administration Research and Theory
  • Public Administration Review

Management

  • Academy of Management Journal
  • Journal of Business Research
  • Journal of Management
  • Leadership Quarterly
  • Management Science
  • Organization Science

Sociology

  • American Journal of Sociology
  • American Sociological Review
  • Demography
  • European Sociological Review
  • Journal of Marriage and Family
  • Social Forces

In phase 2 of the SCORE program, the scope of the program was expanded from evaluating the replicability of a single claim published in a paper to evaluating the credibility of published papers more holistically.

In phase 2, which began in 2021, 200 "bushel" papers were evaluated holistically. Participants working in IDEA groups evaluated the seven credibility signals:

  1. Comprehensibility
  2. Transparency
  3. Plausibility
  4. Robustness
  5. Replicability
  6. Generalisability
  7. Validity
    1. Statistical
    2. Design
    3. Conclusion

before making an eighth, aggregate credibility judgement.

To find out more, you can view our resources page or watch short videos on our YouTube channel.

About the project & SCORE program

Replication, like many related terms such as reproducibility, is contested. That is, these terms have multiple meanings.

For this project, our working definition of a direct replication is a replication that follows the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim. The aim of a direct replication is to improve confidence in the reliability and validity of an experimental finding by starting to account for things such as sampling error, measurement artefacts, and questionable research practices.

Yes. The “CATS” in repliCATS is an acronym for Collaborative Assessment for Trustworthy Science.

We are an interdisciplinary research team based predominantly at the University of Melbourne. You can meet the research team here.

We are developing and testing methods to elicit accurate predictions about the likely replicability of published research claims in the social sciences. As you may be aware, some large scale, crowdsourced replication projects have alerted us to the possibility that replication success rates may be lower than we once thought. Our project will assist with the development of efficient methods for critically evaluating the evidence base of social science research.

The IDEA protocol is a structured protocol for eliciting expert judgments based on the Delphi process. IDEA stands for Investigate, Discuss, Estimate, Aggregate.

Applying the IDEA protocol involves recruiting a diverse group of experts to answer questions with probabilistic or quantitative responses. Experts first investigate the questions and clarify meanings of terms, reducing variation caused by linguistic ambiguity. They provide their private, individual estimate, using a 3- or 4-step method (highest, lowest, best guess). The group’s private estimates are revealed; group members can then see how their estimates sit in relation to others. The group discusses the results, shares information and cross-examines reasoning and evidence. Group members individually provide a second and final private estimate. These second-round estimates are then combined using mathematical aggregation.
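To illustrate the aggregation step, here is a minimal sketch in Python, assuming an unweighted linear pool over each expert's three-point estimate. The numbers, the `linear_pool` function, and the pooling rule are our own illustrative choices; the repliCATS project uses more sophisticated aggregation methods than a simple average.

```python
# Minimal sketch of aggregating second-round IDEA estimates.
# Assumption: each expert gives (lowest, best guess, highest) probabilities
# in [0, 100], and we combine them with an unweighted linear pool.

from statistics import mean

# Hypothetical second-round estimates from a four-person IDEA group:
# (lowest, best guess, highest), as percentages.
estimates = [
    (20, 40, 70),
    (30, 55, 80),
    (25, 45, 60),
    (35, 50, 75),
]

def linear_pool(estimates):
    """Average each of the three bounds across experts."""
    lows, bests, highs = zip(*estimates)
    return mean(lows), mean(bests), mean(highs)

low, best, high = linear_pool(estimates)
print(f"Group estimate: {best:.1f}% (range {low:.1f}-{high:.1f}%)")
```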

The strengths of the IDEA protocol in eliciting predictions of the likely replicability of research claims lie in the stepped, structured nature of the approach. The feedback and discussion components of the IDEA protocol both function to reduce overconfidence in estimates, which is a known limitation of expert elicitation methods. The discussion component also allows experts to account for private information which could substantially alter the likely replicability assessment of a research claim.

The protocol, developed at the University of Melbourne, has been found to improve judgements under uncertainty.

More information on the IDEA protocol can be found ​here​ (external link to: Methods Blog).

Yes! We hope to collect judgements from a diverse range of participants in the following broad disciplines:

  • business research
  • criminology
  • economics
  • education
  • political science
  • psychology
  • public administration
  • marketing, and
  • sociology.

If you are interested in participating, visit our Get involved page to find out more about participating and signing up, or contact us at repliCATS-project@unimelb.edu.au for more information.

Your participation will help us to refine methods for predicting the replicability of social and behavioural science claims. The data we collect could drastically change the way we think about published research evidence. For individual participants, it also provides an opportunity to develop your skills through peer interaction, and to become a more critical consumer of the research literature.

Our first workshop was held in July 2019 in Rotterdam, with over 200 participants across two days. Our participants reported that they found the experience valuable and enjoyed thinking about the replicability of published research evidence. Additionally, early career researchers said participating in the workshop improved their critical appraisal (or peer review) skills, and that they enjoyed comparing their judgements with those of group members from diverse disciplines and career stages.

You can express interest in assessing claims, or subscribe to our mailing list. 

You can also follow us on Twitter, @replicats.

Or, you can send us an e-mail at repliCATS-contact@unimelb.edu.au.

Answering claims – help please!

Great! You can create an account and log on to our platform by visiting: https://score.eresearch.unimelb.edu.au 

When you create an account, the first step is a short survey which includes a plain language statement, a consent form, and some demographic questions.

We've prepared plenty of information to help you navigate the website and get comfortable answering claims.

Check out the resources page for videos, handy guides, and a whole bunch of additional information.

You can spend as much time as you want; however, we suggest that, all up, you spend no more than 30 minutes per claim across both round one and round two.

During workshops, we use a model of 10 minutes for round one, 15 minutes for discussion and 5 minutes to update and submit your round two.

For virtual groups, the discussion happens via comments and up- and down-votes. So, if you are working solo or completely virtually (i.e. with no real-time discussion), we suggest spending around 10-15 minutes to complete round one, which includes perusing the paper (there is a link in the left-hand side panel), plus a bit of extra time to write down your reasoning. This will help you and the other participants in round two.

Sometimes the claim text (in bold) indicates a claim different from that reported in the inferential test results. In this case, all your answers should relate to the inferential test results.

Also, some papers have very few claims and deploy very few tests, others have dozens or hundreds – evaluating ‘all the claims made’ would be incredibly unwieldy. Remember: you only need to evaluate the central claim listed in the claim panel on the right sidebar.

Only if you feel like it. We think it is a good idea to look at the paper and read as much of it as you need to evaluate the replicability of the central claim presented in the platform.

For each claim you evaluate we ask you to estimate the probability that direct replications of this study would find a statistically significant effect in the same direction as the original claim (0-100%). 0 means that you think that a direct replication would never succeed, even by chance. 100 means that you think that a direct replication would never fail, even by chance.

To answer this question, imagine 100 replications of the original study, combined to produce a single, overall replication estimate (e.g., a meta-analysis with no publication bias). How likely is it that the overall estimate will be similar to the original? Note that all replication studies are ‘direct’ replications, i.e., they constitute reasonable tests of the original claim, despite minor changes that may have occurred in methods or procedure. And all replication studies have high power (90% power to detect an effect 50-75% of the original effect size with alpha=0.05, two-sided).
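To make this thought experiment concrete, here is a minimal simulation sketch, assuming a hypothetical true effect size and replication sample size (both our own illustrative choices, not values supplied by the platform):

```python
# Illustrative simulation of the "100 direct replications" thought experiment.
# All numbers (true effect size, sample size) are hypothetical assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_d = 0.3        # assumed true standardised effect size (hypothetical)
n_per_group = 120   # assumed sample size per group in each replication (hypothetical)
n_replications = 100
alpha = 0.05

successes = 0
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # "Success": statistically significant AND in the same direction as the original.
    if p < alpha and t > 0:
        successes += 1

print(f"{successes} of {n_replications} simulated replications succeeded")
```

Varying `true_d` or `n_per_group` in this sketch shows how the expected success rate, and hence a sensible probability estimate, shifts with your assumptions about the underlying effect.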

In the text box, we also ask you to note what factors influenced your judgement about whether the claim would successfully replicate, or not. For each of the following, list some factors (dot points are fine):

  • For your lower bound, think of factors that make successful replications unlikely.
  • For your upper bound, think of factors that make successful replications likely.
  • For your best estimate, consider the balance of these factors.

How will this research claim be replicated?

We cannot answer this question precisely. The selection and replication of claims for the SCORE program is being overseen by the Center for Open Science, independently of the repliCATS project. See here for more details about this part of the SCORE program.

For SCORE, the intent of a direct replication is to follow the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim being investigated. However, it is generally impossible to follow a study precisely, and the question as to which aspects matter is a judgement call.

Our best advice is to imagine what kinds of decisions you would face if you were asked to replicate this research claim, and then to consider the effects of making different choices for these decisions. This is one reason why we ask you to consider a set of 100 replications when making your assessment – even though they are 100 direct replications, each might be slightly different. You should consider the effect of these slight variations when making your estimate.

In some instances, a replication may not be able to collect new data, for example, if the claim relates to a specific historical event, like an election. In this case you should consider the different choices that could be made in analysing the data. Again, you should consider the effect of slight variations in these choices when making your estimate of replicability. 

Yes there is: see the repliCATS glossary.

If you think there's a term missing or defined incorrectly, e-mail us at: repliCATS-contact@unimelb.edu.au

Platform troubleshooting

No, sorry! The online platform works best on a laptop or PC.

We built the platform to be most compatible with Google Chrome. Safari seems to misbehave.

Only if you want to. Every question has a save function, so you can save your responses as you go. This protects against losing your work if your browser crashes, and lets you return to a question to think it over before submitting it to us.

$1000 monthly prizes

We release new claims on the 17th of every month, from 17 January 2020 until 17 June 2020.