FAQs

One of the most frequently asked questions is, “Do I need to be an expert in any individual field to participate in the repliCATS project?” The answer is: No! To participate, you need to be 18 years or older, have completed or be completing an undergraduate degree, and be interested in evaluating research claims in scope for our project.

Below are blocks of frequently asked questions that may help you better understand our project and how to evaluate claims:

If you still can’t find the answer to your question, you can contact us at repliCATS-contact@unimelb.edu.au.

About SCORE claims

  • What is a research claim or claim?

    In this project we use the word "research claim" or "claim" in a very specific way.

    A research claim is a single major finding from a published study (for example, a journal article), as well as details of the methods and results that support this finding. A research claim is not equivalent to an entire article. Sometimes the claim as described in the abstract does not exactly match the claim that is tested. In this case, you should consider the research claim to be that which is described in the inferential test, as the next stage of SCORE will focus on testing the replicability of the test results only.

  • How are the 3000 claims chosen?

    The Center for Open Science (USA) is selecting the 3,000 research claims as a subset of a larger set of 30,000 published papers in the social and behavioural sciences that are in scope for the SCORE program. The in-scope disciplines are:

    • criminology
    • economics
    • education
    • political science
    • psychology
    • public administration
    • marketing, and
    • sociology.

    These claims will be drawn from the following journals.

    Criminology

    • Criminology
    • Law and Human Behavior

    Marketing/Organisational Behaviour

    • Journal of Consumer Research
    • Journal of Marketing
    • Journal of Marketing Research
    • Journal of Organizational Behavior
    • Journal of the Academy of Marketing Science
    • Organizational Behavior and Human Decision Processes

    Economics

    • American Economic Journal: Applied Economics
    • American Economic Review
    • Econometrica
    • Experimental Economics
    • Journal of Finance
    • Journal of Financial Economics
    • Journal of Labor Economics
    • Quarterly Journal of Economics
    • Review of Financial Studies

    Political Science

    • American Political Science Review
    • British Journal of Political Science
    • Comparative Political Studies
    • Journal of Conflict Resolution
    • Journal of Experimental Political Science
    • Journal of Political Economy
    • World Development
    • World Politics

    Education

    • American Educational Research Journal
    • Computers and Education
    • Contemporary Educational Psychology
    • Educational Researcher
    • Exceptional Children
    • Journal of Educational Psychology
    • Learning and Instruction

    Psychology

    • Child Development
    • Clinical Psychological Science
    • Cognition
    • European Journal of Personality
    • Evolution and Human Behavior
    • Journal of Applied Psychology
    • Journal of Consulting and Clinical Psychology
    • Journal of Environmental Psychology
    • Journal of Experimental Psychology: General
    • Journal of Experimental Social Psychology
    • Journal of Personality and Social Psychology
    • Psychological Science

    Health related

    • Health Psychology
    • Psychological Medicine
    • Social Science and Medicine

    Public Administration

    • Journal of Public Administration Research and Theory
    • Public Administration Review

    Management

    • Academy of Management Journal
    • Journal of Business Research
    • Journal of Management
    • Leadership Quarterly
    • Management Science
    • Organization Science

    Sociology

    • American Journal of Sociology
    • American Sociological Review
    • Demography
    • European Sociological Review
    • Journal of Marriage and Family
    • Social Forces
  • From which journals are the 3000 claims being chosen?

    The Center for Open Science (USA) is selecting the 3,000 research claims from the journals listed in the previous answer.
Answering claims – help please!

  • I want to assess claims. What do I need to do?

    Great! You can create an account and log on to our platform by visiting: https://score.eresearch.unimelb.edu.au 

    The first step when you create an account will be a short survey which includes a plain language statement, obtaining your consent, and some demographic information about you.

    You might also find the following pages useful:

  • I'm on the platform, how do I assess a claim?

    We've prepared a lot of information to help you navigate the website and get comfortable answering claims.

    Check out the resources page for videos, handy guides, and a whole bunch of additional information.

  • How long should I spend evaluating a claim?

    You can spend as much time as you want; however, we suggest spending no more than 30 minutes per claim in total, across rounds one and two combined.

    During workshops, we use a model of 10 minutes for round one, 15 minutes for discussion and 5 minutes to update and submit your round two.

    For virtual groups, the discussion happens via comments and up/down votes. So, if you are working solo or completely virtually (i.e. no real-time discussion), we suggest spending around 10-15 minutes on round one, which includes perusing the paper (there is a link in the left-hand side panel), and spending a bit of extra time writing down your reasoning. This will help you and the other participants in round two.

  • There seem to be multiple claims in the paper. Which one do I evaluate?

    Sometimes the claim text (in bold) indicates a claim different from that reported in the inferential test results. In this case, all your answers should relate to the inferential test results.

    Also, some papers have very few claims and deploy very few tests, others have dozens or hundreds – evaluating ‘all the claims made’ would be incredibly unwieldy. Remember: you only need to evaluate the central claim listed in the claim panel on the right sidebar.

  • Do you have guides or any resources I can access to help me answer claims?

    Yes – check out the resources page for videos, handy guides, and a whole bunch of additional information.

  • Do I need to read the whole paper that is linked to the claim?

    Only if you feel like it. We think it is a good idea to look at the paper, and to read as much of it as you need to evaluate the replicability of the central claim presented in the platform.

  • How do I approach answering the replicability question?

    For each claim you evaluate we ask you to estimate the probability that direct replications of this study would find a statistically significant effect in the same direction as the original claim (0-100%). 0 means that you think that a direct replication would never succeed, even by chance. 100 means that you think that a direct replication would never fail, even by chance.

    To answer this question, imagine 100 replications of the original study, combined to produce a single, overall replication estimate (e.g., a meta-analysis with no publication bias). How likely is it that the overall estimate will be similar to the original? Note that all replication studies are ‘direct’ replications, i.e., they constitute reasonable tests of the original claim, despite minor changes that may have occurred in methods or procedure. And all replication studies have high power (90% power to detect an effect 50-75% of the original effect size with alpha=0.05, two-sided).

    In the text box, we also ask you to note what factors influenced your judgement about whether the claim would successfully replicate, or not. For each of the following, list some factors (dot points are fine):

    • For your lower bound, think of factors that make successful replications unlikely
    • For your upper bound, think of factors that make successful replications likely.
    • For your best estimate, consider the balance of factors.
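
    One way to make the "100 high-powered direct replications" framing concrete is a quick toy simulation. The sketch below is purely illustrative – the effect size, sample size, and simple z-test are our own assumptions, not how SCORE will actually run replications:

```python
import math
import random

def simulate_replications(true_effect, n, n_reps=100, seed=0):
    """Toy simulation: estimate how often a direct replication finds a
    statistically significant effect in the same (positive) direction
    as the original claim, using a two-sided z-test at alpha = 0.05.

    All numbers here are illustrative assumptions, not SCORE's method.
    """
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided alpha = 0.05
    successes = 0
    for _ in range(n_reps):
        # each replication observes a noisy sample mean of the true effect
        sample_mean = rng.gauss(true_effect, 1 / math.sqrt(n))
        z = sample_mean * math.sqrt(n)
        if z > z_crit:  # significant AND in the original direction
            successes += 1
    return successes / n_reps

# e.g. a modest (hypothetical) true effect with a decent sample size
rate = simulate_replications(true_effect=0.3, n=100)
print(f"estimated replication rate: {rate:.2f}")
```

    Playing with `true_effect` and `n` shows why the question is hard: the same claim can replicate often or rarely depending on how large the underlying effect really is relative to the replication's power.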

  • I don't understand a term on the platform. Is there a glossary?

    Yes, there is – see the repliCATS glossary.

    If you think there's a term missing or defined incorrectly, send us an e-mail to: repliCATS-contact@unimelb.edu.au

  • How will a given research claim be replicated?

    We cannot answer this question precisely. The selection and replication of claims for the SCORE program is being overseen by the Center for Open Science, independently of the repliCATS project. See here for more details about this part of the SCORE program.

    For SCORE, the intent of a direct replication is to follow the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim being investigated. However, it is generally impossible to follow a study precisely, and the question as to which aspects matter is a judgement call.

    Our best advice is to imagine what kinds of decisions you would face if you were asked to replicate this research claim, and then to consider the effects of making different choices for these decisions. This is one reason why we ask you to consider a set of 100 replications when making your assessment – even though they are 100 direct replications, each might be slightly different. You should consider the effect of these slight variations when making your estimate.

    In some instances, a replication may not be able to collect new data, for example, if the claim relates to a specific historical event, like an election. In this case you should consider the different choices that could be made in analysing the data. Again, you should consider the effect of slight variations in these choices when making your estimate of replicability. 

Platform troubleshooting

  • Can I use a tablet or handheld device?

    No, sorry! The online platform works best on a laptop or PC.

  • My browser doesn't seem to be working

    We built the platform to be most compatible with Google Chrome; Safari in particular seems to misbehave.

  • Do I need to save as I go?

    Only if you want to. Every question has its own save function, so you can save as you go. This guards against losing your responses if your browser crashes, and lets you return to a question to think further before submitting it to us.

$1000 monthly prizes

About the project & SCORE program

  • What is “replication” as defined for this project?

    Replication, along with many related terms like reproducibility, is contested. That is, it has multiple meanings.

    For this project, our working definition of a direct replication is a replication that follows the methods of the original study with a high degree of similarity, varying aspects only where there is a high degree of confidence that they are not relevant to the research claim. The aim of a direct replication is to improve confidence in the reliability and validity of an experimental finding by starting to account for things such as sampling error, measurement artefacts, and questionable research practices.

  • Does repliCATS stand for something?

    Yes. The “CATS” in repliCATS is an acronym for Collaborative Assessment for Trustworthy Science.

  • Who is part of your research team?

    We are an interdisciplinary research team based predominantly at the University of Melbourne. You can meet the research team here.

  • What are the aims of the repliCATS project?

    We are developing and testing methods to elicit accurate predictions about the likely replicability of published research claims in the social sciences. As you may be aware, some large scale, crowdsourced replication projects have alerted us to the possibility that replication success rates may be lower than we once thought. Our project will assist with the development of efficient methods for critically evaluating the evidence base of social science research.

  • What is the IDEA protocol?

    The IDEA protocol is a structured protocol for eliciting expert judgments based on the Delphi process. IDEA stands for Investigate, Discuss, Estimate, Aggregate.

    Applying the IDEA protocol involves recruiting a diverse group of experts to answer questions with probabilistic or quantitative responses. Experts first investigate the questions and clarify meanings of terms, reducing variation caused by linguistic ambiguity. They provide their private, individual estimate, using a 3- or 4-step method (highest, lowest, best guess). The group’s private estimates are revealed; group members can then see how their estimates sit in relation to others. The group discusses the results, shares information and cross-examines reasoning and evidence. Group members individually provide a second and final private estimate. These second-round estimates are then combined using mathematical aggregation.

    The strengths of the IDEA protocol in eliciting predictions of the likely replicability of research claims lies in the stepped, structured nature of the approach. The feedback and discussion components of the IDEA protocol both function to reduce overconfidence in estimates, which is a known limitation of expert elicitation methods. The discussion component of the IDEA protocol also allows experts to account for private information which could substantially alter the likely replicability assessment of a research claim.

    This protocol, developed at the University of Melbourne, has been found to improve judgements under uncertainty.

    More information on the IDEA protocol can be found ​here​ (external link to: Methods Blog).
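
    As a toy illustration of the final "Aggregate" step, here is a minimal sketch assuming simple unweighted averaging of each member's (lowest, best, highest) estimates – only one of several possible aggregation rules, and not necessarily the one repliCATS uses:

```python
def aggregate_estimates(second_round):
    """Combine second-round (lowest, best, highest) estimates, in percent,
    from each group member into a single group judgement.

    Unweighted averaging is an illustrative choice; real elicitations
    may use weighted or performance-based aggregation instead.
    """
    n = len(second_round)
    return {
        "lower": sum(e[0] for e in second_round) / n,
        "best": sum(e[1] for e in second_round) / n,
        "upper": sum(e[2] for e in second_round) / n,
    }

# four (hypothetical) experts' (lowest, best, highest) replicability estimates
group = [(20, 40, 60), (30, 55, 80), (10, 35, 50), (25, 50, 75)]
print(aggregate_estimates(group))  # {'lower': 21.25, 'best': 45.0, 'upper': 66.25}
```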

  • Can I participate in this project?

    Yes! We hope to collect judgements from a diverse range of participants in the following broad disciplines:

    • business research
    • criminology
    • economics
    • education
    • political science
    • psychology
    • public administration
    • marketing, and
    • sociology.

    If you are interested in participating, you can find out more on our Get involved page, or contact us at repliCATS-project@unimelb.edu.au to ask for more information.

  • If I participate, what’s in it for me?

    Your participation will help us to refine methods for predicting the replicability of social and behavioural science claims. Any data we collect could drastically change the way we think about published research evidence. For individual participants, it also provides an opportunity to develop your skills through peer interaction and to become a more critical consumer of the research literature.

    Our first workshop was held in July 2019 in Rotterdam, with over 200 participants over two days. Our participants reported that they found the experience valuable and enjoyed thinking about the replicability of published research evidence. Additionally, early career researchers said participating in the workshop improved their critical appraisal (or peer review) skills, and that they enjoyed comparing their judgements against those of the diverse individuals (across disciplines and career stages) in their group.

  • How are the 3,000 research claims chosen?

    See the answer under "About SCORE claims" above: the Center for Open Science (USA) is selecting the 3,000 research claims as a subset of a larger set of 30,000 published papers in the social and behavioural sciences that are in scope for the SCORE program.
  • From which journals are the 3,000 research claims chosen?

    See the journal list under "About SCORE claims" above.
  • How can I get more information about this project?

    You can express interest in assessing claims, or subscribe to our mailing list. 

    You can also follow us on Twitter: @replicats.

    Or, you can send us an e-mail at repliCATS-contact@unimelb.edu.au.


Access the repliCATS platform

First time users will be asked to create an account which takes about 15-20 minutes to complete. This includes a short form & quiz. We recommend using Google Chrome.

Final repliCATS remote assessment round:
Round 7: 1 – 30 July (midnight AEST)