A SCORE of pre-prints.
In phase 1, we crowdsourced judgements of replicability for 3000 research claims drawn from peer-reviewed journals across eight social and behavioural science disciplines. The thread unrolled below gathers the pre-prints now available: one describing the overall SCORE collaborative in phase 1 (which involved five research teams from around the world), as well as a number of articles from the repliCATS project team, including:
- A deeper dive into phase 1 of the repliCATS project: Fraser et al., “Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science)”
- A paper describing the reproducible data pipelines our team developed: Gould et al., “aggreCAT: An R Package for Mathematically Aggregating Expert Judgments”
- A description of the 22 mathematical aggregation methods we use to aggregate group predictions (see the sketch after this list): Hanea et al., “Mathematically aggregating experts’ predictions of possible futures”
- How our platform was developed to operationalize the IDEA protocol: Pearson et al. (2021), “Eliciting group judgements about replicability: a technical implementation of the IDEA Protocol”
- And finally, the results and reasoning of a small study of 25 claims with a pre-existing replication outcome: Wintle et al., “Predicting and reasoning about replicability using structured groups”
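For readers wondering what “mathematically aggregating” expert judgements looks like in practice, here is a minimal sketch in base R (the language of the aggreCAT package). It is not the aggreCAT API: the data frame and its values are hypothetical, and the method shown is only the simplest possible aggregation, an unweighted arithmetic mean of each expert’s best estimate per claim, applied to the kind of three-point judgements (lower bound, best estimate, upper bound) that the IDEA protocol elicits.

```r
# Hypothetical three-point judgements from four experts on two claims,
# of the kind elicited in the Estimate step of the IDEA protocol.
# Values are judged probabilities that each claim would replicate.
judgements <- data.frame(
  claim  = rep(c("claim_001", "claim_002"), each = 4),
  expert = rep(paste0("expert_", 1:4), times = 2),
  lower  = c(0.40, 0.55, 0.35, 0.50, 0.10, 0.20, 0.15, 0.05),
  best   = c(0.60, 0.70, 0.55, 0.65, 0.25, 0.35, 0.30, 0.20),
  upper  = c(0.80, 0.85, 0.70, 0.80, 0.45, 0.50, 0.40, 0.35)
)

# The simplest aggregation method, an unweighted linear pool:
# the arithmetic mean of the experts' best estimates for each claim.
linear_pool <- aggregate(best ~ claim, data = judgements, FUN = mean)
print(linear_pool)
#>       claim  best
#> 1 claim_001 0.625
#> 2 claim_002 0.275
```

The 22 methods implemented in aggreCAT elaborate on this idea, for instance by weighting experts’ estimates rather than averaging them equally, or by drawing on the lower and upper bounds; the Hanea et al. and Gould et al. pre-prints above describe the methods themselves and the pipeline that applies them.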
> The repliCATS team have a #SCORE of preprints that we have published or contributed to recently, starting with this one that describes the overall SCORE program approach and goals⤵️ https://t.co/vjLPyX3y4w
>
> — repliCATS_project (@replicats) May 5, 2021

> And finally, this preprint "aggreCAT: An R Package for Mathematically Aggregating Expert Judgments" describes part of the amazing data pipeline our team developed to calculate 22 aggregation methods for 3000 claims! https://t.co/RDATgNSca7
>
> — repliCATS_project (@replicats) May 5, 2021