Hey PsyPAG2020 folks 👋🏻 have a great workshop!
If you fancy leaving us some feedback, we'd love to hear from you :) You can e-mail repliCATS-contact@unimelb.edu.au or leave a reply using the reply function below (anonymously if you'd prefer).
Earlier this month, Fiona Fidler gave a talk about the repliCATS project, and some of our plans for the future of the project. If you missed it (including if you were asleep because it ran from 11am-12pm BST), you can re-watch it via this link (YouTube)!
The repliCATS project evaluates published scientific research. As the acronym (Collaborative Assessments for Trustworthy Science) suggests, repliCATS is a group activity, centred around assessing the trustworthiness of research claims. Reviewers first make private individual assessments of a research claim, judging its comprehensibility, the prior plausibility of the underlying effect, and its likely replicability. Reviewers then share their judgements and reasoning with group members, providing both new information and the opportunity for feedback and calibration. The group interrogates differences in opinion and explores counterfactuals. After discussion, there is a final opportunity to privately update individual judgements. Importantly, the repliCATS process is not consensus-driven: reviewers can disagree, and their ratings and probability judgements are mathematically aggregated into a final assessment.
At the moment, the repliCATS platform exists primarily to predict replicability. Launched in January 2019 as part of the DARPA SCORE program, over 18 months repliCATS elicited group assessments, and captured the associated reasoning and discussion, for 3,000 published social science research claims across 8 disciplines (Business, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology).
The repliCATS team are now working to extend the platform beyond merely predicting replicability, to deliver a more comprehensive peer review protocol. Suspected advantages of a repliCATS process over traditional peer review include: inbuilt training and calibration; feedback that is intrinsically rewarding; an inherently interactive process, but one which does not implicitly rely on 'consensus by fatigue'; and a process that actively encourages interrogation. This talk will present some preliminary findings and discuss the future of the platform.
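For the curious, here is a minimal sketch of what "mathematically aggregated" could look like for the final, post-discussion probability judgements. This is an illustration only, not the project's actual aggregation code: the function name and the simple unweighted mean are assumptions made for the example.

```python
# Illustrative sketch only (assumed names; not the repliCATS codebase):
# combining reviewers' final, post-discussion probability judgements
# for one claim into a single group estimate via an unweighted mean.
from statistics import mean

def aggregate_replication_probability(judgements):
    """Return a group estimate from individual probabilities in [0, 1]."""
    if not judgements:
        raise ValueError("need at least one judgement")
    if any(not 0.0 <= p <= 1.0 for p in judgements):
        raise ValueError("probabilities must lie between 0 and 1")
    return mean(judgements)

# Example: five reviewers' post-discussion probabilities that a claim will replicate
print(aggregate_replication_probability([0.35, 0.50, 0.40, 0.60, 0.45]))  # ≈ 0.46
```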
The @replicats project is phenomenal! I participated last week by assessing replicability of meta-analysis claims. Was fascinating to see how others reach judgements and learnt about interesting synthesis methods which I hadn't come across before https://t.co/SwJU9cSNDX
— Matthew Page (@mjpages) July 28, 2020
me, trying to replicate the in-person @replicats experience:
(sidenote: team box jelly #4 🏅🏆 wooo!!! congratulations, everyone! 🥳)#repliCATS2020 pic.twitter.com/Q44mWqL3sx
— james (he/mx.) 🏳️🌈🏳️⚧️ (@lysanderjames) May 19, 2020
#repliCATS2020 is over, but it was the most fun remote event ever I participated in so far. The award ceremony was the absolute highlight! 🥳As you can guess from the number of my claims, I already took part in the @replicats workshop at SIPS 2019 and I want to briefly review. pic.twitter.com/4NF0rg0s7v
— Julia Beitner (@JuliaBeitner) May 19, 2020
Happy Sundays with more claims @replicats #repliCATS2020 @DARPA #science pic.twitter.com/WUUCIi3G7P
— Suzan Dilara Tokac (@sdtokac) May 17, 2020
One of the things that struck me from @replicats, is the diversity in cues people used for their decisions, and the (seeming) consist individual differences in this - often reaching the same conclusion through different routes #SIPS2019
— Ben Farrar (@bg_farrar) July 6, 2019
Super excited to have been a part of the @replicats pre-SIPS workshop! A deeply cool project in which I feel I benefitted a good deal from participating. pic.twitter.com/SDP04x3IyC
— Grace Binion (@beercatphd) July 6, 2019
What did I learn at #SIPS2019 @replicats today? Assessing claims is anything but easy, super interesting how the strategies between people in one group can differ, and it's amazing to benefit from the group knowledge and be able to update your evaluation after discussion.
— Julia Beitner (@JuliaBeitner) July 5, 2019
Great to meet some of the @replicats team, learn more about their international replication project & practice critical thinking & peer review skills. Thanks @HDR_HeSSA for organising! pic.twitter.com/SztDNfUjLy
— Aida Brydon (@aida_brydon) September 5, 2019
How exciting to be able to participate in this landmark study! 3000+ published research claims being assessed independently by ~5 experts and group discussion. @replicats pic.twitter.com/F6Cg7Pevn6
— Renee Otmar, PhD DE (@renee8otmar) November 6, 2019
The @replicats training materials is not just for replicats. Great introduction to essential statistical and meta-research concepts, with videos from @lakens https://t.co/M9IQQBcyzM
— Katie Drax (@katiedrax) July 5, 2019
And that’s a wrap! 20+ claims assessed for replicability in 1.5 days. Special thanks to @MartinJBush @fidlerfm @v_hemming and the rest of the @replicats team for making this a unique and enjoyable experience! pic.twitter.com/rS2ex0VajV
— Lina Koppel (@linakoppel) July 6, 2019
Had the best time facilitating for #repliCATS — super interesting and fun to assess claims with my awesome group! https://t.co/qq6Yadrduo
— Sophia Crüwell (@cruwelli) July 6, 2019
And here's proof that we had TimTams, although I was so excited about the claims that I forgot to take proper pictures of that key element of the @replicats workshop 😂😂 pic.twitter.com/MWquvxNmtI
— Singapore ReproducibiliTea (@SingReproTea) December 17, 2019
Yesterday we evaluated 200 published papers from various disciplines (political science, education, law,etc.) to judge their replicability. Science is evolving and its amazing to be a part of it! Thank you @replicats #repliCATs #score #darpa #universitymelbourne #idea #aimos pic.twitter.com/45YIlFXKAD
— Suzan Dilara Tokac (@sdtokac) November 7, 2019
It is the final remote round for repliCATS in phase 1! So, we'll end how we started, by giving prizes to the three participants who assess the most claims in total in July 2020, as follows:
US$500 - for the participant who assessed the most claims
US$250 - for the next two highest participants.
In this round, we'll count how many business or economics** claims people complete* between 1 and 30 June 2020 (midnight, AEST)
*A completed claim counts as submitting a round 2 assessment.
** Claims are considered Business (marketing) or Economics claims if they come from the journals specified in the table in the FAQs under "From which journals are the 3000 claims being chosen?"
--
Follow these links for:
*NB: each month the prizes might change.
In the first month after launching the remote group platform, we'll give prizes to the three participants who assess the most claims in total in the period 9 Dec 2019 - 9 Jan 2020, as follows:
In this round, from 17 Jan - 17 Feb 2020, we'll give prizes to the three participants who provide the highest number of unique reasons, as follows:
In this round, from 18 Feb - 22 Mar 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw four winners from that pool.
In this round, from 23 Mar - 22 Apr 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw four winners from that pool.
In this round, from 23 Apr - 22 May 2020 (midnight, AEST), we'll put all participants who complete at least 10 claims into a pool, and randomly draw up to four winners from that pool.
Our last phase 1 workshop was originally scheduled to run before the 2020 Society for Improving Psychological Sciences (SIPS) conference in Victoria, Canada. Then everything changed.
The repliCATS team created our first virtual workshop, designed to accommodate all the original workshop participants, scattered across more than 10 time zones. What we ended up with was the repliCATS2020 workshop week, which ran from 12-19 May and in which over 150 participants and facilitators contributed 2,238 individual judgements, or the equivalent of 560 claims!
That's more than we completed at the face-to-face SIPS workshop in 2019.
Of course, that didn't stop participants re-creating their favourite bits of our face-to-face workshops.
The repliCATS team had a few surprises for participants though: we reprised our very popular closing ceremony and awards virtually, including a musical finale that we cannot reproduce here due to copyright issues :D. You'll just have to take Julia and Dax's word for it.
The repliCATS team have learned a lot from running this first virtual workshop, which required some significant adjustments compared to its face-to-face equivalents. However, participant feedback has left us feeling that we responded successfully, and that we have a platform on which we can run future workshops virtually without compromising participant engagement. Of course, some of this success is thanks to our participants themselves, who engaged with this new format enthusiastically and with an open mind. So, all that is left to report is a big thank you to the SIPS community. We're humbled by the contribution this community has made across two huge workshops in 2019 and 2020, contributing to 1,100 of the 3,000 claims we're aiming to assess!
While this might have been the last big workshop the repliCATS team will run in phase 1 of the SCORE program, we have a little bit left to go. As of 22 June, we have assessments for over 2,775 of the 3,000 claims. We are hoping to complete assessments by early August 2020, and the repliCATS remote platform will remain live and operational for a bit longer.
Hi repliCATS participants, routine system maintenance undertaken on the repliCATS platform on the afternoon of 13 May (Australian time) resulted in our database being wiped from the live platform.
We identified the problem and restored the platform shortly afterwards, but we have irretrievably lost all of the assessments submitted on the platform on 12 May and during the first part of 13 May (Australian time). Your private and demographic information is stored separately and has not been compromised in any way, but we have lost all assessments submitted during a 24-36 hour period.
We're so sorry: we appreciate the work that goes into each assessment you make on the platform. We have never experienced a technical issue like this before, and we have no reason to expect a repeat occurrence.
You can re-do/re-submit assessments for the claims you had already assessed, if only to re-enter the quantitative judgements for these lost claims.
If you have any questions or would like to discuss this further, please contact us: repliCATS-contact@unimelb.edu.au and we will get back to you as soon as possible.
**Update#2 in light of covid-19 (25 Mar) - These workshops on Thursday 26 March & Monday 30 March have been postponed indefinitely.
This will probably not come as a surprise to you, in light of the continually evolving response to the COVID-19 pandemic and, closer to home, how university staff are having to rapidly adapt their teaching & research commitments to a remote environment.
We are currently planning how we can support our participants in a fully virtual world, and we would like your thoughts and feedback on this. Please e-mail repliCATS-project@unimelb.edu.au.**
--
**Update in light of covid-19 (13 Mar) - we've been advised that we should not hold these workshops as face-to-face events. Therefore, we will now run two virtual workshops on 26 March and 30 March starting at 10am AEST.**
--
We're running two virtual repliCATS workshops, on Thursday 26 March and Monday 30 March 2020.
Both workshops will be livestreamed from 10am to 11.30am (AEST).
For more info & to register via eventbrite: https://www.eventbrite.com.au/e/replicats-workshop-month-aus-tickets-94336327495
You should attend if you are from, or familiar with, one of the disciplines listed below, and are:
We aim to estimate the replicability of published research claims in the social and behavioural sciences. Following an introduction to the replication crisis and our research project, you will read and evaluate published research claims in one of the following fields: Business research, Criminology, Economics, Education, Political Science, Psychology, Public Administration, and Sociology.
Workshops are fun, and you learn lots about research methods, critical thinking, and effective peer-review! Follow the convo @repliCATS #repliCATS.
Read about other workshops we've run, including at SIPS2019 and AIMOS2019.
You may have noticed that the badges you earn are mostly a mix of good and bad cat puns. Well, we had a lot of fun coming up with them. But the team are stumped (and divided) about when we should award ALLEY CAT.
All suggestions welcome (anonymously) here:
Date: 20 June 2020
Time: All-day
Venue: Downtown Victoria (Canada)
We have 150 spots available, and up to 100 travel grants** worth US$550 each for participants travelling to the workshop (i.e. participants who live outside the immediate area).
For more details, and to apply for the repliCATS workshop: https://forms.gle/JhyHLmYyyg9uFxE4A (via Google form)
NB. If you intend to attend SIPS2020 (21-23 June) you will have to register separately.
**03 Mar 2020: please note that we have offered 100 travel grants, and all applications for grants are now on a waitlist.
--
For more information, you can contact us at repliCATS-project@unimelb.edu.au or DM us on Twitter, @replicats.