Robbie Clark

37 Followers
52 Following
30 Posts

Lecturer in the School of Psychological Science at the University of Bristol.

Exploring all things Philosophical / Meta / Qualitative / Open in science.

🏳️‍🌈 he/him

Happily migrating from: https://twitter.com/RobbieC_Bristol

The End of History and the Last Man

Francis Fukuyama’s book is over 30 years old, but the argument he makes has never been more relevant.

https://tomstafford.substack.com/p/the-end-of-history-and-the-last-man

latest newsletter from me


Reasonable People

Stop saying “artificial intelligence”. (And “neural networks” too.)

Be more specific. Say “reinforcement learning”. Say “generative modelling”. Say “Bayesian filtering”. Say “statistical prediction”.

These are incredibly useful tools that have nothing to do with “intelligence”.

And say “model trained on plagiarised data”.

Say “bullshit generator”.

Say “internet regurgitator”.

These also have nothing to do with intelligence, but they have the added bonus of being useless, too.

💡 Last week, CWTS welcomed a delegation of @RoRInstitute (RoRI) core-partners to Leiden for a horizon-scanning workshop, focused on shaping the future of the CWTS-RoRI partnership.
This event was all about looking to the future to spot novel opportunities and synergies for the next wave of meta-research!

🔗 Find out more about RoRI here 👉 https://researchonresearch.org

Home - Research on Research

We’re transforming research systems and cultures. Ensuring that we have the evidence we need to realise the full potential of research.

Research on Research

When the coach reached Glasgow, there was morning rush hour traffic on the M80. I have not been in a traffic jam for years.

I stared out from the coach window at all the individuals in their individual cars, contending with each other, struggling for forward movement. How is it possible that all those people who _could_ work together to solve such a simple problem, to create a collective, joyous, efficient way to travel... how have they been persuaded instead to see each other as obstacles and competitors? Is it really worth losing your humanity to have an expensive, dirty, metal suit? Who did this to us?

#waroncars

The enemies within: How the pandemic radicalised Britain

https://www.sheffieldtribune.co.uk/p/the-enemies-within-how-the-pandemic

"The riots have been blamed on everything from the economy to Elon Musk. But the networks that mobilised violence on our streets were forged in opposition to vaccines and lockdowns"


Sheffield Tribune
How researchers use (and misuse) G*Power

Important preprint by Thibault et al.

https://www.medrxiv.org/content/10.1101/2024.07.15.24310458v1

An evaluation of reproducibility and errors in published sample size calculations performed using G*Power

Background: Published studies in the life and health sciences often employ sample sizes that are too small to detect realistic effect sizes. This shortcoming increases the rate of false positives and false negatives, giving rise to a potentially misleading scientific record. To address this shortcoming, many researchers now use point-and-click software to run sample size calculations.

Objective: We aimed to (1) estimate how many published articles report using the G*Power sample size calculation software; (2) assess whether these calculations are reproducible and (3) error-free; and (4) assess how often these calculations use G*Power’s default option for mixed-design ANOVAs, which can be misleading and output sample sizes that are too small for a researcher’s intended purpose.

Method: We randomly sampled open access articles from PubMed Central published between 2017 and 2022 and used a coding form to manually assess 95 sample size calculations for reproducibility and errors.

Results: We estimate that more than 48,000 articles published between 2017 and 2022 and indexed in PubMed Central or PubMed report using G*Power (i.e., 0.65% [95% CI: 0.62%–0.67%] of articles). We could reproduce 2% (2/95) of the sample size calculations without making any assumptions, and likely reproduce another 28% (27/95) after making assumptions. Many calculations were not reported transparently enough to assess whether an error was present (75%; 71/95) or whether the sample size calculation was for a statistical test that appeared in the results section of the publication (48%; 46/95). Few articles that performed a calculation for a mixed-design ANOVA unambiguously selected the non-default option (8%; 3/36).

Conclusion: Published sample size calculations that use G*Power are not transparently reported and may not be well-informed. Given the popularity of software packages like G*Power, they present an intervention point to increase the prevalence of informative sample size calculations.
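For context on the kind of calculation the preprint audits: the textbook normal-approximation formula for a two-sided, two-sample t-test can be written in a few lines of code, which makes the result reproducible by construction. This is a minimal sketch, not the preprint's method; the function name is mine, and the approximation comes out slightly smaller than G*Power's exact t-distribution answer.

```python
import math
from statistics import NormalDist

def sample_size_two_sample(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test,
    using the standard normal-approximation formula
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is Cohen's d (standardised mean difference)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = z.inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_a + z_b) / d) ** 2   # per-group sample size (unrounded)
    return math.ceil(n)              # round up to a whole participant
```

For a medium effect (d = 0.5) at alpha = .05 and 80% power, this returns 63 per group, while G*Power's exact t-test calculation gives 64 per group: a small gap that illustrates why reporting the software, the test, and the chosen options matters for reproducibility.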

medRxiv

My PhD has finally been validated and published on the University of Bristol's repository

LINK: https://lnkd.in/drzfJS3U

"Understanding the Role and Utility of Philosophy of Science in Psychology and Beyond"

Huge thanks to the countless people who helped along the way - especially @fidlerfm and @stworg for their great feedback and discussion during the viva - and obviously my supervisors Marcus Munafo and James Ladyman.

LinkedIn


The UK Reproducibility Network (#UKRN) has now published the results of a survey which explores the UK landscape of responsible researcher assessment, with a particular focus on open research!

https://www.ukrn.org/2024/03/12/ukrn-releases-a-new-working-paper-on-the-or4-project/

#openscience #openscholarship #academia #ResponsibleResearch #ResearchIntegrity #Research #DORA #CoARA

UKRN releases a new working paper on the OR4 Project | UK Reproducibility Network

"This means that for every $1,000 that the academic community spends on publishing in Elsevier, about $400 go into the pockets of its shareholders." https://english.elpais.com/science-tech/2023-11-21/scientists-paid-large-publishers-over-1-billion-in-four-years-to-have-their-studies-published-with-open-access.html
Scientists paid large publishers over $1 billion in four years to have their studies published with open access

A study reveals that academic megajournals ‘Scientific Reports’ and ‘Nature Communications’ have cornered the market

EL PAÍS English

"If only there were evil people somewhere committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?"

Aleksandr Solzhenitsyn in The Gulag Archipelago