Jack Wilkinson

@jd_wilko
387 Followers
265 Following
33 Posts
Stats, data, epi, methods. Infertility. Detecting fabricated clinical research. Centre for Biostats, University of Manchester.
This is the problem we are trying to address in INSPECT-SR:
We’re assembling a long list of methods for detecting ‘problematic’ studies (roughly, this means they are untrustworthy due to serious research integrity issues). If you have experience in this area, and would be willing to review the list and tell us if we are missing any, get in touch! #researchintegrity #peerreview

41 studies collected here (and you can break your students' hearts with what a mess they all are)

Randomized Clinical Trials of Machine Learning Interventions in Health Care

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2796833

This systematic review examines the design, reporting standards, risk of bias, and inclusivity of randomized clinical trials of machine learning interventions in health care.

Do you run or actively contribute to a reproducibility or open science initiative?

Join our virtual brainstorming event on "How to build, grow and sustain a reproducibility or open science initiative" (Nov. 22-23) to share your experiences. We'll discuss challenges and opportunities, explore solutions, and compile lessons learned to share with the research community.

All are welcome, including early career researchers from around the world.

Info & registration: https://www.bihealth.org/en/notices/how-to-build-grow-and-sustain-reproducibility-or-open-science-initiatives-a-virtual-brainstorming-event

How to build, grow, and sustain reproducibility or open science initiatives: A virtual brainstorming event - News - BIH at Charité

This virtual brainstorming event from November 22-23, 2022 will bring together individuals in Germany who are passionate about reproducibility and open science, including members of the German Reproducibility Network. Participants will share experiences and strategies on how to build, grow, and sustain initiatives that focus on reproducibility and open science. We welcome participants who are just starting initiatives or are interested in starting initiatives, along with those who are participating in or leading established initiatives. We will explore techniques for starting new initiatives, as well as strategies for expanding existing initiatives and making an initiative sustainable.

Berliner Institut für Gesundheitsforschung - Charité und Max-Delbrück-Centrum

Even with 2 people screening search results, something like 3% of eligible studies are missed: with a single reviewer, that rises to around 13%.

This systematic review found 3 meta-research studies on the characteristics of the studies that get missed, and on methods for recovering them...

https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-022-02109-w

#MedLibs #SystematicReviews #MetaScience #MetaResearch

Characteristics and recovery methods of studies falsely excluded during literature screening—a systematic review - Systematic Reviews

Background: Due to the growing need to provide evidence syntheses under time constraints, researchers have begun focusing on the exploration of rapid review methods, which often employ single-reviewer literature screening. However, single-reviewer screening misses, on average, 13% of relevant studies, compared to 3% with dual-reviewer screening. Little guidance exists regarding methods to recover studies falsely excluded during literature screening. Likewise, it is unclear whether specific study characteristics can predict an increased risk of false exclusion. This systematic review aimed to identify supplementary search methods that can be used to recover studies falsely excluded during literature screening. Moreover, it strove to identify study-level predictors that indicate an elevated risk of false exclusion of studies during literature screening.

Methods: We performed literature searches for eligible studies in MEDLINE, Science Citation Index Expanded, Social Sciences Citation Index, Current Contents Connect, Embase, Epistemonikos.org, and Information Science & Technology Abstracts from 1999 to June 23, 2020. We searched for gray literature, checked reference lists, and conducted hand searches in two relevant journals and similar article searches current to January 28, 2021. Two investigators independently screened the literature; one investigator performed the data extraction, and a second investigator checked for correctness and completeness. Two reviewers assessed the risk of bias of eligible studies. We synthesized the results narratively.

Results: Three method studies, two with a case-study design and one with a case-series design, met the inclusion criteria. One study reported that all falsely excluded publications (8%) could be recovered through reference list checking, compared to other supplementary search methods. No included methods study analyzed the impact of recovered studies on conclusions or meta-analyses. Two studies reported that up to 8% of studies were falsely excluded due to uninformative titles and abstracts, and one study showed that 11% of non-English studies were falsely excluded.

Conclusions: Due to the limited evidence based on two case studies and one case series, we can draw no firm conclusion about the most reliable and most valid method to recover studies falsely excluded during literature screening, or about the characteristics that might predict a higher risk of false exclusion.

Systematic review registration: https://osf.io/v2pjr/

BioMed Central
Looking for examples of RCTs of AI algorithms (or similar). Trying to trick health data scientists into finding my RCT course interesting. Please send! #machinelearning #AI #datascience
I am currently leading an NIHR-funded project to develop a tool for identifying ‘problematic’ studies in systematic reviews. ‘Problematic’ = those subject to data fabrication/falsification, or other serious research integrity issues. This does not mean poor methodology. Interested in connecting with people with interest, experience, or expertise in this area. #researchintegrity #researchintegrityandpeerreview
Am now regularly requesting the underlying dataset as a statistical peer reviewer. Guess what? It never matches the paper!
Spent a couple of hours trying to get a multipanel figure looking right in R. Failed, and ended up putting it together in PowerPoint. Looks pretty good!