Another #PeerReview done.

Manuscript: c. 4,000 words
Review: c. 2,700 words
Time: 5 hrs

The paper is in a key area of my methodological work, so it was really interesting. But I did need to get stuck in.

Two collaborative projects on the design and reporting of #RCTs that might be useful for others:

https://pubmed.ncbi.nlm.nih.gov/37982521/
presents 19 factors to aid trial design, while the DELTA2 guidance covers specifying a target difference and reporting the #SampleSize calculation for RCTs:
https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-018-2884-0
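
As a rough illustration of the kind of calculation DELTA2 asks trialists to justify and report, here is a minimal sketch of the standard two-arm formula for a continuous outcome. The target difference, SD, alpha, and power are made-up numbers, not values from either paper.

```python
# Sketch: per-arm sample size for a two-arm superiority RCT with a
# continuous outcome (normal-approximation formula). All numbers are
# illustrative, not taken from the linked guidance.
import math
from scipy.stats import norm

alpha = 0.05   # two-sided significance level
power = 0.90   # 1 - beta
delta = 5.0    # target difference -- DELTA2 asks you to justify this
sigma = 12.0   # assumed common SD of the outcome

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n_per_arm = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(math.ceil(n_per_arm))  # 122 per arm, before inflating for dropout
```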

#StudyDesign

Appropriate design and reporting of superiority, equivalence and non-inferiority clinical trials incorporating a benefit-risk assessment: the BRAINS study including expert workshop

A summary of strengths and limitations of randomized controlled trials and other types of study designs.

https://www.acsh.org/news/2025/12/02/rcts-not-end-all-be-all-49847

#Science #StudyDesign #RCT

RCTs: Not the End All Be All

RCTs are often hailed as the ultimate test of whether an intervention works, yet relying solely on RCTs leaves significant blind spots in science and public health. Ethics, cost, and real-world complexity sometimes make RCTs impossible or uninformative, requiring other methods to fill in the gaps. A complete picture of evidence comes only from weaving together many types of studies.

This article offers a good discussion of covariate adjustment in cluster randomised trials:
https://pmc.ncbi.nlm.nih.gov/articles/PMC12550654/

There are other good resources out there, but this one is especially handy because of the breadth of topics covered:
statistical precision, bias from differential recruitment or missing data, how to select covariates, model-based approaches, missing-data techniques, and an illustration in a case study. (A minimal sketch of the model-based piece follows below.)
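
To make the model-based approach concrete, here is a sketch, under my own assumptions, of covariate adjustment in a cluster randomised trial using a linear mixed model with a random intercept per cluster. The data are simulated, and statsmodels is simply one common choice, not necessarily what the guide itself uses.

```python
# Sketch: covariate-adjusted analysis of a cluster randomised trial
# using a linear mixed model with a cluster-level random intercept.
# Simulated data; statsmodels is an assumption, not the guide's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_clusters, m = 20, 30                        # clusters, patients per cluster
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(rng.permutation([0, 1] * (n_clusters // 2)), m)
u = rng.normal(0, 1.0, n_clusters)[cluster]   # cluster random effect
baseline = rng.normal(0, 1, n_clusters * m)   # individual-level covariate
y = 0.5 * treat + 0.8 * baseline + u + rng.normal(0, 1, n_clusters * m)

df = pd.DataFrame({"y": y, "treat": treat,
                   "baseline": baseline, "cluster": cluster})

# Adjusting for the baseline covariate tightens the CI around the
# treatment effect; the random intercept handles within-cluster correlation.
fit = smf.mixedlm("y ~ treat + baseline", df, groups=df["cluster"]).fit()
print(fit.summary())  # the 'treat' coefficient should be close to 0.5
```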

#RCT #Trials #StudyDesign

Covariate adjustment in cluster randomised trials: a practical guide

Covariate adjustment can offer several potential benefits in the analysis of cluster randomised trials. These benefits include increasing statistical precision (ie, narrowing width of confidence intervals), as well as potentially reducing any bias ...

Learn to Identify Epidemiologic Study Designs: Case Series, Cohort, Case-Control & More

Master epidemiology by matching vignettes to the correct study designs. Learn key clues to identify cross-sectional, cohort, case-control, twin, adoption, and ecological studies — ideal for medical students and public health professionals.

mymedschool.org

A review of the use of #Estimands in UK Health Research Authority protocols (k = 122) offers insights for improving protocols, since many specifications were incomplete or incorrect:
https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-025-08991-8

#RCT #StudyDesign #Registration #Causality

Further resources:

How estimands help state the exact research question of a study and support the interpretation of results:
https://www.bmj.com/content/384/bmj-2023-076316

Application of the estimand framework to studies with Patient-Reported Outcomes:
https://jpro.springeropen.com/articles/10.1186/s41687-020-00218-5
#HRQOL

Are estimands being correctly used? A review of UK research protocols - Trials

Background: The use of estimands in clinical trials was formalised with the adoption of the final International Conference on Harmonisation E9 Addendum on Estimands and Sensitivity Analysis in Clinical Trials (ICH E9(R1) Addendum) in November 2019. The declared objective of the ICH E9(R1) Addendum is to bring clarity and transparency to the research question of interest. For this to be achieved, the estimand must be described in accordance with the requirements of the ICH E9(R1) Addendum so that the target treatment effect is clear to all stakeholders. Previous reviews of publications and published protocols have found that few trials explicitly defined the primary estimand. To obtain a more complete picture of how the use of estimands has changed over time, whether trials are using estimands correctly (i.e. correctly defining the five attributes of an estimand), and which strategies are being used to handle intercurrent events, we obtained access to an extensive database of original research protocols (n = 29,212) submitted to the United Kingdom’s Health Research Authority, which oversees ethical review of clinical trials.

Methods: Protocols were eligible for review if they included the term ‘estimand’ and attempted to define at least one attribute of the primary estimand. For eligible protocols, we extracted information on trial characteristics such as whether the trial was randomized and the therapeutic area, as well as whether the estimand attributes used for the primary outcome were correctly defined, and which strategies were used to handle intercurrent events.

Results: We found that the number of protocols defining a primary estimand increased starkly with publication of the ICH E9(R1) Addendum (approximately 3 protocols/year pre-ICH E9(R1) Addendum vs. 18 protocols/year during the consultation period vs. 23 protocols in the year following the adoption of the ICH E9(R1) Addendum). However, the description of the primary estimand was suboptimal; many protocols failed to mention specific attributes (such as population or treatment conditions) in the estimand description, and many protocols incorrectly defined estimand attributes (e.g. by describing the estimand population based on their analysis population).

Conclusions: Although release of the ICH E9(R1) Addendum has dramatically increased the use of estimands in clinical trials, their reporting is suboptimal. There is still work to be done to ensure estimands reach their full potential in bringing clarity and focus to research questions.
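
Since the review found protocols omitting or mis-defining individual attributes, one low-tech safeguard is to treat the five ICH E9(R1) attributes as required fields in the protocol template. A minimal sketch of that idea (the attribute names follow the Addendum; the Estimand class and all example values are my own invention):

```python
# Sketch: the five estimand attributes of ICH E9(R1) as required fields,
# so a protocol template cannot silently leave one undefined.
# Field names follow the Addendum; all example values are invented.
from dataclasses import dataclass, fields

@dataclass
class Estimand:
    population: str           # the clinical question's target population
    treatment: str            # treatment condition(s) being compared
    variable: str             # endpoint / outcome variable
    intercurrent_events: str  # handling strategy, e.g. "treatment policy"
    summary_measure: str      # population-level summary, e.g. mean difference

    def validate(self) -> None:
        missing = [f.name for f in fields(self)
                   if not getattr(self, f.name).strip()]
        if missing:
            raise ValueError(f"Undefined estimand attributes: {missing}")

primary = Estimand(
    population="adults with moderate depression (illustrative)",
    treatment="app-based CBT vs waitlist control",
    variable="PHQ-9 total score at week 12",
    intercurrent_events="treatment policy for treatment discontinuation",
    summary_measure="difference in means at week 12",
)
primary.validate()  # raises if any of the five attributes is blank
```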

Which #SampleSize to use in your pilot or feasibility trial?

Well, you won't find the answer in this review of studies on the #ISRCTN registry (2013 to 2020):
https://pilotfeasibilitystudies.biomedcentral.com/articles/10.1186/s40814-023-01416-w

But it is a good introduction to the topic, and with 57% of studies not reaching their target sample size, many may, interestingly, not provide the very information they were designed to offer!

#StudyDesign #RCT

A review of sample sizes for UK pilot and feasibility studies on the ISRCTN registry from 2013 to 2020 - Pilot and Feasibility Studies

Background: Pilot and feasibility studies provide information to be used when planning a full trial. A sufficient sample size within the pilot/feasibility study is required so this information can be extracted with suitable precision. This work builds upon previous reviews of pilot and feasibility studies to evaluate whether the target sample size aligns with recent recommendations and whether these targets are being reached.

Methods: A review of the ISRCTN registry was completed using the keywords “pilot” and “feasibility”. The inclusion criteria were UK-based randomised interventional trials that started between 2013 (end of the previous review) and 2020. Target sample size, actual sample size and key design characteristics were extracted. Descriptive statistics were used to present sample sizes overall and by key characteristics.

Results: In total, 761 studies were included in the review, of which 448 (59%) were labelled feasibility studies, 244 (32%) pilot studies and 69 (9%) described as both pilot and feasibility studies. Over all included pilot and feasibility studies (n = 761), the median target sample size was 30 (IQR 20–50). This was consistent when split by those labelled as a pilot or feasibility study. Slightly larger sample sizes (median = 33, IQR 20–50) were shown for those labelled both pilot and feasibility (n = 69). Studies with a continuous outcome (n = 592) had a median target sample size of 30 (IQR 20–43) whereas, in line with recommendations, this was larger for those with binary outcomes (median = 50, IQR 25–81, n = 97). There was no descriptive difference in the target sample size based on funder type. In studies where the achieved sample size was available (n = 301), 173 (57%) did not reach their sample size target; however, the median difference between the target and actual sample sizes was small at just minus four participants (IQR −25–0).

Conclusions: Target sample sizes for pilot and feasibility studies have remained constant since the last review in 2013. Most studies in the review satisfy the earlier, more lenient recommendations; however, they do not satisfy the most recent, largest recommendation. Additionally, most studies did not reach their target sample size, meaning the information collected may not be sufficient to estimate the required parameters for future definitive randomised controlled trials.
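
To see why missing a pilot's target matters, note that the precision of the outcome-SD estimate (the quantity most pilot sample-size rules aim at) degrades quickly at small n. A minimal sketch with made-up numbers, using the usual chi-square interval for an SD:

```python
# Sketch: 95% CI for the outcome SD estimated from a pilot study of
# size n, via the chi-square distribution. Numbers are illustrative.
import math
from scipy.stats import chi2

def sd_ci(s: float, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Confidence interval for the true SD given sample SD s from n obs."""
    df = n - 1
    lo = s * math.sqrt(df / chi2.ppf(1 - alpha / 2, df))
    hi = s * math.sqrt(df / chi2.ppf(alpha / 2, df))
    return lo, hi

s = 10.0
# target n = 30 vs achieved n = 26 (the review's median shortfall was 4)
for n in (30, 26):
    lo, hi = sd_ci(s, n)
    print(f"n={n}: SD 95% CI ({lo:.1f}, {hi:.1f})")
```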

The first rule of #Postdoclife is:
Try not to go crazy #worklifebalance #mentalhealth

The second rule of #Postdoclife is:
Do something useful for science #studydesign #deepwork #methodology #scientificsoftwaredevelopment
#hardworkisrare #focusisprecious

The third rule of #Postdoclife is:
Tell others the cool thing you did #writepapers #outreach #presentations

Go to rule 1.

#AcademicChatter #Astrodon

Thank you to #HealthPsychology and Behavioral Medicine @unibern for writing a post about the recent Summer Course 2025, “Contemporary Evaluation of Interventions: Mobile, Digital, and Pragmatic”
https://www.linkedin.com/posts/gesundheitspsychologie-und-verhaltensmedizin-gpv-unibe_gpv-unibe-mhealth-activity-7340302148112408577-4_JW/

#StudyDesign #RCT #HiddenCurriculum #ECRs

Our PhD students — Carole Rüttimann, Melanie Bamert, and Robert Edgren — recently participated in the summer course “Contemporary Evaluation of Interventions: Mobile, Digital, and Pragmatic” hosted by the Doctoral Program in Brain and Behavioral Sciences at University of Bern. This course offered an excellent opportunity to deepen their understanding of digital intervention design and evaluation.

Key takeaways:

💡 3 key principles of app-based interventions and how to apply them: bridging theory and practice, digital intervention design, scientific rigor in evaluation

💡 Designing the control condition of a trial requires consideration of several factors. For example, baseline support may already be substantial, with additional treatment components potentially adding only minimal benefits.

Moreover, this course made it possible to exchange insights with fellow early-career researchers from across disciplines and learn from senior researchers’ experiences during a Career Insight Panel discussion. Thank you for sharing your valuable lessons learned, Maria Stein, Jan Rasmus Böhnke, Regula Neuenschwander, and Mirjam Stieger.

We’re excited to see how these learnings will shape our PhD students’ ongoing research! And a big thanks goes out to Madeleine Haenggli, Svenja Staubli, and Nicole Ruffieux for organizing this insightful and interactive course!

#GPV #Unibe #mHealth #Apps #Gesundheit #PhDLife
Phil.-hum. Fakultät der Universität Bern

A well-justified epistemological foundation is a key requirement for a #DelphiStudy:
https://www.sciencedirect.com/science/article/pii/S1865921722000769

The team argues that #ReportingGuidelines encourage researchers to reflect on and disclose key assumptions and methodological choices in such studies.

#StudyDesign #Consensus

Co-developed at #DundeeUni and #SFU
"A Toolkit for Building a Collective of Older Adult Researchers"
https://www.sfu.ca/starinstitute/about/institute-activities/research-project--coar.html
(COAR)

"A practical resource for engaging older adults as active partners in community-based research"

#Participatory #StudyDesign