Maarten van Smeden

@MaartenvSmeden
2.4K Followers
155 Following
44 Posts
Medical statistician and epidemiologist • associate professor • head of methodology research program at Julius Center for Health Sciences and Primary Care, UMC Utrecht • own views

Does anyone know of a funder of clinical trials that has some form of open review? Or that is open with its peer reviewers about the applicant’s response, the committee deliberations and the funding decision?

I find it very important to do my part as a statistical reviewer of clinical trials, but my review for the Dutch funder ZonMW just disappeared into a black hole.

Please boost this question!

RT @MaartenvSmeden
Great new read from @stephensenn for any trialist, observationalist or causalist with an interest in analyzing data that are nested (or clustered, multilevel,...)
https://doi.org/10.1007/s10654-022-00941-x
Student and the Lanarkshire milk experiment - European Journal of Epidemiology

A detailed examination of the 1930 Lanarkshire Milk Experiment (LME) by the famous statistician William Sealy Gossett (“Student”), which appeared in Biometrika in 1931, is re-examined from a more modern perspective. The LME had a complicated design whereby 67 schools in Lanarkshire were allocated to receive either raw or pasteurised milk but pupils within the schools were allocated to either receive milk or to act as controls. Student’s criticisms are considered in detail and examined in terms of subsequent developments on the design and analysis of experiments, in particular as regards appropriate estimation of standard errors of treatment estimates when an incomplete blocks structure has been used. An analogy with a more modern trial in osteoarthritis is made. Suggestions are made as to how analysis might proceed if the original data were available. Some lessons for observational studies in epidemiology are drawn and it is speculated that hidden clustering structures might be an explanation as to why results may vary from observational study to observational study by more than conventionally calculated standard errors might suggest.
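The abstract's closing point, that hidden clustering can make results vary by more than conventional standard errors suggest, is easy to demonstrate. Below is a minimal simulation of my own (not from the paper; all numbers are illustrative) in which treatment is allocated by school, as in the LME, and pupils within a school share a random school effect. The standard error that treats pupils as independent is far smaller than the actual sampling variability of the treatment estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

N_CLUSTERS, N_PER = 60, 20        # e.g. schools, and pupils per school
SCHOOL_SD, PUPIL_SD = 1.0, 1.0    # between-school and within-school variation

def one_trial():
    """Difference in means under a true null, with treatment assigned by school."""
    treated = np.repeat(np.arange(N_CLUSTERS) < N_CLUSTERS // 2, N_PER)
    school_effect = np.repeat(rng.normal(0, SCHOOL_SD, N_CLUSTERS), N_PER)
    y = school_effect + rng.normal(0, PUPIL_SD, N_CLUSTERS * N_PER)
    return y[treated].mean() - y[~treated].mean()

estimates = np.array([one_trial() for _ in range(500)])

n_arm = N_CLUSTERS * N_PER // 2
total_var = SCHOOL_SD**2 + PUPIL_SD**2
naive_se = np.sqrt(2 * total_var / n_arm)    # pretends pupils are independent
correct_se = np.sqrt(2 * (SCHOOL_SD**2 / (N_CLUSTERS // 2) + PUPIL_SD**2 / n_arm))

print(f"naive SE (independence): {naive_se:.3f}")
print(f"cluster-aware SE:        {correct_se:.3f}")
print(f"empirical SD of effect:  {estimates.std():.3f}")  # tracks the cluster-aware SE
```

With these (made-up) variance components the naive standard error is roughly a third of the true sampling variability, which is exactly the kind of over-optimism the abstract warns about.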

NEW PAPER
Overview of performance measures for time to event prediction models. Comes with elaborate R and SAS code

Journal version (paywall) 👉 https://doi.org/10.7326/M22-0844
Preprint 👉 https://doi.org/10.1101/2022.03.17.22272411

Assessing Performance and Clinical Usefulness in Prediction Models With Survival Outcomes: Practical Guidance for Cox Proportional Hazards Models | Annals of Internal Medicine

Risk prediction models need thorough validation to assess their performance. Validation of models for survival outcomes poses challenges due to the censoring of observations and the varying time horizon at which predictions can be made. This article describes measures to evaluate predictions and the potential improvement in decision making from survival models based on Cox proportional hazards regression. As a motivating case study, the authors consider the prediction of the composite outcome of recurrence or death (the “event”) in patients with breast cancer after surgery. They developed a simple Cox regression model with 3 predictors, as in the Nottingham Prognostic Index, in 2982 women (1275 events over 5 years of follow-up) and externally validated this model in 686 women (285 events over 5 years). Improvement in performance was assessed after the addition of progesterone receptor as a prognostic biomarker. The model predictions can be evaluated across the full range of observed follow-up times or for the event occurring by the end of a fixed time horizon of interest. The authors first discuss recommended statistical measures that evaluate model performance in terms of discrimination, calibration, or overall performance. Further, they evaluate the potential clinical utility of the model to support clinical decision making according to a net benefit measure. They provide SAS and R code to illustrate internal and external validation. The authors recommend the proposed set of performance measures for transparent reporting of the validity of predictions from survival models.
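Among the discrimination measures such a validation typically reports is Harrell's concordance index. The paper itself provides full R and SAS code; purely as an illustration of the idea, here is a minimal numpy version of the C-index for censored data (a common variant; implementations differ in how they handle tied event times):

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's C: fraction of comparable pairs where the higher predicted
    risk belongs to the subject who failed earlier. A pair is comparable
    only when the earlier time is an observed event (not a censoring);
    ties in predicted risk count as 1/2."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = comparable = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                    # censored subjects cannot anchor a pair
        later = time > time[i]          # subjects known to survive past time[i]
        comparable += later.sum()
        concordant += (risk[i] > risk[later]).sum() \
                      + 0.5 * (risk[i] == risk[later]).sum()
    return concordant / comparable

# Toy data: predicted risks perfectly reverse-order the survival times
print(harrell_c(time=[2, 4, 6, 8], event=[1, 1, 0, 1],
                risk=[0.9, 0.7, 0.3, 0.1]))  # → 1.0
```

The censored subject (time 6) never anchors a pair but still serves as the "survived longer" member of pairs with earlier events, which is how censoring enters the comparable-pair count.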

A little tradition that forces me to rethink which methods papers I liked most in the last year, appearing on the bird site:
https://twitter.com/MaartenvSmeden/status/1606954677295349760?s=20&t=VS7XJG5J230iMhM0BiK8yA

“This is my *top 10* favorite methods papers of 2022. Appearing in a single thread and in random order”

Prediction modelers be like: fairness of my model is likely not a problem because it scores pretty high on discrimination performance
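The joke has a serious core: discrimination is a population-level ranking measure and says nothing about subgroup calibration. A toy simulation (hypothetical groups and numbers, my own illustration) where a model attains a respectable overall AUC while systematically under-predicting risk in one subgroup:

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(y, p):
    """Probability that a random event gets a higher prediction than a
    random non-event (ties count 1/2), i.e. the c-statistic."""
    pos, neg = p[y == 1], p[y == 0]
    higher = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return higher + 0.5 * ties

n = 5000
group = rng.integers(0, 2, n)              # hypothetical subgroup indicator
x = rng.normal(0, 1, n)                    # the only predictor the model uses
true_logit = x + 1.0 * group               # group 1 carries extra baseline risk
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(int)

pred = 1 / (1 + np.exp(-x))                # model is blind to group

print(f"overall AUC: {auc(y, pred):.2f}")  # discrimination looks acceptable
for g in (0, 1):
    m = group == g
    print(f"group {g}: mean predicted {pred[m].mean():.2f} "
          f"vs observed {y[m].mean():.2f}")
```

The model ranks patients reasonably well overall, yet its predictions run well below the observed event rate in group 1, which is the fairness problem a high c-statistic quietly hides.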

Just received another review invitation. But this marks the first time I really asked myself: is this you, ChatGPT?

I was once in this magical place where scientists were as critical of their measurements as they are of h-indices and p-values. And then I woke up.

ICYMI:

Out of balance

An essay on covariate adjustment in randomized controlled trials in medicine.

https://statsepi.substack.com/p/out-of-balance
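The essay's theme can be sketched in a few lines of simulation (illustrative only, not taken from the essay): in a randomized trial, adjusting for a strongly prognostic baseline covariate leaves the treatment-effect estimate centered on the truth while sharply reducing its variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n=200, beta_x=2.0, effect=0.5):
    """One randomized trial with a strongly prognostic baseline covariate x."""
    t = rng.integers(0, 2, n).astype(float)    # 1:1 randomization
    x = rng.normal(0, 1, n)                    # baseline covariate
    y = effect * t + beta_x * x + rng.normal(0, 1, n)
    unadjusted = y[t == 1].mean() - y[t == 0].mean()
    X = np.column_stack([np.ones(n), t, x])    # intercept, treatment, covariate
    adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return unadjusted, adjusted

results = np.array([one_trial() for _ in range(2000)])
print(f"mean estimate:  unadjusted {results[:, 0].mean():.2f}, "
      f"adjusted {results[:, 1].mean():.2f}")
print(f"SD of estimate: unadjusted {results[:, 0].std():.2f}, "
      f"adjusted {results[:, 1].std():.2f}")
```

Both estimators recover the true effect on average; the gain from adjustment is purely in precision, which is why "the groups were already balanced" is not an argument against adjusting.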

Life is pain, especially your data

Oh good! This study on preprints vs journal versions has a systematically sampled group: 19 preprint/journal pairs from a systematic review of Covid prediction models.

Reporting quality of preprints: lousy.

After peer review: trivially less lousy.

Hudda et al. (incl. @MaartenvSmeden)
https://www.jclinepi.com/article/S0895-4356(22)00323-7/fulltext

RT @MaartenvSmeden
Can someone help me with the correct interpretation of AUC = 0.86? I cannot decide between "bordering on excellence" or "trending towards magnificence"