#statstab #470 Low power bias {shiny} app

Thoughts: easily shows what conducting an underpowered study does to your effect size estimate (type M error).

#teaching #bias #power #typeM #typeS #QRPs #underpowered #samplesize

https://c-jaksic.shinyapps.io/small_power_bias/

Low power bias
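
The type M (magnitude) error the app illustrates can be sketched with a quick simulation. This is a minimal sketch, not the app's code: the sample size, true effect, and the normal approximation to the two-sample test are all assumptions chosen here for illustration.

```python
import math
import random

random.seed(1)

TRUE_D = 0.2    # true standardized effect (small)
N = 20          # per-group n: badly underpowered for d = 0.2
SIMS = 5000
Z_CRIT = 1.96   # normal approximation to the two-sided 5% test

sig_abs_effects = []
for _ in range(SIMS):
    a = [random.gauss(TRUE_D, 1) for _ in range(N)]
    b = [random.gauss(0.0, 1) for _ in range(N)]
    mean_a = sum(a) / N
    mean_b = sum(b) / N
    var_a = sum((x - mean_a) ** 2 for x in a) / (N - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (N - 1)
    pooled_sd = math.sqrt((var_a + var_b) / 2)
    d_obs = (mean_a - mean_b) / pooled_sd   # observed Cohen's d
    z = d_obs * math.sqrt(N / 2)            # approximate test statistic
    if abs(z) > Z_CRIT:
        sig_abs_effects.append(abs(d_obs))

power = len(sig_abs_effects) / SIMS
exaggeration = (sum(sig_abs_effects) / len(sig_abs_effects)) / TRUE_D
print(f"power ~ {power:.2f}; significant |d| overstates the true effect ~{exaggeration:.1f}x")
```

With these assumed numbers, power is around 10%, and only estimates beyond the significance threshold (roughly |d| > 0.62 here) survive, so the significant results overstate the true effect severalfold.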

Another #PeerReview done.

Manuscript: ~4,000 words
Review: ~2,700 words
Time: 5 hrs

The paper is in a key area of my methodological work, so it was really interesting, but I needed to get stuck in.

Two collaboration projects on the design and reporting of #RCTs that might be useful for others:

The BRAINS study, which presents 19 factors to aid trial design:
https://pubmed.ncbi.nlm.nih.gov/37982521/

The DELTA2 guidance on specifying a target difference and reporting the #SampleSize calculation for RCTs:
https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-018-2884-0

#StudyDesign

Appropriate design and reporting of superiority, equivalence and non-inferiority clinical trials incorporating a benefit-risk assessment: the BRAINS study including expert workshop - PubMed

Funded by the Medical Research Council UK and the National Institute for Health and Care Research as part of the Medical Research Council-National Institute for Health and Care Research Methodology Research programme.


#statstab #448 {metaforest} Small sample meta-analysis

Thoughts: "a machine-learning based, exploratory approach to identify relevant moderators in meta-analysis"

#ML #MachineLearning #metaanalysis #smallsample #samplesize #heterogeneity #moderator

https://cjvanlissa.github.io/metaforest/articles/Introduction_to_metaforest.html

Introduction to metaforest

#samplesize and #ethics question: You plan a study needing n=100 (50 per cell). Your power analysis is all set up and pre-registered. But, because you do research at a university, you are told you need to allow for more participants (students), as there is a set number of course credits they all need to reach. What do you do?

#statstab #440 Computing Statistical Power for the Difference in Differences Design

Thoughts: DiD studies are all the rage in observational research. But how does the concept of power apply to them?

#poweranalysis #DiD #causalinference #samplesize #observational

https://journals.sagepub.com/doi/10.1177/0193841X251380898
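
To give a flavour of the question, here is a generic textbook sketch, not the paper's procedure: simulate a 2×2 DiD design and count how often the interaction contrast is significant. The cell size, effect size, and known-variance z-test are all assumptions for illustration.

```python
import math
import random

random.seed(2)

N = 50        # observations per group-by-period cell (assumed)
TAU = 0.8     # true treatment effect on the treated in the post period
SIGMA = 1.0
SIMS = 2000
Z_CRIT = 1.96

def cell_mean(mu):
    """Mean of N draws from a normal cell with known SD SIGMA."""
    return sum(random.gauss(mu, SIGMA) for _ in range(N)) / N

hits = 0
for _ in range(SIMS):
    # A group effect (0.5) and a time trend (0.3) cancel out of the DiD contrast.
    y00 = cell_mean(0.0)               # control, pre
    y01 = cell_mean(0.3)               # control, post
    y10 = cell_mean(0.5)               # treated, pre
    y11 = cell_mean(0.5 + 0.3 + TAU)   # treated, post
    did = (y11 - y10) - (y01 - y00)    # difference-in-differences estimate
    se = SIGMA * math.sqrt(4 / N)      # SE of a +/-1 combination of 4 cell means
    if abs(did / se) > Z_CRIT:
        hits += 1

power = hits / SIMS
print(f"simulated power ~ {power:.2f}")
```

The simulation makes the point that power for a DiD design depends on the contrast's standard error (four cell means, hence the factor of 4), not just a single group comparison.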

#statstab #426 Execution of Replications

Thoughts: A good resource for conducting replications. Different ways to plan your sample size and consider "success/failure".

#replication #OpenScience #metaanalysis #samplesize #sesoi #smalltelescope

https://forrt.org/replication_handbook/execution_replications.html

6  Execution of Replications – Handbook for Reproduction and Replication Studies

How to carry out reproductions and replications in the social, cognitive, and behavioral sciences

#statstab #421 Sample Size Planning for Statistical Power and Accuracy in Parameter Estimation

Thoughts: AIPE is based on controlling the width of the CI.
Sample size can be computed independently of the effect size!

#samplesize #confidenceintervals #AIPE #power #poweranalysis #precision #accuracy #research #design

https://www.annualreviews.org/content/journals/10.1146/annurev.psych.59.103006.093735

Sample Size Planning for Statistical Power and Accuracy in Parameter Estimation | Annual Reviews

This review examines recent advances in sample size planning, not only from the perspective of an individual researcher, but also with regard to the goal of developing cumulative knowledge. Psychologists have traditionally thought of sample size planning in terms of power analysis. Although we review recent advances in power analysis, our main focus is the desirability of achieving accurate parameter estimates, either instead of or in addition to obtaining sufficient power. Accuracy in parameter estimation (AIPE) has taken on increasing importance in light of recent emphasis on effect size estimation and formation of confidence intervals. The review provides an overview of the logic behind sample size planning for AIPE and summarizes recent advances in implementing this approach in designs commonly used in psychological research.
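
The core AIPE idea, in the simplest case of a single mean with known SD and a normal approximation (a minimal sketch, not the review's general machinery):

```python
import math

def aipe_n(sigma, half_width, z=1.96):
    """Smallest n whose 95% CI for a mean has at most the target half-width.

    Known-sigma normal approximation: half-width = z * sigma / sqrt(n),
    so n = (z * sigma / half_width)^2. The effect size never enters.
    """
    return math.ceil((z * sigma / half_width) ** 2)

# e.g. to pin a mean down to +/- 0.2 SD units:
print(aipe_n(sigma=1.0, half_width=0.2))   # -> 97
```

This is why the post can say sample size is computed independently of the effect size: only the SD and the desired precision appear in the formula.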

Which #SampleSize to use in your pilot or feasibility trial?

Well, you won't find the answer in this review of studies in #ISRCTN (2013 to 2020)
https://pilotfeasibilitystudies.biomedcentral.com/articles/10.1186/s40814-023-01416-w

But it is a good introduction to the topic, and with 57% of studies not reaching their target sample size, many may, interestingly, not provide the information they were designed to offer!

#StudyDesign #RCT

A review of sample sizes for UK pilot and feasibility studies on the ISRCTN registry from 2013 to 2020 - Pilot and Feasibility Studies

Background Pilot and feasibility studies provide information to be used when planning a full trial. A sufficient sample size within the pilot/feasibility study is required so this information can be extracted with suitable precision. This work builds upon previous reviews of pilot and feasibility studies to evaluate whether the target sample size aligns with recent recommendations and whether these targets are being reached. Methods A review of the ISRCTN registry was completed using the keywords "pilot" and "feasibility". The inclusion criteria were UK-based randomised interventional trials that started between 2013 (end of the previous review) and 2020. Target sample size, actual sample size and key design characteristics were extracted. Descriptive statistics were used to present sample sizes overall and by key characteristics. Results In total, 761 studies were included in the review, of which 448 (59%) were labelled feasibility studies, 244 (32%) pilot studies and 69 (9%) described as both pilot and feasibility studies. Over all included pilot and feasibility studies (n = 761), the median target sample size was 30 (IQR 20–50). This was consistent when split by those labelled as a pilot or feasibility study. Slightly larger sample sizes (median = 33, IQR 20–50) were shown for those labelled both pilot and feasibility (n = 69). Studies with a continuous outcome (n = 592) had a median target sample size of 30 (IQR 20–43) whereas, in line with recommendations, this was larger for those with binary outcomes (median = 50, IQR 25–81, n = 97). There was no descriptive difference in the target sample size based on funder type. In studies where the achieved sample size was available (n = 301), 173 (57%) did not reach their sample size target; however, the median difference between the target and actual sample sizes was small at just minus four participants (IQR −25–0).
Conclusions Target sample sizes for pilot and feasibility studies have remained constant since the last review in 2013. Most studies in the review satisfy the earlier, more lenient recommendations but do not satisfy the most recent, largest recommendation. Additionally, most studies did not reach their target sample size, meaning the information collected may not be sufficient to estimate the required parameters for future definitive randomised controlled trials.


#statstab #417 {pwrss} Practical Power Analysis in R

Thoughts: Some useful vignettes for conducting power analyses for various designs, constraints, and data types.

#poweranalysis #samplesize #r #power #guide #tutorial #sesoi #equivalence #tost #rstats

https://cran.r-project.org/web/packages/pwrss/vignettes/examples.html

Practical Power Analysis in R