#statstab #471 Analysis Resources for N-of-1 research
Thoughts: Some cool and some questionable stuff, but a good place to start looking.
#Nof1 #analysis #resources #estimand #methods #sced #stats #smallsample #scd
#statstab #311 The analysis of continuous data from n-of-1 trials using paired cycles: a simple tutorial
Thoughts: @StephenSenn shows how to treat multiple #nof1 studies as a meta-analysis.
#sced #nof1 #metaanalysis #tutorial #clinical
https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-024-07964-7
N-of-1 trials are defined and the popular paired cycle design is introduced, together with an explanation as to how suitable sequences may be constructed.

Various approaches to analysing such trials are explained and illustrated using a simulated data set. It is explained how choosing an appropriate analysis depends on the question one wishes to answer. It is also shown that for a given question, various equivalent approaches to analysis can be found, a fact which may be exploited to expand the possible software routines that may be used.

Sets of N-of-1 trials are analogous to sets of parallel group trials. This means that software for carrying out meta-analysis can be used to combine results from N-of-1 trials. In doing so, it is necessary to make one important change, however. Because degrees of freedom for estimating variances for individual subjects will be scarce, it is advisable to estimate local standard errors using pooled variances. How this may be done is explained, and fixed and random effect approaches to combining results are illustrated.
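To make the pooled-variance idea in the abstract concrete: each subject contributes a mean of their per-cycle (treatment minus control) differences, the within-subject variance is pooled across subjects, and the subject means are combined by inverse-variance weighting. This is a minimal fixed-effect sketch in Python under those assumptions, not Senn's actual code; the function and variable names are my own.

```python
import math

def paired_cycle_fixed_effect(subject_cycles):
    """Fixed-effect combination of a set of N-of-1 trials.

    subject_cycles: list of lists; each inner list holds the per-cycle
    (treatment - control) differences for one subject.
    Because each subject alone offers few degrees of freedom, the
    within-subject variance is pooled across all subjects.
    """
    means, ns = [], []
    ss, dfs = 0.0, 0
    for diffs in subject_cycles:
        n = len(diffs)
        m = sum(diffs) / n
        means.append(m)
        ns.append(n)
        ss += sum((x - m) ** 2 for x in diffs)  # within-subject sum of squares
        dfs += n - 1
    pooled_var = ss / dfs                       # pooled within-subject variance
    # Each subject's mean gets weight 1 / (pooled_var / n) = n / pooled_var
    weights = [n / pooled_var for n in ns]
    estimate = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return estimate, se
```

With real data one would hand this job to meta-analysis software (as the tutorial suggests), which also supplies the random-effects alternative.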
Our latest paper is out!
"This is the first study to show that physical activity is potentially effective in women with breast cancer and severe depressive and anxiety symptoms"
#cancer #exercise #sleep #depression #nof1 #sced
https://www.tandfonline.com/doi/full/10.1080/28352610.2024.2435666#abstract
#statstab #220 Small is beautiful: In defense of the small-N design
Thoughts: "high power and inferential validity of the small-N design, in contrast to the lower power and inferential indeterminacy of the large-N design"
The dominant paradigm for inference in psychology is a null-hypothesis significance testing one. Recently, the foundations of this paradigm have been shaken by several notable replication failures. One recommendation to remedy the replication crisis is to collect larger samples of participants. We argue that this recommendation misses a critical point, which is that increasing sample size will not remedy psychology's lack of strong measurement, lack of strong theories and models, and lack of effective experimental control over error variance. In contrast, there is a long history of research in psychology employing small-N designs that treats the individual participant as the replication unit, which addresses each of these failings, and which produces results that are robust and readily replicated. We illustrate the properties of small-N and large-N designs using a simulated paradigm investigating the stage structure of response times. Our simulations highlight the high power and inferential validity of the small-N design, in contrast to the lower power and inferential indeterminacy of the large-N design. We argue that, if psychology is to be a mature quantitative science, then its primary theoretical aim should be to investigate systematic, functional relationships as they are manifested at the individual participant level and that, wherever possible, it should use methods that are optimized to identify relationships of this kind.
#statstab #154 {fxl} package for plotting Single Case Designs (SCD)
Thoughts: SCDs and SCEDs are very underused in Psychology. Since I've discovered them I've promoted their use. Here, some nice (publication level) plots.
Our latest #preprint
Feasibility and acceptability of a remote #physicalactivity intervention coupled with #SMS in #women with #breast #cancer and severe #depressive or #anxiety symptoms
https://osf.io/preprints/psyarxiv/zyh36
#sced #nof1 #workingalliance
#research #academia #exercise #psychology #HealthResearch
#ehealth
#statstab #10 A calculator for single-case effect size indices
Thoughts: Reviewer 2 said my N=1 analysis was "the effect of Bob" and meaningless. Later, I came across Single-Case Experimental Design (SCED) research. One case can have value, just a different kind!
Provides R functions for calculating basic effect size indices for single-case designs, including several non-overlap measures and parametric effect size measures, and for estimating the gradual effects model developed by Swan and Pustejovsky (2018) <DOI:10.1080/00273171.2018.1466681>. Standard errors and confidence intervals (based on the assumption that the outcome measurements are mutually independent) are provided for the subset of effect sizes indices with known sampling distributions.
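One of the non-overlap measures such a calculator typically covers is the Nonoverlap of All Pairs (NAP): the proportion of all (baseline, treatment) observation pairs in which the treatment value improves on the baseline value, counting ties as half. This is a minimal Python sketch of the general definition, not the package's own R code:

```python
def nap(baseline, treatment, increase_is_improvement=True):
    """Nonoverlap of All Pairs (NAP) for a single-case A-B comparison.

    Compares every baseline observation with every treatment observation;
    an improved pair scores 1, a tie scores 0.5, an overlap scores 0.
    """
    if not increase_is_improvement:
        baseline, treatment = treatment, baseline
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else (0.5 if t == b else 0.0)
                for b, t in pairs)
    return score / len(pairs)
```

NAP of 1.0 means complete separation between phases; values near 0.5 mean the phases overlap heavily. Note the caveat from the description above: inference for such indices usually assumes independent observations.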
A good synthesis about current limits in #nof1 and #SCED (studies with a single #case #experimental #design) in #academic #psychiatry
N-of-1 trials, a special case of Single Case Experimental Designs (SCEDs), are prominent in clinical medical research and specifically psychiatry due to the growing significance of precision/personalized medicine. It is imperative that these clinical trials be conducted, and their data analyzed, using the highest standards to guard against threats to validity. This systematic review examined publications of medical N-of-1 trials to examine whether they meet (a) the evidence standards and (b) the criteria for demonstrating evidence of a relation between an independent and an outcome variable per the What Works Clearinghouse (WWC) standards for SCEDs. We also examined the appropriateness of the data analytic techniques in the special context of N-of-1 designs. We searched for empirical journal articles that used an N-of-1 design and were published between 2013 and 2022 in PubMed and Web of Science. Protocols or methodological papers and studies that did not manipulate a medical condition were excluded. We reviewed 115 articles; 4 (3.48%) met all WWC evidence standards. Most (99.1%) failed to report an appropriate design-comparable effect size; neither did they report a confidence/credible interval, and 47.9% did not report the raw data either, rendering meta-analysis impossible. Most (83.8%) ignored autocorrelation, and many (65.8%) did not meet distributional assumptions. These methodological problems could lead to significantly inaccurate effect sizes. It is necessary to implement stricter guidelines for the clinical conduct and analyses of medical N-of-1 trials. Reporting neither raw data nor design-comparable effect sizes renders meta-analysis impossible and is antithetical to the spirit of open science.
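A quick sanity check against the autocorrelation problem flagged in the review is to compute the lag-1 autocorrelation of the outcome series before trusting effect sizes that assume independent observations. A minimal sketch (my own illustration, not code from the review):

```python
def lag1_autocorrelation(series):
    """Sample lag-1 autocorrelation of a single-case outcome series.

    Values far from zero suggest that successive observations are not
    independent, undermining naive effect-size inference.
    """
    n = len(series)
    mean = sum(series) / n
    # Covariance of the series with itself shifted by one time point,
    # divided by the total variance (both unnormalized, so n cancels).
    num = sum((series[i] - mean) * (series[i + 1] - mean)
              for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den
```

Single-case series are short, so this estimate is noisy; it is a red flag, not a formal test.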