#PowerAnalyses for model misspecification and #ResponseShift detection with structural equation models
https://link.springer.com/article/10.1007/s11136-024-03605-3

Whether the identified effects can be interpreted as response shift (RS) depends on assumptions; see, e.g.,
https://link.springer.com/article/10.1007/s11136-021-02846-w

https://link.springer.com/article/10.1007/s11136-019-02248-z

#Psychometrics #HRQL

Power analyses for measurement model misspecification and response shift detection with structural equation modeling - Quality of Life Research

Purpose: Statistical power for response shift detection with structural equation modeling (SEM) is currently underreported. The present paper addresses this issue by providing worked-out examples and syntaxes of power calculations relevant for the statistical tests associated with the SEM approach for response shift detection.

Methods: Power calculations and related sample-size requirements are illustrated for two modelling goals: (1) to detect misspecification in the measurement model, and (2) to detect response shift. Power analyses for hypotheses regarding (exact) overall model fit and the presence of response shift are demonstrated in a step-by-step manner. The freely available and user-friendly R-package lavaan and shiny-app ‘power4SEM’ are used for the calculations.

Results: Using the SF-36 as an example, we illustrate the specification of null-hypothesis (H0) and alternative-hypothesis (H1) models to calculate chi-square based power for the test on overall model fit, the omnibus test on response shift, and the specific test on response shift. For example, we show that a sample size of 506 is needed to reject an incorrectly specified measurement model when the actual model has two medium-sized cross-loadings. We also illustrate power calculation based on the RMSEA index for approximate fit, where H0 and H1 are defined in terms of RMSEA values.

Conclusion: By providing accessible resources to perform power analyses and emphasizing the different power analyses associated with different modeling goals, we hope to facilitate the uptake of power analyses for response shift detection with SEM and thereby enhance the stringency of response shift research.
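The paper's own calculations use lavaan and power4SEM in R. Purely to illustrate the underlying chi-square logic of RMSEA-based power (noncentrality λ = (N − 1) · df · ε², rejection threshold from the H0 distribution, power from the H1 distribution), here is a stdlib-only Python sketch; the df, RMSEA, and N values below are made-up placeholders, not the paper's SF-36 model.

```python
import math

def _reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x): series for x < a + 1,
    Lentz continued fraction for the upper tail otherwise."""
    if x <= 0.0:
        return 0.0
    log_pre = -x + a * math.log(x) - math.lgamma(a)
    if x < a + 1.0:
        term, total, n = 1.0 / a, 1.0 / a, a
        while abs(term) > abs(total) * 1e-12:
            n += 1.0
            term *= x / n
            total += term
        return total * math.exp(log_pre)
    tiny = 1e-300
    b = x + 1.0 - a
    c, d = 1.0 / tiny, 1.0 / b
    h = d
    for i in range(1, 1000):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        d = tiny if abs(d) < tiny else d
        c = b + an / c
        c = tiny if abs(c) < tiny else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < 1e-12:
            break
    return 1.0 - math.exp(log_pre) * h

def chi2_cdf(x, df, nc=0.0):
    """(Non)central chi-square CDF as a Poisson-weighted mixture of
    central chi-square CDFs (fine for moderate noncentrality)."""
    if nc == 0.0:
        return _reg_lower_gamma(df / 2.0, x / 2.0)
    lam = nc / 2.0
    total, k, w = 0.0, 0, math.exp(-lam)
    while True:
        total += w * _reg_lower_gamma(df / 2.0 + k, x / 2.0)
        k += 1
        w *= lam / k
        if w < 1e-14 and k > lam:
            return total

def chi2_ppf(p, df, nc=0.0):
    """Quantile by bisection on chi2_cdf."""
    lo, hi = 0.0, df + nc + 10.0
    while chi2_cdf(hi, df, nc) < p:
        hi *= 2.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if chi2_cdf(mid, df, nc) < p else (lo, mid)
    return (lo + hi) / 2.0

def rmsea_power(rmsea0, rmsea1, df, n, alpha=0.05):
    """Power of the chi-square test when H0 and H1 are stated as RMSEA
    values, via lambda = (n - 1) * df * rmsea**2."""
    nc0 = (n - 1) * df * rmsea0 ** 2
    nc1 = (n - 1) * df * rmsea1 ** 2
    crit = chi2_ppf(1.0 - alpha, df, nc0)  # rejection threshold under H0
    return 1.0 - chi2_cdf(crit, df, nc1)   # P(reject | H1 is true)

if __name__ == "__main__":
    # Placeholder values: exact fit under H0 (RMSEA = 0) against a
    # misspecification of RMSEA = 0.08, df = 50, alpha = .05.
    for n in (100, 200, 400):
        print(n, round(rmsea_power(0.0, 0.08, 50, n), 3))
```

With rmsea0 = 0 this is the exact-fit test; a nonzero rmsea0 gives the close-fit variant. For actual study planning, the paper's lavaan/power4SEM syntaxes remain the route to follow.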


In #ISOQOL-s #QualityTALK, Carolyn Schwartz provides an insight into her development as a researcher
https://www.isoqol.org/the-story-behind-the-elephant-in-the-room-a-long-journey-begins-with-a-single-step/

Reflecting on research experiences with people living with Multiple Sclerosis, she realised that the domains patients considered important did not form a consistent hierarchy for each patient over time. This led her into #ResponseShift research, and she discusses her most recent #SysReview on the importance of the phenomenon in #RCTs #Trials:
https://jpro.springeropen.com/articles/10.1186/s41687-022-00510-6

The story behind “The Elephant in the Room”: A long journey begins with a single step* | ISOQOL

A #SysReview from the "#ResponseShift – in Sync Working Group" analysed 150 studies
https://link.springer.com/article/10.1007/s11136-023-03495-x

Beyond interest in the psychological phenomenon itself, the size of such effects relative to intervention effects (e.g., in #RCT https://link.springer.com/article/10.1186/s41687-022-00510-6) is very important for #StudyDesign in #HRQL research (see also https://doi.org/10.1007/s11136-023-03347-8).

Hence an interesting descriptive finding: #EffectSize-s could be calculated for only 105 of these studies.

#Psychometrics

Response shift results of quantitative research using patient-reported outcome measures: a descriptive systematic review - Quality of Life Research

Purpose: The objective of this systematic review was to describe the prevalence and magnitude of response shift effects, for different response shift methods, populations, study designs, and patient-reported outcome measures (PROMs).

Methods: A literature search was performed in MEDLINE, PSYCINFO, CINAHL, EMBASE, Social Science Citation Index, and Dissertations & Theses Global to identify longitudinal quantitative studies that examined response shift using PROMs, published before 2021. The magnitude of each response shift effect (effect sizes, R-squared, or percentage of respondents with response shift) was ascertained based on reported statistical information or as stated in the manuscript. Prevalence and magnitudes of response shift effects were summarized at two levels of analysis (study and effect levels), for recalibration and reprioritization/reconceptualization separately, and for different response shift methods, and population, study design, and PROM characteristics. Analyses were conducted twice: (a) including all studies and samples, and (b) including only unrelated studies and independent samples.

Results: Of the 150 included studies, 130 (86.7%) detected response shift effects. Of the 4868 effects investigated, 793 (16.3%) revealed response shift. Effect sizes could be determined for 105 (70.0%) of the studies for a total of 1130 effects, of which 537 (47.5%) resulted in detection of response shift. Whereas effect sizes varied widely, most median recalibration effect sizes (Cohen’s d) were between 0.20 and 0.30 and median reprioritization/reconceptualization effect sizes rarely exceeded 0.15, across the characteristics. Similar results were obtained from unrelated studies.

Conclusion: The results draw attention to the need to focus on understanding variability in response shift results: Who experience response shifts, to what extent, and under which circumstances?
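For orientation on the scale the review summarises (median recalibration effects of roughly 0.20–0.30, i.e., small by conventional benchmarks), Cohen's d is simply a mean difference standardized by a pooled standard deviation. A minimal sketch with entirely made-up scores (the then-test vs. pretest comparison is one common recalibration design; the numbers are not from any study in the review):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Made-up example scores: retrospective 'then-test' ratings vs. the
# original pretest ratings of the same construct.
then_test = [60, 55, 62, 58, 64, 59]
pretest   = [55, 52, 57, 54, 60, 56]
print(round(cohens_d(then_test, pretest), 2))
```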
