'Integral Probability Metrics Meet Neural Networks: The Radon-Kolmogorov-Smirnov Test', by Seunghoon Paik, Michael Celentano, Alden Green, Ryan J. Tibshirani.
http://jmlr.org/papers/v26/24-0245.html
#nonparametric #distributions #kolmogorov
'Learning causal graphs via nonlinear sufficient dimension reduction', by Eftychia Solea, Bing Li, Kyongwon Kim.
http://jmlr.org/papers/v26/24-0048.html
#causal #nonparametric #observational
'From Sparse to Dense Functional Data in High Dimensions: Revisiting Phase Transitions from a Non-Asymptotic Perspective', by Shaojun Guo, Dong Li, Xinghao Qiao, Yizhu Wang.
http://jmlr.org/papers/v26/23-1578.html
#sparse #nonparametric #smoothing
'Deep Nonparametric Quantile Regression under Covariate Shift', by Xingdong Feng, Xin He, Yuling Jiao, Lican Kang, Caixing Wang.
http://jmlr.org/papers/v25/24-0906.html
#quantile #nonparametric #reweighted
'On the Optimality of Gaussian Kernel Based Nonparametric Tests against Smooth Alternatives', by Tong Li, Ming Yuan.
http://jmlr.org/papers/v25/20-1228.html
#nonparametric #gaussian #kernels
#statstab #214 Two-sample Mann–Whitney U Test
Thoughts: Less about the non-parametric test and more about this awesome site. It covers the test hypotheses, checking assumptions, AND full reporting, with plots and effect sizes!
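For a quick sanity check of what a site like that walks through, here is a minimal pure-Python sketch of the two-sample Mann–Whitney U statistic with a normal-approximation p-value. The function name and example data are mine, not from the linked page, and the variance term omits the tie correction, so treat it as a sketch rather than a reference implementation.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Returns (U, p). Ties count 0.5 toward U; the variance below omits
    the tie correction, so p is only approximate with many ties.
    """
    n1, n2 = len(x), len(y)
    # U = number of (x_i, y_j) pairs with x_i > y_j, ties counted half
    u = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return u, p

u, p = mann_whitney_u([1.3, 2.1, 2.8, 3.4], [3.0, 3.9, 4.6, 5.2])
```

In practice you would reach for `scipy.stats.mannwhitneyu`, which also handles exact p-values and tie corrections; the point here is just how little machinery the statistic itself needs.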
#statstab #210 Effect Sizes for ANOVAs {effectsize}
Thoughts: ANOVAs are rarely what ppl want to report, but if it is, then report an effect size! Just mind the % for the CIs 😉
#ANOVA #effectsize #APA #reporting #nonparametric #eta2 #ordinal
https://easystats.github.io/effectsize/articles/anovaES.html
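The {effectsize} vignette is R, but the classical one-way eta-squared it reports is just SS_between / SS_total. A minimal pure-Python sketch, with toy data of my own (not from the vignette):

```python
def eta_squared(groups):
    """Classical eta^2 for a one-way design: SS_between / SS_total.

    `groups` is a list of lists, one inner list of scores per group.
    """
    values = [v for g in groups for v in g]
    grand = sum(values) / len(values)
    # Total sum of squares around the grand mean
    ss_total = sum((v - grand) ** 2 for v in values)
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total

e2 = eta_squared([[1, 2, 3], [4, 5, 6]])
```

This is the uncorrected eta²; {effectsize} also offers partial and bias-corrected variants (omega², epsilon²), which is part of why the CI conventions it uses deserve the wink above.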
'Distribution Learning via Neural Differential Equations: A Nonparametric Statistical Perspective', by Youssef Marzouk, Zhi (Robert) Ren, Sven Wang, Jakob Zech.
http://jmlr.org/papers/v25/23-1280.html
#distributions #nonparametric #entropy
#statstab #157 Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data
Thoughts: You can never have enough (confusing) effect size measures. At least make them appropriate for your data.
In psychological science, the “new statistics” refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7–29, 2014). In a two-independent-samples scenario, Cohen’s (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs—the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317–328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386–401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494–509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19–30, 2008)—may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
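The common-language ES and its nonparametric cousin A_w are, at heart, probability-of-superiority statistics: the chance that a random score from one group beats a random score from the other, ties counted half. Here is a minimal Python sketch of that basic pairwise estimator; whether it matches Ruscio's exact A_w computation is an assumption on my part, so take it as the idea rather than the published estimator.

```python
def prob_superiority(x, y):
    """Estimate A = P(X > Y) + 0.5 * P(X = Y) over all (x_i, y_j) pairs.

    0.5 means no separation between groups; values near 0 or 1 mean
    strong separation. No distributional assumptions are needed.
    """
    pairs = len(x) * len(y)
    # Count wins for x, with ties worth half a win
    wins = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return wins / pairs

a = prob_superiority([1, 2, 3], [2, 3, 4])
```

Because it only uses order comparisons, this kind of ES is insensitive to the non-normality and variance heterogeneity that the simulation study above found to break Cohen's d.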