#statstab #466 Bayesian workflow: Prior determination, predictive checks and sensitivity analyses
Thoughts: Maintaining a good Bayesian workflow can be challenging with complex models.
#priors #bayesian #sensitivityanalysis #posterior #ppc #brms
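The prior predictive check at the heart of this kind of workflow can be sketched in plain Python; the linear model and the Normal priors below are illustrative assumptions, not taken from the article:

```python
import random
import statistics

random.seed(1)

# Illustrative model: y = a + b*x + noise, with hypothetical priors
# a ~ Normal(0, 10), b ~ Normal(0, 1), sigma ~ |Normal(0, 1)|
def prior_predictive_draw(xs):
    a = random.gauss(0, 10)
    b = random.gauss(0, 1)
    sigma = abs(random.gauss(0, 1))
    return [a + b * x + random.gauss(0, sigma) for x in xs]

xs = [0, 1, 2, 3, 4, 5]
sims = [prior_predictive_draw(xs) for _ in range(1000)]

# Inspect the implied outcome scale: if these ranges are absurd for the
# domain (e.g. reaction times of -40 s), the priors need rethinking.
all_y = [y for sim in sims for y in sim]
cuts = statistics.quantiles(all_y, n=40)
print("prior predictive 2.5%/97.5% quantiles:", cuts[0], cuts[-1])
```

In brms the same check is one call (`pp_check` on a model fit with `sample_prior = "only"`); the sketch just shows what that call is doing.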
#statstab #459 Getting Comfortable with Expressing Beliefs as Distributions
Thoughts: Bayesian stats requires a good understanding of priors, but these are often unintuitive. Plots help.
#bayesian #priors #ggplot #r #dataviz #learning #education
https://brian-lookabaugh.github.io/website-brianlookabaugh/blog/2025/priors-distributions/
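The exercise the post walks through can be sketched in Python rather than its R/ggplot (the verbal belief here is a made-up example): translate "the effect is probably small, almost surely between -1 and 1" into a Normal prior and read off its implied quantiles:

```python
from statistics import NormalDist

# Hypothetical belief: effect near 0, ~95% sure it lies in (-1, 1).
# A Normal(0, 0.5) prior puts its central 95% interval at about +/- 0.98.
prior = NormalDist(mu=0, sigma=0.5)

for p in (0.025, 0.25, 0.5, 0.75, 0.975):
    print(f"{p:>5.1%} quantile: {prior.inv_cdf(p):+.2f}")

# If the printed interval clashes with what you actually believe,
# adjust mu/sigma (or the family) and look again.
```

Plotting the density, as the post does, makes the same point visually; the quantiles are the quickest numerical sanity check.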
#statstab #445 What are credible priors and what are skeptical priors?
Thoughts: An excellent thread on prior elicitation by some of the big names in the field (frequentist and Bayesian).
#priors #bayesian #bayes #likelihood #medicine #clinical #debate
https://discourse.datamethods.org/t/what-are-credible-priors-and-what-are-skeptical-priors/580
A few weeks ago Dan Scharfstein asked a group of colleagues about how to report an odds ratio of 1.70 with 95% confidence limits of 0.96 and 3.02. Back-calculating from these statistics gives a two-sided P of 0.06 or 0.07, corresponding to an S-value (surprisal, log base 2 of P) of about 4 bits of information against the null hypothesis of OR=1. So, not much evidence against the null from the result, but still favoring a positive association over an inverse one, and so thought worthy of reportin...
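The back-calculation described in the quote can be reproduced in a few lines, using the standard normal approximation on the log-odds scale:

```python
from math import log, log2
from statistics import NormalDist

or_hat, lo, hi = 1.70, 0.96, 3.02

# 95% CI limits are symmetric on the log scale: half-width = 1.96 * SE
se = (log(hi) - log(lo)) / (2 * 1.96)
z = log(or_hat) / se
p = 2 * (1 - NormalDist().cdf(z))  # two-sided p-value
s = -log2(p)                       # S-value: bits of information against OR = 1

print(f"z = {z:.2f}, p = {p:.3f}, S = {s:.1f} bits")
```

This recovers the two-sided P of about 0.07 and roughly 4 bits of surprisal quoted in the thread.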
#statstab #427 Incorporating Historical Control Data Into an RCT
Thoughts: In frequentist stats, historical controls are a dangerous proposition. But in Bayesian analyses, they may serve a more useful purpose.
Historical data (HD) are being used increasingly in Bayesian analyses when it is difficult to randomize enough patients to study effectiveness of a treatment. Such analyses summarize observational studies' posterior effectiveness distribution (for two-arm HD) or standard-of-care outcome distribution (for one-arm HD) then turn that into a prior distribution for an RCT. The prior distribution is then flattened somewhat to discount the HD. Since Bayesian modeling makes it easy to fit multiple models at once, incorporation of the raw HD into the RCT analysis and discounting HD by explicitly modeling bias is perhaps a more direct approach than lowering the effective sample size of HD. Trust the HD sample size but not what the HD is estimating, and realize several benefits from using raw HD in the RCT analysis instead of relying on HD summaries that may hide uncertainties.
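The "flattening" of an HD-derived prior is commonly done with a power prior; a conjugate beta-binomial sketch (the counts and discount factor a0 are invented for illustration):

```python
# Power prior for a one-arm binomial outcome: raise the historical
# likelihood to a0 in [0, 1] to discount it. With a Beta(1, 1) initial
# prior, the RCT-arm prior becomes Beta(1 + a0*y_h, 1 + a0*(n_h - y_h)).
def power_prior_beta(y_h, n_h, a0):
    return 1 + a0 * y_h, 1 + a0 * (n_h - y_h)

# Hypothetical historical data: 120 responders out of 400 controls.
y_h, n_h = 120, 400

for a0 in (0.0, 0.5, 1.0):
    a, b = power_prior_beta(y_h, n_h, a0)
    ess = a + b  # effective prior sample size
    print(f"a0={a0:.1f}: Beta({a:.0f}, {b:.0f}), "
          f"prior mean={a / (a + b):.3f}, ESS~{ess:.0f}")
```

The post argues for going further: model the HD bias explicitly (a bias parameter with its own prior on the raw HD) rather than shrinking the effective sample size as this baseline approach does.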
#statstab #402 On Bayes factors for hypothesis tests {Bayes Factor}
Thoughts: On Bluesky there were renewed debates about BFs. This paper provides "better" priors (a mixture of t distributions centred on the effect size). It also covers p-value-based BFs.
#bayesian #bayesfactor #priors #cohend
https://link.springer.com/article/10.3758/s13423-024-02612-2
We develop alternative families of Bayes factors for use in hypothesis tests as alternatives to the popular default Bayes factors. The alternative Bayes factors are derived for the statistical analyses most commonly used in psychological research: one-sample and two-sample t tests, regression, and ANOVA analyses. They possess the same desirable theoretical and practical properties as the default Bayes factors and satisfy additional theoretical desiderata while mitigating two features of the default priors that we consider implausible. They can be conveniently computed via an R package that we provide. Furthermore, hypothesis tests based on Bayes factors and those based on significance tests are juxtaposed. This discussion leads to the insight that default Bayes factors as well as the alternative Bayes factors are equivalent to test-statistic-based Bayes factors as proposed by Johnson (2005, Journal of the Royal Statistical Society Series B: Statistical Methodology, 67, 689–701). We highlight test-statistic-based Bayes factors as a general approach to Bayes-factor computation that is applicable to many hypothesis-testing problems for which an effect-size measure has been proposed and for which test power can be computed.
'Posterior Concentrations of Fully-Connected Bayesian Neural Networks with General Priors on the Weights', by Insung Kong, Yongdai Kim.
http://jmlr.org/papers/v26/24-0425.html
#priors #sparse #bnn