#statstab #514 A puzzle of proportions
Thoughts: "Two popular Bayesian tests can yield dramatically different conclusions"
Model specification is important.
#statstab #486 Testing Bayesian Informative Hypotheses in Five Steps With JASP and R {bain}
Thoughts: The bain module lets you go beyond "effect vs. no effect" by specifying contrasts (informative hypotheses) & obtaining fractional BFs.
#bain #jasp #bayesfactor #bayesian #rstats #r #hypothesis #nhbt #BF #methods #tutorial #guide
https://share.google/cTDvBO7SQM9CpNqlU
#statstab #467 Hypothesis testing, model selection, model comparison: some thoughts
Thoughts: An excellent (but too short) discussion of Bayesian inference.
#bayesian #bayesfactor #modelselection #inference #NHBT #BF #ROPE #primer
EDIT: This was an attempt to write guidance. It turns out I stepped quite far out of my depth and the text sounded much more conclusive than it should. I think it is correct to currently just classify it as "some thoughts" rather than guidance. I still think it is useful to have a place to list possible approaches, but the text definitely needs more work. Sorry for the confusion. Coming from a classical statistics background, Stan users often want to be able to test some sort of null hypothesis. S...
#statstab #453 {Bayes Power}
A General Application of Power and Sample Size Calculation for the Bayes Factors
Thoughts: Blending frequentist notions of power with Bayes factor hypothesis testing.
#poweranalysis #bayesian #bayesfactor #errorrate #rstats #nhbt
#statstab #443 Dienes Bayes factor calculator
Thoughts: Dienes presents a different way to compute BFs, using the sample data to inform the H1 model. But this can be seen as an acceptable form of double-dipping.
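For a sense of what the calculator does, here is a minimal numerical sketch of a Dienes-style BF (Dienes, 2014): the observed effect's likelihood is approximated as normal, and H1 is modelled as a half-normal scaled by the effect you'd roughly expect. Function names and defaults below are illustrative, not the calculator's actual interface.

```python
# Sketch of a Dienes-style Bayes factor: normal approximation to the
# likelihood of the observed effect, half-normal model for H1.
from scipy.integrate import quad
from scipy.stats import norm

def dienes_bf(obs_effect, se, h1_sd):
    """BF10 for an observed effect (with standard error se) against a point
    null, where H1 says the true effect is half-normal(0, h1_sd), positive side."""
    def integrand(delta):
        prior = 2 * norm.pdf(delta, 0, h1_sd)  # half-normal density, delta >= 0
        return norm.pdf(obs_effect, delta, se) * prior
    m1, _ = quad(integrand, 0, 10 * h1_sd)  # marginal likelihood under H1
    m0 = norm.pdf(obs_effect, 0, se)        # likelihood under the point null
    return m1 / m0

# e.g. observed mean difference 0.5, SE 0.25, expecting effects around 0.5:
bf = dienes_bf(0.5, 0.25, 0.5)
```

The "double-dipping" point is visible here: `se` comes from the sample, and in practice `h1_sd` is often informed by the same literature (or data) that produced the observed effect.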
#statstab #402 On Bayes factors for hypothesis tests {emBayes Factor}
Thoughts: On bsky there were renewed debates about BFs. This paper provides "better" priors (a mixture-t centred on the effect size). Also some p-value-based BFs.
#bayesian #bayesfactor #priors #cohend
https://link.springer.com/article/10.3758/s13423-024-02612-2
We develop alternative families of Bayes factors for use in hypothesis tests as alternatives to the popular default Bayes factors. The alternative Bayes factors are derived for the statistical analyses most commonly used in psychological research – one-sample and two-sample t tests, regression, and ANOVA analyses. They possess the same desirable theoretical and practical properties as the default Bayes factors and satisfy additional theoretical desiderata while mitigating against two features of the default priors that we consider implausible. They can be conveniently computed via an R package that we provide. Furthermore, hypothesis tests based on Bayes factors and those based on significance tests are juxtaposed. This discussion leads to the insight that default Bayes factors as well as the alternative Bayes factors are equivalent to test-statistic-based Bayes factors as proposed by Johnson. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67, 689–701. (2005). We highlight test-statistic-based Bayes factors as a general approach to Bayes-factor computation that is applicable to many hypothesis-testing problems for which an effect-size measure has been proposed and for which test power can be computed.
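For context, the default Bayes factor these alternatives are benchmarked against is the JZS BF (Rouder et al., 2009), which reduces to a one-dimensional integral. A sketch for the one-sample t test, assuming the usual Cauchy prior scale r = sqrt(2)/2 (this is my own translation of the published formula, not the paper's package):

```python
# Default JZS Bayes factor for a one-sample t test (Rouder et al., 2009):
# integrate the marginal likelihood over g with its inverse-gamma(1/2, r^2/2) prior.
from math import sqrt, exp, gamma
from scipy.integrate import quad

def jzs_bf10(t, n, r=sqrt(2) / 2):
    nu = n - 1
    def integrand(g):
        # Inverse-gamma(1/2, r^2/2) density for g
        prior = (r**2 / 2) ** 0.5 / gamma(0.5) * g ** (-1.5) * exp(-r**2 / (2 * g))
        like = (1 + n * g) ** -0.5 * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        return like * prior
    numer, _ = quad(integrand, 0, float("inf"))
    denom = (1 + t**2 / nu) ** (-(nu + 1) / 2)  # likelihood at the point null
    return numer / denom
```

At t = 0 the BF favours the null (BF10 < 1), and it grows with |t|, which is the behaviour both the default and the alternative families share.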
#statstab #382 The JASP Guidelines for Conducting and Reporting a Bayesian Analysis
Thoughts: @JASPStats is often people's first attempt at Bayesian statistics. But proper inference and reporting is crucial.
#JASP #Bayesian #BayesFactor #guide #tutorial
https://link.springer.com/article/10.3758/s13423-020-01798-5
Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.
#statstab #360 Bayes Factor Design Analysis {bfda}
Thoughts: Sample size planning is confusing at first in the Bayesian framework. But BFDA is the quick answer.
#statstab #359 A Pragmatic Approach to Statistical Testing and Estimation (PASTE)
Thoughts: A (basic) guide to some alternatives to p-values: Bayesian posterior intervals, Bayes factors, and AIC.
The p-value has dominated research in education and related fields and a statistically non-significant p-value is quite commonly interpreted as ‘confirming’ the null hypothesis (H0) of ‘equivalence’. This is unfortunate, because p-values are not fit for that purpose. This paper discusses three alternatives to the traditional p-value that unfortunately have remained underused but can provide evidence in favor of ‘equivalence’ relative to ‘non-equivalence’: two one-sided tests (TOST) equivalence testing, Bayesian hypothesis testing, and information criteria. TOST equivalence testing and p-values both rely on concepts of statistical significance testing and can both be done with confidence intervals, but treat H0 and the alternative hypothesis (H1) differently. Bayesian hypothesis testing and the Bayesian credible interval aka posterior interval provide Bayesian alternatives to traditional p-values, TOST equivalence testing, and confidence intervals. However, under conditions outlined in this paper, confidence intervals and posterior intervals may yield very similar interval estimates. Moreover, Bayesian hypothesis testing and information criteria provide fairly easy to use alternatives to statistical significance testing when multiple competing models can be compared. Based on these considerations, this paper outlines a pragmatic approach to statistical testing and estimation (PASTE) for research in education and related fields. In a nutshell, PASTE states that all of the alternatives to p-values discussed in this paper are better than p-values, that confidence intervals and posterior intervals may both provide useful interval estimates, and that Bayesian hypothesis testing and information criteria should be used when the comparison of multiple models is concerned.
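Of the alternatives the paper lists, TOST is the easiest to show concretely: run two one-sided tests against the equivalence bounds and declare equivalence only if both reject. A hypothetical one-sample sketch using scipy's `ttest_1samp` with its `alternative` argument:

```python
# Two one-sided tests (TOST) for equivalence of a mean to 0 within +/- delta.
import numpy as np
from scipy.stats import ttest_1samp

def tost_one_sample(x, delta, alpha=0.05):
    """Reject 'non-equivalence' only if the mean is significantly above -delta
    AND significantly below +delta; the TOST p-value is the larger of the two."""
    p_lower = ttest_1samp(x, -delta, alternative="greater").pvalue
    p_upper = ttest_1samp(x, delta, alternative="less").pvalue
    p = max(p_lower, p_upper)
    return p, p < alpha

# A sample tightly clustered around 0 should come out 'equivalent' within +/- 0.3:
x = np.concatenate([np.full(50, -0.05), np.full(50, 0.05)])
p, equivalent = tost_one_sample(x, delta=0.3)
```

As the abstract notes, the same decision can be read off a confidence interval: equivalence at level alpha holds when the (1 - 2*alpha) CI falls entirely inside (-delta, +delta).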