Bayesian Data Analysis, Third edition [pdf]
https://sites.stat.columbia.edu/gelman/book/BDA3.pdf
#HackerNews #BayesianDataAnalysis #DataScience #PDF #Statistics #MachineLearning
The lecturer: 'Let's take a look at how we do that. This is going to be surprisingly easy.'
The lecture slides:
(No, but seriously, this is a great course for those aspiring to a better understanding of #bayesiandataanalysis.)
Computing Bayes factors is difficult in general. But for constraint hypotheses of the form "parameter > number", there is a nice method: _encompassing models_. It's simple, elegant and powerful; in other words, it's beautifully Bayesian. 🤩
Here is an overview: tinyurl.com/3pxs2pc9
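A minimal sketch of the idea, assuming a beta-binomial model and the hypothetical constraint "theta > 0.5": under the encompassing-prior approach (Klugkist & Hoijtink), the Bayes factor of the constrained model against the unconstrained (encompassing) model is just the posterior probability mass satisfying the constraint divided by the prior mass satisfying it. The function names and the example data below are made up for illustration.

```python
import random

random.seed(42)

def mass_above(c, a, b, draws=100_000):
    """Monte Carlo estimate of P(theta > c) under a Beta(a, b) distribution."""
    return sum(random.betavariate(a, b) > c for _ in range(draws)) / draws

def encompassing_bf(k, n, c=0.5, a=1.0, b=1.0):
    """Bayes factor of the constrained model 'theta > c' vs. the
    encompassing model: posterior mass above c / prior mass above c."""
    prior_mass = mass_above(c, a, b)                # mass under Beta(a, b) prior
    post_mass = mass_above(c, a + k, b + n - k)     # mass under conjugate posterior
    return post_mass / prior_mass

# 70 successes in 100 trials: with a flat Beta(1, 1) prior the prior mass
# above 0.5 is ~0.5 and the posterior mass is near 1, so the BF is close to 2.
bf = encompassing_bf(70, 100)
```

Note that no marginal likelihoods need to be computed: two draws-based proportions (or, for conjugate models, two CDF evaluations) suffice, which is what makes the method so attractive for order constraints.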
Abstract. Words of estimative probability (WEPs), such as ‘possible’ and ‘a good chance’, provide an efficient means for expressing probability under uncertainty. Current semantic theories assume that WEPs denote crisp thresholds on the probability scale, but experimental data indicate that their use is characterised by gradience and focality. Here, we implement and compare computational models of the use of WEPs to explain novel production data. We find that, among models incorporating cognitive limitations and assumptions about goal-directed speech, a model that implements a threshold-based semantics explains the data as well as a model that semantically encodes patterns of gradience and focality. We further validate the model by distinguishing between participants with more or fewer autistic traits, as measured with the Autism Spectrum Quotient test. These traits include communicative difficulties. We show that these difficulties are reflected in the rationality parameter of the model, which modulates the probability that the speaker selects the pragmatically optimal message.