Gauguin, Descartes, Bayes: A Diurnal Golem’s Brain
https://dl.acm.org/doi/pdf/10.1145/3759429.3762631
#stats #probprog
Let me tell you about the most frustrating part of Bayesian modeling. The first models you build often make bad assumptions or contain bugs. Either can make the already expensive step of drawing posterior samples with Markov chain Monte Carlo (MCMC) unbearably slow, yet those samples are often the best way to check whether the model makes sense. So we draw samples, encode better assumptions, draw more samples, fix some bugs, draw more samples, check our error model against the laboratory equipment, rinse and repeat. Gradually we move toward higher-quality, more useful models, which can often be sampled much faster. This link between modeling problems and computational problems is known as the folk theorem of statistical computing.
If you're at #BayesComp2023 and see me, say hi! I especially like talking about #ProbProg, #JuliaLang, @TuringLang, @ArviZ, and how bad I am at skiing!
Tonight I'm presenting a poster about using Pathfinder.jl to initialize HMC and diagnose computational issues.
🚨 New #JuliaLang package! StanLogDensityProblems.jl is a minimal package that implements the LogDensityProblems.jl interface for @mcmc_stan models, built on BridgeStan.jl. It also integrates with PosteriorDB.jl, which makes it easy to benchmark a new inference method against a large collection of models. #ProbProg #MCMCStan
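For context, the LogDensityProblems.jl interface boils down to a few functions (dimension, log density, and optionally its gradient), so any sampler written against that interface can run on a wrapped Stan model. Here's a hedged sketch of what that might look like; the `StanProblem` constructor taking a PosteriorDB posterior is my assumption about the package's API, not something confirmed above, so check the README for the actual entry points:

```julia
# Sketch only: assumes StanLogDensityProblems exports a `StanProblem`
# constructor that compiles a Stan model (via BridgeStan) from a
# PosteriorDB posterior. Requires a working Stan/BridgeStan install.
using LogDensityProblems, PosteriorDB, StanLogDensityProblems

pdb = PosteriorDB.database()
post = PosteriorDB.posterior(pdb, "eight_schools-eight_schools_centered")
prob = StanProblem(post)  # hypothetical: wrap model + data as a log-density problem

# The generic LogDensityProblems.jl interface, usable by any compatible sampler:
d = LogDensityProblems.dimension(prob)        # number of unconstrained parameters
x = randn(d)                                  # a point in unconstrained space
lp = LogDensityProblems.logdensity(prob, x)   # log density at x
lp, g = LogDensityProblems.logdensity_and_gradient(prob, x)  # with gradient
```

Because every PosteriorDB entry pairs a model with data and reference draws, looping this wrapper over the database is what makes large-scale benchmarking of a new inference method straightforward.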
👋 This is my first time attending @NeuripsConf (virtually to reduce carbon emissions).
On Friday I'll join the workshop "Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems," where we have a paper, poster, and lightning talk on GPs for modeling #paleoclimate.
If you're attending and want to chat about #GaussianProcesses, probabilistic programming (#ProbProg), or @ArviZ, ping me!
Soon Turing.jl users will be able to natively store all sampling outputs in an @ArviZ InferenceData object.
To experiment with the bleeding edge, check out https://github.com/sethaxen/DynamicPPLInferenceObjects.jl!