Lasse Elsemüller

65 Followers
182 Following
23 Posts
PhD candidate in statistical modeling @ Heidelberg University. Interested in deep learning, Bayesian stats & cognitive modeling.

Our new short paper on Amortized Bayesian Workflow is out!✨

We developed an adaptive workflow that combines the speed of amortized inference with the reliability of MCMC on thousands of datasets.

🔗Link: https://arxiv.org/abs/2409.04332

The whole is more than the sum of its parts 🧵👇

Amortized Bayesian Workflow

Bayesian inference often faces a trade-off between computational speed and sampling accuracy. We propose an adaptive workflow that integrates rapid amortized inference with gold-standard MCMC techniques to achieve a favorable combination of both speed and accuracy when performing inference on many observed datasets. Our approach uses principled diagnostics to guide the choice of inference method for each dataset, moving along the Pareto front from fast amortized sampling via generative neural networks to slower but guaranteed-accurate MCMC when needed. By reusing computations across steps, our workflow synergizes amortized and MCMC-based inference. We demonstrate the effectiveness of this integrated approach on several synthetic and real-world problems with tens of thousands of datasets, showing efficiency gains while maintaining high posterior quality.

arXiv.org
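The per-dataset triage the abstract describes can be sketched in a few lines. This is my own toy illustration, not the paper's code: the samplers are stubs, and the diagnostic is a stand-in for the principled diagnostics (e.g. Pareto-k estimates) the workflow actually uses; all function names and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def amortized_posterior(y):
    # Stand-in for a fast draw from a trained neural posterior approximator.
    return rng.normal(loc=y.mean(), scale=1.0, size=500)

def mcmc_posterior(y):
    # Stand-in for slower, asymptotically exact MCMC sampling.
    return rng.normal(loc=y.mean(), scale=1.0, size=500)

def diagnostic(y, draws):
    # Toy stand-in for a principled diagnostic (e.g. a Pareto-k estimate):
    # here it simply flags datasets far from a typical training range.
    return 0.9 if abs(y.mean()) > 2.0 else 0.3

def adaptive_workflow(datasets, threshold=0.7):
    results, n_fallback = {}, 0
    for i, y in enumerate(datasets):
        draws = amortized_posterior(y)        # fast path first
        if diagnostic(y, draws) > threshold:  # diagnostic failed ...
            draws = mcmc_posterior(y)         # ... fall back to MCMC
            n_fallback += 1
        results[i] = draws
    return results, n_fallback

# Two "easy" datasets pass the check; the outlier is routed to MCMC.
datasets = [rng.normal(loc=m, scale=1.0, size=50) for m in (0.0, 0.5, 5.0)]
results, n_fallback = adaptive_workflow(datasets)
```

The point of the control flow is that the expensive sampler only runs for the datasets where the cheap one is flagged as unreliable, which is what makes the workflow scale to thousands of datasets.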

Our work on sensitivity-aware amortized Bayesian inference is now published in #TMLR: https://openreview.net/forum?id=Kxtpa9rvM0

TL;DR: Statistical analyses involve countless choices, but systematically evaluating the impact of these choices quickly becomes infeasible for complex models. Our framework enables amortized and thus efficient sensitivity analyses for all major choices in a (simulation-based) Bayesian workflow.

@ho @MarvinSchmitt @paul_buerkner

Sensitivity-Aware Amortized Bayesian Inference

Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses. While theoretically appealing, they are overwhelmingly inefficient for complex...

OpenReview

Nice blog post on informative priors for correlation matrices: a joint prior combining the LKJ prior (to ensure positive semi-definiteness) and a normal prior (to inform the magnitudes of individual correlations).

http://srmart.in/informative-priors-for-correlation-matrices-an-easy-approach/

@smartin2018, did you ever finish the short paper on this, which Sean mentions in the comments of the blog post?

#bayes #bayesian

Informative priors for correlation matrices: An easy approach | Stephen R. Martin, PhD

Stephen R. Martin, PhD
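The joint prior described above can be written down as an unnormalized log-density. The sketch below is my own numpy illustration, not code from the blog post: the LKJ term is the standard (eta - 1) * log det(R) kernel, and the hyperparameters mu and sigma of the normal term on the off-diagonal correlations are illustrative.

```python
import numpy as np

def lkj_log_density(R, eta=1.0):
    """Unnormalized LKJ log-density: (eta - 1) * log det(R)."""
    sign, logdet = np.linalg.slogdet(R)
    if sign <= 0:
        return -np.inf  # not positive definite -> zero prior mass
    return (eta - 1.0) * logdet

def joint_log_prior(R, eta=1.0, mu=0.0, sigma=0.5):
    """LKJ term keeps R a valid correlation matrix; independent normal
    terms on the off-diagonal correlations inform their magnitudes."""
    lkj = lkj_log_density(R, eta)
    lower = R[np.tril_indices_from(R, k=-1)]  # unique correlations
    normal = -0.5 * np.sum(((lower - mu) / sigma) ** 2)
    return lkj + normal
```

With eta = 1 the LKJ term is flat over valid correlation matrices, so all the shrinkage toward mu comes from the normal term; invalid matrices get -inf.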

I finally wanted to understand a bit more about generative adversarial networks by building the smallest one I could possibly think of using #torch (!!)

Here is a small (hastily written, for lack of time) blog post about it: https://erikjanvankesteren.nl/blog/tiny_gan
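For readers who want the idea without torch: here is a comparably tiny GAN written from scratch in numpy. This is my own sketch, not the code from the blog post. The generator is a scalar affine map, the discriminator a logistic regression, and both are trained with plain alternating gradient steps (the generator uses the non-saturating objective).

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 0.0      # generator: g(z) = a*z + b, with z ~ N(0, 1)
w, c = 0.1, 0.0      # discriminator: d(x) = sigmoid(w*x + c)
lr, n_steps, batch = 0.05, 2000, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(n_steps):
    real = rng.normal(3.0, 1.0, batch)      # target distribution N(3, 1)
    z = rng.normal(size=batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator: gradient ascent on the non-saturating objective log d(fake).
    s_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# After training, generated samples should be centered near the target mean.
samples = a * rng.normal(size=5000) + b
```

The two-parameter generator makes the minimax dynamics visible without any deep-learning machinery: the generator's offset b is pushed toward the data mean exactly as long as the discriminator can still tell the two distributions apart.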

The hot mess theory of AI misalignment (+ an experiment!)
https://sohl-dickstein.github.io/2023/03/09/coherence.html

There are two ways an AI could be misaligned. It could monomaniacally pursue the wrong goal (supercoherence), or it could act in ways that don't pursue any consistent goal (hot mess).

The hot mess theory of AI misalignment: More intelligent agents behave less coherently

This blog is intended to be a place to share ideas and results that are too weird, incomplete, or off-topic to turn into an academic paper, but that I think may be important. Let me know what you think! Contact links to the left.

Jascha’s blog

SMiP Summer School 2023 (26-30 June) at the University of Mannheim

Targeted at young researchers in psychology with a special interest in statistical modeling and quantitative methods. Participants of the Summer School can attend one of the following workshops:

- Dynamic longitudinal modeling
- An introduction to cognitive modeling
- Modeling heterogeneity of response processes in item response theory
- Multilevel measurement models

https://www.uni-mannheim.de/smip-summerschool/

SMIP-summerschool | Universität Mannheim

I've written a primer on the "Within/Between Problem" of Psychology.

The pre-print is available here: https://psyarxiv.com/7zgkx/

And the accompanying app here:
https://utrecht-university.shinyapps.io/withinbetweenapp/

Comparing Bayesian hierarchical models can be challenging, especially when not all models have tractable likelihoods. Martin Schnuerch, Paul Bürkner, Stefan Radev and I developed a deep learning method to compare hierarchical models via Bayes factors or posterior model probabilities.

You can find the preprint with associated code at https://arxiv.org/abs/2301.11873.
We are now working on making our method available in the #BayesFlow Python library for amortized Bayesian inference.

A Deep Learning Method for Comparing Bayesian Hierarchical Models

Bayesian model comparison (BMC) offers a principled approach for assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.

arXiv.org
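For context on the quantities the method targets: once (log) marginal likelihoods are available, Bayes factors and posterior model probabilities follow mechanically. Below is a small numpy illustration of that standard relationship — not code from the paper or from #BayesFlow, just the textbook definitions in numerically stable form.

```python
import numpy as np

def posterior_model_probs(log_evidence, log_prior=None):
    """Posterior model probabilities from per-model log marginal
    likelihoods, via a numerically stable softmax."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_evidence)  # uniform model prior
    logp = log_evidence + log_prior
    logp -= logp.max()          # stabilize the exponentials
    p = np.exp(logp)
    return p / p.sum()

def bayes_factor(log_evidence_1, log_evidence_2):
    """BF_12 = p(y | M1) / p(y | M2), computed from log evidences."""
    return np.exp(log_evidence_1 - log_evidence_2)
```

Subtracting the maximum log-probability before exponentiating avoids overflow, which matters because log evidences of hierarchical models are often large in magnitude.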

New in Behaviormetrika with Craig Stark: multinomial processing tree models of several versions of the Mnemonic Similarity Task, all implemented as Bayesian graphical models in JAGS. The basic models are then extended hierarchically and with latent mixtures to measure pattern separation in a fine-grained way and capture subgroups of individual differences.

https://rdcu.be/c3Tnr

Bayesian modeling of the Mnemonic Similarity Task using multinomial processing trees
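For readers unfamiliar with the model class: a multinomial processing tree assigns category probabilities by multiplying branch probabilities along a tree of latent cognitive events. A minimal classic example is the one-high-threshold recognition model sketched below — my own illustration of the model class, not the task-specific hierarchical JAGS models from the paper.

```python
import numpy as np

def one_ht_probs(r, g):
    """Category probabilities of a one-high-threshold MPT:
    r = probability of detecting an old item, g = probability of
    guessing 'old' when no detection occurs."""
    return {
        "hit": r + (1 - r) * g,           # old item: detect, or guess 'old'
        "miss": (1 - r) * (1 - g),        # old item: no detect, guess 'new'
        "false_alarm": g,                 # new item: guess 'old'
        "correct_rejection": 1 - g,       # new item: guess 'new'
    }

def log_likelihood(counts, r, g):
    """Multinomial log-likelihood (up to a constant) of response counts."""
    p = one_ht_probs(r, g)
    return sum(counts[k] * np.log(p[k]) for k in counts)
```

The hierarchical and latent-mixture extensions mentioned in the post keep this tree structure but let the branch parameters vary across participants and subgroups.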

Interesting #Nature paper:

Large teams develop and small teams disrupt science and technology
https://www.nature.com/articles/s41586-019-0941-9

"Work from larger teams builds on more-recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future—if at all."

Large teams develop and small teams disrupt science and technology - Nature

Analyses of the output produced by large versus small teams of researchers and innovators demonstrate that their work differs systematically in the extent to which it disrupts or develops existing science and technology.

Nature