Lasse Elsemüller

65 Followers
182 Following
23 Posts
PhD candidate in statistical modeling @ Heidelberg University. Interested in deep learning, Bayesian stats & cognitive modeling.

Our work on sensitivity-aware amortized Bayesian inference is now published in #TMLR: https://openreview.net/forum?id=Kxtpa9rvM0

TL;DR: Statistical analyses involve countless choices, but systematically evaluating the impact of these choices quickly becomes infeasible for complex models. Our framework enables amortized and thus efficient sensitivity analyses for all major choices in a (simulation-based) Bayesian workflow.

@ho @MarvinSchmitt @paul_buerkner

Sensitivity-Aware Amortized Bayesian Inference

Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses. While theoretically appealing, they are overwhelmingly inefficient for complex...

OpenReview
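The core idea of the paper can be illustrated with a toy sketch: if the approximator is conditioned on the modeling choice during simulation-based training, the same fitted approximator answers "what if I had chosen a different prior?" instantly, with no refitting. The example below is my own minimal illustration, not the paper's implementation: it uses a conjugate normal model with three candidate prior scales and a per-choice linear regression as a stand-in for the neural posterior network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_sims = 10, 20000
prior_scales = np.array([0.5, 1.0, 2.0])  # three competing prior choices

# Simulate: pick a prior choice, draw theta from that prior, keep the data summary
choice = rng.integers(0, 3, n_sims)
theta = rng.normal(0.0, prior_scales[choice])
ybar = theta + rng.normal(0.0, 1.0, n_sims) / np.sqrt(n_obs)

# Amortized approximator: regress theta on ybar separately per choice.
# (A linear stand-in for a neural posterior network; the learned slope
# approximates the Bayesian shrinkage factor n*tau^2 / (n*tau^2 + 1).)
coefs = []
for c in range(3):
    m = choice == c
    X = np.column_stack([np.ones(m.sum()), ybar[m]])
    coefs.append(np.linalg.lstsq(X, theta[m], rcond=None)[0])

def posterior_mean(ybar_obs, c):
    a, b = coefs[c]
    return a + b * ybar_obs

# Sensitivity analysis: same observed data, all prior choices, no refitting
ybar_obs = 2.0
means = [posterior_mean(ybar_obs, c) for c in range(3)]
for s, mhat in zip(prior_scales, means):
    print(f"prior scale {s}: posterior mean ~ {mhat:.2f}")
```

A tighter prior (smaller scale) shrinks the posterior mean toward zero, which the trained approximator reproduces; the amortized framework in the paper generalizes this conditioning trick to neural networks and to choices over priors, likelihoods, and data perturbations.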

Comparing Bayesian hierarchical models can be challenging, especially when not all models have tractable likelihoods. Martin Schnuerch, Paul Bürkner, Stefan Radev, and I developed a deep learning method to compare hierarchical models via Bayes factors or posterior model probabilities.

You can find the preprint with associated code at https://arxiv.org/abs/2301.11873.
We are now working on making our method available in the #BayesFlow Python library for amortized Bayesian inference.

A Deep Learning Method for Comparing Bayesian Hierarchical Models

Bayesian model comparison (BMC) offers a principled approach for assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.

arXiv.org
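The training principle behind amortized model comparison can be sketched in a few lines: simulate datasets from each candidate model, train a classifier on dataset summaries to predict which model generated them, and read off the classifier's output as posterior model probabilities for any new dataset. The sketch below is my own toy illustration under strong simplifications (not the paper's method, which uses neural networks over full hierarchical datasets): two simple models, hand-picked summary statistics, and logistic regression as the classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_obs = 4000, 50

def simulate(model, size):
    # Model 0: y_i ~ N(0, 1).  Model 1: theta ~ N(0, 1), y_i ~ N(theta, 1)
    if model == 0:
        y = rng.normal(0.0, 1.0, (size, n_obs))
    else:
        theta = rng.normal(0.0, 1.0, (size, 1))
        y = rng.normal(theta, 1.0, (size, n_obs))
    # per-dataset summary statistics: sample mean and sample variance
    return np.column_stack([y.mean(axis=1), y.var(axis=1, ddof=1)])

def features(S):
    # The squared mean is the informative summary here: the group-level
    # theta in model 1 shifts each dataset's mean away from zero
    return np.column_stack([np.ones(len(S)), S[:, 0] ** 2, S[:, 1]])

F = features(np.vstack([simulate(0, n_sims), simulate(1, n_sims)]))
labels = np.repeat([0.0, 1.0], n_sims)

# Logistic regression (a linear stand-in for the neural classifier),
# trained by plain gradient descent on the simulated datasets
w = np.zeros(F.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - labels) / len(F)

def posterior_prob_m1(y_obs):
    # Amortized: any new dataset gets instant posterior model probabilities
    f = features(np.array([[y_obs.mean(), y_obs.var(ddof=1)]]))
    return 1.0 / (1.0 + np.exp(-(f @ w)[0]))

print(posterior_prob_m1(rng.normal(0.0, 1.0, n_obs)))  # data from model 0
```

Because all training happens on simulations before any real data arrive, the same trained classifier can also be validated upfront (e.g., by checking calibration on held-out simulations), which is the "fast performance validation prior to any real-data application" the abstract refers to.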