Cordial congratulations to Michel Talagrand on winning this year's Abel Prize, well deserved! His work on bounding #stochastic_process is also of great value for #cosmology! Imagine if we lacked knowledge of bounds on #GaussianProcesses!
Via Alexander Terenin: stochastic gradient descent can be used as an efficient approximate sampling algorithm for Gaussian processes. Looks super cool: https://arxiv.org/abs/2306.11589
Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems. In general, this has a cubic cost in dataset size and is sensitive to conditioning. We explore stochastic gradient algorithms as a computationally efficient method of approximately solving these linear systems: we develop low-variance optimization objectives for sampling from the posterior and extend these to inducing points. Counterintuitively, stochastic gradient descent often produces accurate predictions, even in cases where it does not converge quickly to the optimum. We explain this through a spectral characterization of the implicit bias from non-convergence. We show that stochastic gradient descent produces predictive distributions close to the true posterior both in regions with sufficient data coverage, and in regions sufficiently far away from the data. Experimentally, stochastic gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks. Its uncertainty estimates match the performance of significantly more expensive baselines on a large-scale Bayesian optimization task.
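This is not the paper's exact algorithm, but a toy sketch of the core idea the abstract describes: instead of the cubic-cost direct solve of the linear system (K + σ²I)v = y, run gradient descent on the equivalent quadratic objective ½vᵀ(K + σ²I)v − vᵀy, whose unique minimizer is the usual representer-weight vector. All data, kernel parameters, and step counts below are illustrative.

```python
# Hypothetical illustration: GP posterior mean via gradient descent on
# the quadratic objective, rather than a Cholesky/direct solve.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

noise = 0.1**2
K = rbf(X, X) + noise * np.eye(len(X))   # K + sigma^2 I

# Gradient descent on f(v) = 0.5 v'Kv - v'y; grad f = Kv - y.
# Its minimizer is v* = K^{-1} y, the exact representer weights.
v = np.zeros(len(X))
lr = 1.0 / np.linalg.norm(K, 2)          # step size 1/L (spectral norm)
for _ in range(2000):
    v -= lr * (K @ v - y)

Xs = np.linspace(-3, 3, 100)[:, None]
mean = rbf(Xs, X) @ v                    # approximate posterior mean

# Compare against the exact solve; the gap shrinks with more iterations.
exact = rbf(Xs, X) @ np.linalg.solve(K, y)
print(np.max(np.abs(mean - exact)))
```

The paper's point is that even when such iterations have not fully converged (e.g. in ill-conditioned directions), the resulting predictions can remain accurate, which this sketch lets you probe by varying the iteration count.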
Funded (£20,410 4-year tax-free stipend) PhD positions available with me at Imperial College London
#PathIntegrals, #DFT, #CompChem, #MachineLearning, #GaussianProcesses, #JuliaLang, #MonteCarlo, excitement, adventure and really wild things!
Next interview round closes Friday 14th July 2023.
More details and exemplar projects in this Google doc: https://docs.google.com/document/d/1wiG-T8uqgq_-h-Btu1tecmdrZbK-zS006eB6BKGqFgI/edit?usp=sharing
2023 Frost group PhD adverts Overview / TL;DR Introduction Research Group Motivation / Philosophy Eligibility Funding Notes How to apply Project 1: Simulating charge transfer and recombination at an upconverting organic semiconductor interface Project 2: Modelling mixed electron ion conduction ...
@tylerjburch Yes, I hear you 😓
You've likely already fixed your installation and I'm not sure whether you're using #jupyter, but I found this guide really helpful:
https://pkseeg.com/post/jupyter-venv/
As a rule, I warn people never to mess with a machine's base installation of #python; use (virtual) environments instead.
Good luck with the #GaussianProcesses, I'm going through the #pymc tutorials for it right now 😎
I have, once again, made the strange choice to write a blog post. This one is about Gaussian processes and, in particular, about what the Markov property looks like when you don't have a linear notion of time to help you define a past and a present.
Like all my GP posts, this one is wildly technical but aims to be somewhat useful. The information here is hard to find unless you want to read a 400-page book translated from Russian.
https://dansblog.netlify.app/posts/2023-01-21-markov/markov.html
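For readers skimming past the link: one standard way to state the idea (the post itself may use a different formulation) is that graph separation takes over the role that "past / present / future" plays on a timeline. A hedged sketch:

```latex
% On a timeline, the Markov property says the future is independent of
% the past given the present:
\[
  x_{t+1} \perp\!\!\!\perp (x_1,\dots,x_{t-1}) \mid x_t .
\]
% Without a linear time ordering, one replaces this with graph
% separation: if a set of sites $S$ separates $A$ from $B$, then
\[
  x_A \perp\!\!\!\perp x_B \mid x_S .
\]
% For a Gaussian field this is equivalent (in the Gaussian Markov
% random field setting) to sparsity of the precision matrix $Q$:
% $Q_{ij} = 0$ whenever $i \neq j$ are not neighbours in the graph.
```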
Watch our own @sethaxen summarize our recent #NeurIPS2022 workshop paper on modeling European #paleoclimate using #GaussianProcesses!
👋 This is my first time attending @NeuripsConf (virtually to reduce carbon emissions).
On Friday I'll join the workshop "Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems," where we have a paper, poster, and lightning talk on GPs for modeling #paleoclimate.
If you're attending and want to chat about #GaussianProcesses, probabilistic programming (#ProbProg), or @ArviZ, ping me!