Hello, I am a statistician at Chalmers University of Technology and University of Gothenburg!
I work on scientific #machinelearning problems taking a Bayesian #statistics and causal inference perspective and contribute to #JuliaLang.
| Website | https://sethaxen.com |
| GitHub | https://github.com/sethaxen |
| Twitter | https://twitter.com/sethaxen |
@johnryan Yeah I do #probprog in #JuliaLang, and it's great that we can use arbitrary Julia code within our models. This is because most of the language is differentiable with #autodiff and code is composable, which is not the case for most PPLs.
For #deeplearning research, Julia could come in handy for writing and transforming custom kernels without fussing with CUDA, as some posts in that thread note, but I have no experience with this.
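To illustrate the point about arbitrary Julia code in models, here is a minimal sketch assuming Turing.jl (the model name, data, and `softplus` helper are illustrative, not from the original post): a plain Julia function and an ordinary `for` loop work directly inside the model, and the sampler differentiates through them.

```julia
using Turing

# Hypothetical helper: any plain Julia function can be called inside a model.
softplus(x) = log1p(exp(x))

@model function regression(x, y)
    α ~ Normal(0, 1)
    β ~ Normal(0, 1)
    σraw ~ Normal(0, 1)
    σ = softplus(σraw)        # arbitrary Julia code in the model
    for i in eachindex(y)     # ordinary Julia control flow
        y[i] ~ Normal(α + β * x[i], σ)
    end
end

# Illustrative synthetic data
x = randn(20)
y = 2 .* x .+ 1 .+ 0.1 .* randn(20)

chain = sample(regression(x, y), NUTS(), 200; progress=false)
```

Because the model body is just Julia, the same composability applies to user-defined types, closures, and third-party packages, which is the flexibility the post is pointing at.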
@johnryan This blog post reflects one (important) perspective. The community had a large discussion about this earlier this year: https://discourse.julialang.org/t/state-of-machine-learning-in-julia/74385
My takeaway is "it depends." If you're doing classical ML, Julia is usually fine. For cutting-edge DL, it's generally not as mature as Python yet. For research/nonstandard ML, Julia shines.
I’ll offer a perspective from someone who (as a conscious choice) primarily uses Python over Julia. I work with, and maintain libraries for, all of PyTorch, JAX, and Julia. For context, my answers will draw some parallels between JAX (with Equinox for neural networks) and Julia (with Flux for neural networks), as these actually feel remarkably similar. JAX and Julia are both based around jit-compilers; both ubiquitously perform program transforms via homoiconicity. Equinox and Flux both build mod...
In ArviZ.jl we store inference results (especially #MCMC draws) as InferenceData. It's built on DimensionalData, so we have multidimensional real arrays with named dimensions. Each array element is a marginal of a random draw, which is a useful format for plotting, #statistics, and diagnostics, but sometimes it's useful to get back to a structure more like what a PPL might emit.
Surprisingly, we can get pretty close with just 8 lines of code:
https://github.com/arviz-devs/InferenceObjects.jl/issues/27
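As a rough sketch of the storage format described above (assuming DimensionalData.jl directly; the dimension names and the `theta` parameter here are illustrative, not the exact ArviZ.jl/InferenceObjects.jl API), posterior draws live in a real array with named `chain` and `draw` dimensions, so each element is a marginal of one random draw:

```julia
using DimensionalData

# Hypothetical draws of a 3-dimensional parameter `theta`,
# stored as (chain, draw, theta_dim) with named dimensions.
draws = DimArray(
    randn(4, 100, 3),
    (Dim{:chain}(1:4), Dim{:draw}(1:100), Dim{:theta_dim}(1:3)),
)

# Named dimensions let us index by meaning rather than position:
first_chain = draws[chain=At(1)]   # all draws from chain 1
```

Going back to a PPL-like structure then amounts to regrouping these per-element marginals into one object per draw, which is what the linked issue sketches.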
Just so y'all know, the lead Mastodon guy does this for a hilariously small amount of money. OSS is something else
📢PARENTS📢
Make SURE you check your children's candy carefully this Halloween🍬
I just found HETEROSKEDASTICITY in this snickers bar😲😱