Gunnar Rätsch

415 Followers
62 Following
13 Posts
I love large biomedical data.
Twitter @[email protected]

RT @[email protected]

We are looking forward to your submissions and hope to encourage a lively discussion on time series representation learning in the context of medical applications!

#TSRL4H at ICLR'23 (@[email protected]) https://twitter.com/tsrl4h_workshop/status/1605544877176983552

🐦🔗: https://twitter.com/dr_amarx/status/1605560832288440320

Time Series Representation Learning 4 Health @ICLR on Twitter

“Interested in representation learning from time series data + medical applications? We are happy to announce the 1st hybrid workshop on time series representation learning for health #TSRL4H at ICLR'23 (@iclr_conf)! Website: https://t.co/pSMQBGZx52”

RT @[email protected]

Would you like to help build the first ELLIS Institute? The call for Endowed Fellowships is live: https://ellis.eu/PI2023 #ELLISforEurope #Tuebingen #AI @[email protected]

🐦🔗: https://twitter.com/bschoelkopf/status/1604166461726916609

6 Principal Investigators (m/f/d) as Hector Endowed ELLIS Fellows in Tübingen

The ELLIS mission is to create a diverse European network that promotes research excellence and advances breakthroughs in AI, as well as a pan-European PhD program to educate the next generation of AI researchers. ELLIS also aims to boost economic growth in Europe by leveraging AI technologies.

RT @[email protected]

Today, visit our poster on:
"Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations"
#NeurIPS2022
⏲️ Poster session 3. Wed, 11am-1pm
📍 Hall J #428
https://arxiv.org/abs/2202.10638

w/ @[email protected], @[email protected], @[email protected], @[email protected]

https://twitter.com/tychovdo/status/1580955370813788163

🐦🔗: https://twitter.com/tychovdo/status/1597978035206393856

Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

Data augmentation is commonly applied to improve performance of deep learning by enforcing the knowledge that certain transformations on the input preserve the output. Currently, the data augmentation parameters are chosen by human effort and costly cross-validation, which makes it cumbersome to apply to new datasets. We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network. Our approach relies on phrasing data augmentation as an invariance in the prior distribution on the functions of a neural network, which allows us to learn it using Bayesian model selection. This has been shown to work in Gaussian processes, but not yet for deep neural networks. We propose a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective, which can be optimised without human supervision or validation data. We show that our method can successfully recover invariances present in the data, and that this improves generalisation and data efficiency on image datasets.

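To make the mechanics concrete: below is a minimal, hedged sketch (in PyTorch, not the authors' code) of what "selecting the augmentation without validation data" can look like. A rotation range `max_angle` is made differentiable via reparameterised sampling and tuned jointly with the network weights against a marginal-likelihood-style objective; a crude diagonal curvature penalty stands in for the paper's Kronecker-factored Laplace approximation, and the data, architecture, and penalty are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: 2-D points whose class depends only on the radius, so the task is
# rotation-invariant and a rotation augmentation is plausible.
n = 512
points = torch.randn(n, 2)
labels = (points.norm(dim=1) > 1.0).long()

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

# Learnable augmentation hyperparameter: the maximum rotation angle (radians).
max_angle = torch.tensor(0.1, requires_grad=True)

opt_w = torch.optim.Adam(model.parameters(), lr=1e-2)
opt_a = torch.optim.Adam([max_angle], lr=1e-2)


def rotate(x, angles):
    # Rotate each 2-D point by its own angle (differentiable in the angles).
    c, s = torch.cos(angles), torch.sin(angles)
    rot = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)
    return torch.einsum("nij,nj->ni", rot, x)


def surrogate_neg_log_marglik():
    # Data fit on augmented inputs plus a diagonal curvature ("Occam") penalty.
    # Angles are sampled by reparameterisation, so gradients reach max_angle.
    angles = max_angle * (2.0 * torch.rand(n) - 1.0)
    logits = model(rotate(points, angles))
    nll = F.cross_entropy(logits, labels, reduction="sum")
    grads = torch.autograd.grad(nll, list(model.parameters()), create_graph=True)
    occam = 0.5 * sum(torch.log1p(g.pow(2)).sum() for g in grads)
    return nll + occam


for step in range(300):
    loss = surrogate_neg_log_marglik()
    opt_w.zero_grad()
    opt_a.zero_grad()
    loss.backward()
    opt_w.step()
    opt_a.step()
    with torch.no_grad():
        max_angle.clamp_(min=0.0)  # keep the rotation range non-negative

print(f"learned rotation range: +/- {max_angle.item():.2f} rad")
```

The point of the sketch is only the plumbing: the augmentation parameter sits inside the training objective itself, so no cross-validation loop or held-out data is needed.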

If the NIH prohibited publishing in journals that charge more than X for open access, that would be effective. I mean, if the journal charges anybody high fees.

RT @[email protected]

@[email protected] I seriously think something needs to be done about these open access fees. They are crazy high and I don't think I have seen a numerical justification for these numbers, especially when nobody really cares about the paper printed version of a journal article any more.

🐦🔗: https://twitter.com/anshulkundaje/status/1591210952686718976

RT @[email protected]

I was curious about how one could possibly use machine learning to improve metagenomic assembly, but Olga Mineeva is describing a very clever idea about how to do this at #biodata22 right now @[email protected]

🐦🔗: https://twitter.com/StevenSalzberg1/status/1591103335830278144

RT @[email protected]

Back to sc perturbations with @[email protected] on using neural optimal transport to study cell responses, given the challenge that we never can see a cell with and without perturbation. Work with @[email protected] at ETH Zurich #biodata22

🐦🔗: https://twitter.com/mikelove/status/1590737804832931840
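
As background for the unpaired-cells point: optimal transport couples the control and perturbed populations as whole distributions, so no cell has to be observed both before and after the perturbation. Below is a small, hedged sketch of that basic idea using plain discrete Sinkhorn on synthetic profiles (the talk concerns neural OT and real single-cell data; all names, sizes, and numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, unpaired "expression profiles": the perturbation adds a shift,
# but we never observe the same cell in both conditions.
control = rng.normal(size=(200, 10))
perturbed = rng.normal(size=(300, 10)) + 1.0


def sinkhorn(a, b, cost, eps=0.05, n_iter=500):
    """Entropic OT coupling between histograms a and b for a given cost matrix."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]


# Squared Euclidean cost between every control/perturbed pair, rescaled so the
# entropic regularisation eps is on a comparable scale.
cost = ((control[:, None, :] - perturbed[None, :, :]) ** 2).sum(-1)
cost = cost / cost.mean()

a = np.full(len(control), 1.0 / len(control))
b = np.full(len(perturbed), 1.0 / len(perturbed))
plan = sinkhorn(a, b, cost)

# Barycentric projection: the plan's estimate of each control cell's
# post-perturbation profile, despite the lack of paired observations.
matched = (plan @ perturbed) / plan.sum(axis=1, keepdims=True)
print("mean estimated perturbation shift:", float((matched - control).mean()))
```

A neural OT approach replaces this discrete plan with a learned transport map, so new, unseen cells can also be mapped to their predicted perturbed state.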

Nice to be back at CSHL for #biodata22. Looking forward to the program…!

I'm looking for a software engineer to help our biomedical informatics group (http://bmi.inf.ethz.ch) develop better scientific software. We have exciting projects to get involved in. Please check out the ad and ping me if you have questions: https://lnkd.in/eNZWXumz.