🎉🤖 "Behold, the future of AI: #Discrete #Distribution Networks! 🤯 Now accepted at ICLR 2025! 🎓✨ Watch in awe as GIF animations show off density estimations in shapes like spirals and QR codes, because that’s exactly what the world needs more of. 🎆🔍 Meanwhile, the masses eagerly await the life-altering impacts of blurcircles. 🙄🔄"
https://discrete-distribution-networks.github.io/ #AI #Networks #ICLR2025 #DensityEstimations #FutureOfAI #HackerNews #ngated
DDN: Discrete Distribution Networks

Novel Generative Model with Simple Principles and Unique Properties

The Department of Statistics at the University of Warwick continues to make waves in AI research with our recent contributions at #ICLR2025! Professor Giovanni Montana led our team's presentations, with two papers on offline Reinforcement Learning.
1/ Controlling LLMs with steering vectors is unreliable, but why? Our paper, "Understanding (Un)Reliability of Steering Vectors in Language Models," at the #ICLR2025 Workshop on Foundation Models in the Wild investigates this! What did we find?
The Data Science and Machine Learning Unit at OIST is working to improve the algorithms of the information age and how we interact with AI 💻⚙. They had 5 papers – 4 written by interns – accepted at the recent #ICLR2025. Learn more about the unit 👇 www.oist.jp/news-center/...

Learning at peak efficiency
#ICLR2025 recap 🇸🇬
Great time at ICLR2025 for our team members Stéphane RIVAUD and Song Duong, who presented their work during the poster sessions!
➡️ SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation https://iclr.cc/virtual/2025/poster/28981
➡️ PETRA: Parallel End-to-end Training with Reversible Architectures
https://iclr.cc/virtual/2025/poster/31242
➡️ Learning a Neural Solver for Parametric PDEs to Enhance Physics-Informed Methods https://iclr.cc/virtual/2025/poster/28615

An Illustrated Guide to Automatic Sparse Differentiation | ICLR Blogposts 2025

In numerous applications of machine learning, Hessians and Jacobians exhibit sparsity, a property that can be leveraged to vastly accelerate their computation. While the usage of automatic differentiation in machine learning is ubiquitous, automatic sparse differentiation (ASD) remains largely unknown. This post introduces ASD, explaining its key components and their roles in the computation of both sparse Jacobians and Hessians. We conclude with a practical demonstration showcasing the performance benefits of ASD.
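
To make the key idea concrete, here is a minimal sketch in Python/JAX of the column-coloring trick at the heart of ASD (this is illustrative code, not code from the blog post): for a function whose Jacobian is circulant tridiagonal, columns that share no row can be grouped into one "color" and probed with a single JVP, so the full Jacobian is recovered from 3 forward passes instead of one per column. The names `f` and `sparse_jacobian`, and the hard-coded 3-coloring, are assumptions chosen to match this sparsity pattern.

```python
# Minimal sketch of column coloring for automatic sparse differentiation,
# assuming a circulant tridiagonal Jacobian and n divisible by 3.
import jax
import jax.numpy as jnp

def f(x):
    # y[i] depends only on x[i-1], x[i], x[i+1] (indices wrap),
    # so the Jacobian is circulant tridiagonal.
    return x**2 + jnp.roll(x, 1) - 0.5 * jnp.roll(x, -1)

def sparse_jacobian(x, n_colors=3):
    n = x.shape[0]
    colors = jnp.arange(n) % n_colors           # structurally orthogonal column groups
    J = jnp.zeros((n, n))
    for c in range(n_colors):
        seed = (colors == c).astype(x.dtype)    # sum of the unit vectors in group c
        _, compressed = jax.jvp(f, (x,), (seed,))  # one JVP per color, not per column
        # Decompression: within a color group, each row is touched by at most
        # one column, so compressed[i] is exactly that Jacobian entry.
        for j in jnp.where(colors == c)[0]:
            rows = jnp.array([(j - 1) % n, j, (j + 1) % n])  # nonzeros of column j
            J = J.at[rows, j].set(compressed[rows])
    return J

x = jnp.arange(9, dtype=jnp.float32) + 1.0
assert jnp.allclose(sparse_jacobian(x), jax.jacfwd(f)(x))  # 3 JVPs instead of 9
```

In this 9-dimensional example the coloring cuts the cost from 9 JVPs to 3; full ASD tooling additionally detects the sparsity pattern and computes the coloring automatically, which is where the performance benefits showcased in the post come from.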

Friends met at #ICLR2025 and then #AABI2025 😊
There is a Learning Theory Day on Tuesday, April 29 (the day after #ICLR2025) at NTU in Singapore. Featuring a great lineup of speakers! sites.google.com/view/learnin...
At #ICLR2025 main conference! (with updated information)
Several papers from our institute (AC group) have been accepted at top conferences! 🎉 Our colleagues are attending—feel free to reach out if you’d like to connect! #ICLR2025 #WWW2025 #NAACL2025 #AI #NLP #ML