🧠 New preprint by Fabian A. Mikulasch & @fzenke: Understanding Self-Supervised #Learning via #LatentDistribution Matching proposes a unifying theoretical framework for #SelfSupervisedLearning.

The paper reframes #SSL as latent distribution matching, connecting contrastive, non-contrastive, predictive, and stop-gradient methods through a common probabilistic principle linking alignment, uniformity, and latent entropy.
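The alignment-uniformity decomposition the paper connects to latent entropy can be sketched numerically. Below is a minimal numpy illustration of alignment and uniformity losses in the sense of Wang & Isola (2020); the function names, shapes, and constants are illustrative assumptions, not taken from the preprint.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit hypersphere.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def alignment_loss(x, y, alpha=2):
    # Positive pairs should map to nearby points: mean distance^alpha.
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniformity_loss(x, t=2):
    # Log of the mean pairwise Gaussian potential; lower values mean
    # embeddings spread more uniformly (higher latent entropy).
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * d2[iu])))

# Toy usage: a batch of embeddings and slightly perturbed positives.
rng = np.random.default_rng(0)
x = normalize(rng.standard_normal((8, 16)))
y = normalize(x + 0.05 * rng.standard_normal((8, 16)))
loss = alignment_loss(x, y) + uniformity_loss(x)
```

Contrastive objectives such as InfoNCE implicitly trade these two terms off against each other, which is one entry point to the unifying view.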

📝 https://arxiv.org/abs/2605.03517

#MachineLearning #RepresentationLearning #AI

The NeurIPS Best Paper by Kevin Wang (Princeton) introduces 1000-layer neural networks for self-supervised reinforcement learning. The new technique improves deep representation learning without labeled data, paving the way for more efficient and autonomous AI systems. #NeurIPS #AI #MachineLearning #ReinforcementLearning #ArtificialIntelligence #SelfSupervisedLearning #DeepLearning

https://www.reddit.com/r/singularity/comments/1q593by/neurips_best_paper_1000_layer_networks_for/

Transformer models and neural networks have shifted AI from manually programmed rules to learned, adaptive intelligence. With reinforcement tuning and huge datasets, models can now improve autonomously at scale.

See the complete breakdown: https://www.osiztechnologies.com/generative-ai-development-company

#AI #GenerativeAI #GenerativeAIDevelopment #TechInnovation #MachineLearningAI #SelfSupervisedLearning #NeuralComputing #TransformerTech #AITraining #DeepLearningModels #AICapabilities #DataScienceTools #AIProgress #FutureOfAI

🎥🤖 Watch as #AI visionary Yann LeCun tries to unlock the secrets of the universe using self-supervised learning, while we pretend to understand anything beyond "AI good." 🚀🌐 Spoiler alert: by 2025, we'll still be watching cat videos. 😂📺
https://www.youtube.com/watch?v=yUmDRxV0krg #YannLeCun #SelfSupervisedLearning #Technology #CatVideos #Future2025 #HackerNews #ngated
Yann LeCun | Self-Supervised Learning, JEPA, World Models, and the future of AI

YouTube

Self-supervised learning, JEPA, world models, and the future of AI [video]

https://www.youtube.com/watch?v=yUmDRxV0krg

#HackerNews #SelfSupervisedLearning #JEPA #WorldModels #FutureOfAI #AIResearch

Read Meta's V-JEPA 2 paper: a self-supervised vision model that scales its pretraining data from 2M to 22M videos.

All that effort for just +1% in accuracy. But in ML, every percent counts.

That’s the price of progress when the low-hanging fruit is gone: we’re now chasing the long tail of rare edge cases. One more percent could be what makes a model truly reliable.

https://arxiv.org/html/2506.09985v1

#ML #AI #SelfSupervisedLearning

V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning

Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture

This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice to guide I-JEPA towards producing semantic representations is the masking strategy; specifically, it is crucial to (a) sample target blocks with sufficiently large scale (semantic), and to (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transformers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction.
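The core I-JEPA idea from the abstract, predicting the representations of target blocks from a context block, with no pixel reconstruction, can be sketched in a toy form. This is a hedged illustration only: the grid size, block indices, and the linear "predictor" are placeholder assumptions, where the real method uses Vision Transformer encoders, a transformer predictor, and an EMA target encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" as a 14x14 grid of 64-dim patch embeddings
# (an illustrative stand-in for ViT patch tokens).
patches = rng.standard_normal((14 * 14, 64))

# Sample a spatially distributed context block and a large-scale target
# block, per the paper's masking desiderata (indices here are arbitrary).
context_idx = rng.choice(196, size=120, replace=False)
target_idx = np.setdiff1d(np.arange(196), context_idx)[:40]

# A linear stand-in for the predictor: map pooled context features to a
# predicted target representation. I-JEPA's actual predictor is a
# transformer, and its targets come from an EMA target encoder.
W = rng.standard_normal((64, 64)) * 0.1
pred = patches[context_idx].mean(axis=0) @ W

# Loss is computed in representation space, not pixel space.
loss = np.mean((pred - patches[target_idx].mean(axis=0)) ** 2)
```

The design point the abstract stresses is that both the target scale and the context informativeness are what push the learned representations toward semantics rather than low-level texture.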

Using Artificial Intelligence to Map the Earth’s Forests - Meta Sustainability

An open source, global canopy height dataset and a foundational AI model for a more accountable carbon market.

@joss Happy to share this mini-paper and library I'm co-authoring.
Thanks to Federico, Andrea, Paolo and Manfredo, unfortunately none of them here (yet).
We're working on #deeplearning applications to #neuroscience, and #EEG is very different from the data Big Tech usually works with, so the models and results look quite different there...
But we believe #selfsupervisedlearning is a great idea and we'd like researchers to come play with it 👨🏾‍💻🧠