The Best Paper award winner at NeurIPS, by Kevin Wang (Princeton), introduces a 1000-layer neural network applied to self-supervised reinforcement learning. The new technique improves deep representation learning without labeled data, paving the way for more efficient and autonomous AI systems. #NeurIPS #AI #MachineLearning #ReinforcementLearning #ArtificialIntelligence #SelfSupervisedLearning #DeepLearning

https://www.reddit.com/r/singularity/comments/1q593by/neurips_best_paper_1000_layer_networks_for/

Transformer models and neural networks have shifted AI from hand-programmed rules to systems that learn from data. With reinforcement-learning fine-tuning and huge datasets, machines can now improve autonomously at scale.

See the complete breakdown: https://www.osiztechnologies.com/generative-ai-development-company

#AI #GenerativeAI #GenerativeAIDevelopment #TechInnovation #MachineLearningAI #SelfSupervisedLearning #NeuralComputing #TransformerTech #AITraining #DeepLearningModels #AICapabilities #DataScienceTools #AIProgress #FutureOfAI

🎥🤖 Watch as #AI visionary Yann LeCun tries to unlock the secrets of the universe using self-supervised learning, while we pretend to understand anything beyond "AI good." 🚀🌐 Spoiler alert: by 2025, we'll still be watching cat videos. 😂📺
https://www.youtube.com/watch?v=yUmDRxV0krg #YannLeCun #SelfSupervisedLearning #Technology #CatVideos #Future2025 #HackerNews #ngated
Yann LeCun | Self-Supervised Learning, JEPA, World Models, and the future of AI

YouTube

Self-supervised learning, JEPA, world models, and the future of AI [video]

https://www.youtube.com/watch?v=yUmDRxV0krg

#HackerNews #SelfSupervisedLearning #JEPA #WorldModels #FutureOfAI #AIResearch

Read Meta's V-JEPA 2 paper: a self-supervised video model whose pretraining data scales from 2M to 22M videos.

All that effort for just +1% in accuracy. But in ML, every percent counts.

That’s the price of progress when the low-hanging fruit is gone: we’re now chasing the long tail of rare edge cases. One more percent could be what makes a model truly reliable.

https://arxiv.org/html/2506.09985v1

#ML #AI #SelfSupervisedLearning

V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning

Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture

This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice to guide I-JEPA towards producing semantic representations is the masking strategy; specifically, it is crucial to (a) sample target blocks with sufficiently large scale (semantic), and to (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transformers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction.
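The recipe in the abstract can be sketched in a few lines. Below is a toy mock-up in plain numpy, where random arrays stand in for the outputs of the ViT context and target encoders; the grid size, block sizes, and linear predictor are illustrative stand-ins, not the paper's actual settings or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

GRID, D = 16, 64  # 16x16 patch grid, 64-dim patch representations (toy sizes)

def sample_block(h, w, grid=GRID):
    """Top-left corner of a random h x w block inside the patch grid."""
    top = int(rng.integers(0, grid - h + 1))
    left = int(rng.integers(0, grid - w + 1))
    return top, left

# Stand-ins for the two encoders' per-patch outputs on one image
# (in I-JEPA the target encoder is an EMA copy of the context encoder):
context_feats = rng.normal(size=(GRID, GRID, D))
target_feats = rng.normal(size=(GRID, GRID, D))

# (a) sample a few reasonably large target blocks -> a semantic prediction task
target_blocks = []
for _ in range(4):
    t, l = sample_block(6, 6)
    target_blocks.append(target_feats[t:t + 6, l:l + 6].reshape(-1, D))

# (b) one large, spatially distributed context block
t, l = sample_block(14, 14)
context = context_feats[t:t + 14, l:l + 14].reshape(-1, D)

# Toy predictor: a linear map from the pooled context to a target representation
W_pred = rng.normal(size=(D, D)) / np.sqrt(D)
ctx_summary = context.mean(axis=0)
pred = ctx_summary @ W_pred

# Loss lives in representation space: no pixel reconstruction anywhere
loss = float(np.mean([np.mean((pred - blk) ** 2) for blk in target_blocks]))
```

The key property this mirrors is that the loss compares predicted and target *representations*, which is what makes the approach non-generative.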

arXiv.org
Using Artificial Intelligence to Map the Earth’s Forests - Meta Sustainability

An open source, global canopy height dataset and a foundational AI model for a more accountable carbon market.

@joss Happy to share this mini-paper and library I'm co-authoring.
Thanks to Federico, Andrea, Paolo and Manfredo; unfortunately none of them are here (yet).
We're working on #deeplearning applications to #neuroscience, and #EEG is very different from the data Big Tech usually works with, so the results and models look quite different...
But we believe #selfsupervisedlearning is a great idea, and we'd like researchers to come play with it 👨🏾‍💻🧠

The preprint of our lab's library for #selfsupervisedlearning on #eeg data is out!
Check it at https://arxiv.org/abs/2401.05405

The repo (under review with the preprint for the amazing @joss ) is at https://github.com/MedMaxLab/selfEEG

If you want to try deep learning on EEG, and you have lots of data but supervised learning is difficult or ineffective for your target task, you might want to experiment with self-supervised learning as popularized by #transformers and vision models!
Techniques such as MoCo and SimCLR are already implemented, and EEG augmentations can be applied and further customized. If you don't know how to come up with an architecture, don't worry: a model zoo is included 👨🏾‍💻🧠
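As a rough illustration of what a SimCLR-style objective computes, here is a minimal NT-Xent loss in plain numpy. This is not selfEEG's actual API, and the "EEG embeddings" are synthetic stand-ins; the point is only that two augmented views of the same trial are pulled together while all other pairs are pushed apart:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss: z1[i] and z2[i] are embeddings of two
    augmented views of the same sample i."""
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarities
    sim = z @ z.T / tau                                 # (2N, 2N)
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    # the positive for row i is its other view: i+n (first half) or i-n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(-(sim[np.arange(2 * n), pos] - logsumexp).mean())

rng = np.random.default_rng(0)
# Synthetic "embeddings" of 8 EEG trials; the second view is a small
# perturbation, standing in for an augmented copy of the same trial.
z1 = rng.normal(size=(8, 32))
z2 = z1 + 0.1 * rng.normal(size=(8, 32))
```

Because the views agree, `nt_xent(z1, z2)` comes out well below the loss for unrelated embeddings, which is exactly the training signal a contrastive SSL method exploits.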

SelfEEG: A Python library for Self-Supervised Learning in Electroencephalography

SelfEEG is an open-source Python library developed to assist researchers in conducting Self-Supervised Learning (SSL) experiments on electroencephalography (EEG) data. Its primary objective is to offer a user-friendly but highly customizable environment, enabling users to efficiently design and execute self-supervised learning tasks on EEG data. SelfEEG covers all the stages of a typical SSL pipeline, ranging from data import to model design and training. It includes modules specifically designed to: split data at various granularity levels (e.g., session-, subject-, or dataset-based splits); effectively manage data stored with different configurations (e.g., file extensions, data types) during mini-batch construction; provide a wide range of standard deep learning models, data augmentations and SSL baseline methods applied to EEG data. Most of the functionalities offered by selfEEG can be executed both on GPUs and CPUs, expanding its usability beyond the self-supervised learning area. Additionally, these functionalities can be employed for the analysis of other biomedical signals often coupled with EEGs, such as electromyography or electrocardiography data. These features make selfEEG a versatile deep learning tool for biomedical applications and a useful resource in SSL, one of the currently most active fields of Artificial Intelligence.
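The subject-based split the abstract mentions matters because trials from one person are highly correlated, so letting a subject appear in both train and test inflates scores. A minimal sketch of the idea (illustrative only; `subject_split` is a hypothetical helper, not selfEEG's API):

```python
import numpy as np

def subject_split(trial_subjects, test_frac=0.2, seed=0):
    """Split trial indices so that no subject appears in both the
    train and test sets (a 'subject-based' split)."""
    rng = np.random.default_rng(seed)
    subjects = np.unique(trial_subjects)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_frac))
    test_subjects = set(subjects[:n_test])
    test_idx = [i for i, s in enumerate(trial_subjects) if s in test_subjects]
    train_idx = [i for i, s in enumerate(trial_subjects) if s not in test_subjects]
    return train_idx, test_idx

# Example: 10 trials recorded from 4 subjects
subs = ["s1", "s1", "s2", "s2", "s2", "s3", "s3", "s4", "s4", "s4"]
train_idx, test_idx = subject_split(subs)
```

Session- and dataset-based splits follow the same pattern, just grouping by a coarser or finer label.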
