Researchers have finally discovered that if you leave language models #unsupervised, they turn into unruly teenagers who refuse to clean their rooms or do anything useful. 🤖🧹 Meanwhile, the Simons Foundation is still trying to figure out which member institutions actually support this academic circus. 🎪🎓
https://arxiv.org/abs/2506.10139 #languagemodels #research #academiccircus #AIbehavior #HackerNews #ngated
Unsupervised Elicitation of Language Models

To steer pretrained language models for downstream tasks, today's post-training paradigm relies on humans to specify desired behaviors. However, for models with superhuman capabilities, it is difficult or impossible to get high-quality human supervision. To address this challenge, we introduce a new unsupervised algorithm, Internal Coherence Maximization (ICM), to fine-tune pretrained language models on their own generated labels, without external supervision. On GSM8k-verification, TruthfulQA, and Alpaca reward modeling tasks, our method matches the performance of training on golden supervision and outperforms training on crowdsourced human supervision. On tasks where LMs' capabilities are strongly superhuman, our method can elicit those capabilities significantly better than training on human labels. Finally, we show that our method can improve the training of frontier LMs: we use our method to train an unsupervised reward model and use reinforcement learning to train a Claude 3.5 Haiku-based assistant. Both the reward model and the assistant outperform their human-supervised counterparts.
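To make the idea concrete, here is a minimal toy sketch of what an ICM-style loop could look like, based only on my reading of the abstract (it is not the authors' code): search over label assignments for a set the model itself scores as mutually coherent, then fine-tune on those labels. Everything here is a placeholder assumption: `score_label` stands in for a real LM call returning something like log P(label | example, other labeled examples), the binary task is synthetic, and the paper's logical-consistency term in the objective is omitted.

```python
import math
import random

# Toy ICM-style sketch (assumptions, not the paper's implementation):
# search for self-generated labels that maximize mutual predictability,
# accepting some worse moves early on, simulated-annealing style.

random.seed(0)

# Hypothetical binary task: examples carry a hidden "true" label
# that the scoring stand-in can peek at to define similarity.
examples = [(i, i % 2) for i in range(20)]  # (id, hidden_label)

def score_label(example, label, context):
    """Stand-in for the LM: log-score of `label` for `example`, given
    the other (example, label) pairs; rewards agreement with the
    majority label among similar context examples."""
    _, hidden = example
    votes = [y for (x, y) in context if x[1] == hidden]
    if not votes:
        return 0.0
    agree = sum(1 for v in votes if v == label) / len(votes)
    return math.log(max(agree, 1e-6))

def coherence(labels):
    """Mutual predictability: each label scored given all the others."""
    total = 0.0
    for i, (x, y) in enumerate(zip(examples, labels)):
        context = [(examples[j], labels[j])
                   for j in range(len(examples)) if j != i]
        total += score_label(x, y, context)
    return total

def icm_search(steps=2000, temp0=2.0):
    """Annealed search over label assignments via single-label flips."""
    labels = [random.choice([0, 1]) for _ in examples]
    current = coherence(labels)
    for t in range(steps):
        temp = temp0 / (1 + t)            # cooling schedule
        i = random.randrange(len(examples))
        proposal = labels[:]
        proposal[i] = 1 - proposal[i]     # propose flipping one label
        s = coherence(proposal)
        delta = s - current
        # accept improvements always, worse moves with annealing prob.
        if delta > 0 or random.random() < math.exp(delta / max(temp, 1e-9)):
            labels, current = proposal, s
    return labels

labels = icm_search()
print("final coherence:", round(coherence(labels), 2))
# A real pipeline would now fine-tune the pretrained model on these
# (example, label) pairs, with no human supervision in the loop.
```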
