Unsupervised Elicitation of Language Models

To steer pretrained language models for downstream tasks, today's post-training paradigm relies on humans to specify desired behaviors. However, for models with superhuman capabilities, it is difficult or impossible to get high-quality human supervision. To address this challenge, we introduce a new unsupervised algorithm, Internal Coherence Maximization (ICM), to fine-tune pretrained language models on their own generated labels, without external supervision. On GSM8k-verification, TruthfulQA, and Alpaca reward modeling tasks, our method matches the performance of training on golden supervision and outperforms training on crowdsourced human supervision. On tasks where LMs' capabilities are strongly superhuman, our method can elicit those capabilities significantly better than training on human labels. Finally, we show that our method can improve the training of frontier LMs: we use our method to train an unsupervised reward model and use reinforcement learning to train a Claude 3.5 Haiku-based assistant. Both the reward model and the assistant outperform their human-supervised counterparts.

arXiv.org
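A rough way to picture the ICM idea from the abstract above: search for a labeling of an unlabeled dataset that is maximally self-consistent under the model's own judgments, e.g. by simulated-annealing-style label flips. The sketch below is a toy stand-in, not the paper's implementation: the `affinity` matrix is a hypothetical substitute for the model's pairwise predictability scores (the real method queries the LM itself), and all names are invented for illustration.

```python
import math
import random

def mutual_predictability(labels, affinity):
    # Toy coherence score: how strongly each label "agrees" with the
    # others, weighted by a hypothetical pairwise affinity matrix.
    score = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if i != j:
                agree = 1.0 if labels[i] == labels[j] else -1.0
                score += affinity[i][j] * agree
    return score

def icm_search(n, affinity, steps=2000, t0=2.0, seed=0):
    # Simulated-annealing search over binary labelings: propose single
    # label flips, accept improvements always and regressions with a
    # probability that shrinks as the temperature decays.
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n)]
    cur = mutual_predictability(labels, affinity)
    for step in range(steps):
        temp = t0 / (1 + step)              # annealing schedule
        i = rng.randrange(n)
        labels[i] ^= 1                      # propose flipping one label
        new = mutual_predictability(labels, affinity)
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new                       # accept the flip
        else:
            labels[i] ^= 1                  # revert
    return labels, cur
```

With an affinity matrix that links examples into two clusters (positive within, negative across), the search recovers a labeling that separates the clusters, which is the coherence-maximizing assignment in this toy setting.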

Code is a tool to make art…

Someone once said code is poetry. Another person said code is law. Neither of these is true. Code is a tool of art. Not every person who works with the tool is producing art. Not every person loves the work they are put towards. But it is possible to love to code. To code because you are an artist. To code art. To code because it is a joy and you are joyful.

It is possible to do all of those things and not code well too! It is possible to code for joy badly, to create something that has no particular accomplishments in the wider world, or massive value, or “true craft”. You can write spaghetti code of the worst sort for a janky program. Art need not be good to be art. Notably, no one will ever get to good art without playing to find their way past making bad art. That’s as true of painting and woodcarving as it is of code.

Code is a joy

This is a beautiful idea. The rest of the post is written from a very strong point of view, and I will leave it as an exercise for you to decide how to interpret that.

#ai #code #engineering #models #software

Code is a joy

Code only matters when it is made by people, with people and for people.

Aram ZS | Digital Garden
Carmaker #Tesla has unveiled revised versions of its #ModelS and #ModelX. The changes are modest, but the price is going up. Many fans are disappointed. https://winfuture.de/news,151561.html?utm_source=Mastodon&utm_medium=ManualStatus&utm_campaign=SocialMedia
Tesla unveils new Model S and Model X - little that's new, higher prices

Tesla has revised its luxury models, the Model S and Model X - though with a rather modest number of changes. The updates are mostly limited to minor details, while the price rises sharply.

WinFuture.de
Self-Adapting Language Models

Large language models (LLMs) are powerful but static; they lack mechanisms to adapt their weights in response to new tasks, knowledge, or examples. We introduce Self-Adapting LLMs (SEAL), a framework that enables LLMs to self-adapt by generating their own finetuning data and update directives. Given a new input, the model produces a self-edit: a generation that may restructure the information in different ways, specify optimization hyperparameters, or invoke tools for data augmentation and gradient-based updates. Through supervised finetuning (SFT), these self-edits result in persistent weight updates, enabling lasting adaptation. To train the model to produce effective self-edits, we use a reinforcement learning loop with the downstream performance of the updated model as the reward signal. Unlike prior approaches that rely on separate adaptation modules or auxiliary networks, SEAL directly uses the model's own generation to control its adaptation process. Experiments on knowledge incorporation and few-shot generalization show that SEAL is a promising step toward language models capable of self-directed adaptation. Our website and code are available at https://jyopari.github.io/posts/seal.

arXiv.org
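The loop the SEAL abstract describes (generate a self-edit, apply a persistent update, reward the edit by downstream performance of the updated model) can be caricatured in a few lines. Everything below is a hypothetical stand-in, not the paper's code: a dict plays the model's weights, two fixed strategies play the self-edit generator, and an epsilon-greedy bandit update plays the RL step.

```python
import random

def apply_self_edit(passage, strategy):
    # "Self-edit": restructure the passage into a persistent update.
    if strategy == "verbatim":
        return {passage["q"]: passage["a"]}        # store the raw form
    return {passage["q"].lower(): passage["a"]}    # "paraphrase": normalize

def downstream_reward(weights, passage):
    # Downstream eval queries arrive in normalized (lowercase) form,
    # so only edits that normalized the key score a reward.
    return 1.0 if weights.get(passage["q"].lower()) == passage["a"] else 0.0

def seal_loop(passages, rounds=200, seed=0):
    rng = random.Random(seed)
    prefs = {"verbatim": 0.0, "paraphrase": 0.0}   # generator "policy"
    for _ in range(rounds):
        p = rng.choice(passages)
        # epsilon-greedy choice of self-edit strategy
        if rng.random() < 0.2:
            s = rng.choice(list(prefs))
        else:
            s = max(prefs, key=prefs.get)
        weights = apply_self_edit(p, s)            # SFT-style weight update
        r = downstream_reward(weights, p)          # reward = post-update perf
        prefs[s] += 0.1 * (r - prefs[s])           # reinforce useful edits
    return prefs
```

Run on a couple of toy passages, the loop learns to prefer the edit strategy whose updates actually improve downstream answers, which is the shape of SEAL's RL signal.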
Instead of creating new generations, Tesla presents a minor facelift for models launched in 2012 and 2015 https://www.noticiasautomotivas.com.br/ao-inves-de-criar-novas-geracoes-tesla-apresenta-pequeno-facelift-em-modelos-lancados-em-2012-e-2015/ #ModelS

https://gaypornsky.com/index.php/2025/06/12/ai-twinks-5/

AI Twinks - #Gay #Porn Sky

AI is now in our daily lives. #Twinks #lovers have jumped at the occasion to fulfill their #fantasies with nonexistent #hot #beautiful #models. A delight for all the senses, and what a pity that they will never be real!

From my editorial "Catch Us If U Can".
Let me know if you want to see more from it.

#modelphotography #vintagecar #editorial #models #roadtrip #v8 #dirtroad #summer #outdoors #gasstation

https://gaypornsky.com/index.php/2025/06/09/casey-tanner-gifs/

Casey Tanner Gifs - #Gay #Porn Sky

Casey is one of the most fun and outgoing #models at #HelixStudios. The mischievous 21-year-old from Ohio will brighten your day with his quick wit and big #beautiful #smile. You can find him out on the town go-go #dancing, making new #friends and checking out the local hot spots.