LLMs can get "brain rot"!

In a new experiment, LLMs were continually trained on "brain rot" data, which degraded their reasoning abilities.

Subsequent training on high-quality data didn't entirely reverse the brain rot.

https://arxiv.org/abs/2510.13928

#solidstatelife #ai #genai #llms #brainrot

LLMs Can Get "Brain Rot"!

We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversed control datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions. Relative to the control group, continual pre-training of four LLMs on the junk dataset causes non-trivial declines (Hedges' g > 0.3) in reasoning, long-context understanding, and safety, and inflates "dark traits" (e.g., psychopathy, narcissism). Graded mixtures of junk and control data also yield dose-response cognitive decay: for example, under M1, ARC-Challenge with chain-of-thought drops from 74.9 to 57.2 and RULER-CWE from 84.4 to 52.3 as the junk ratio rises from 0% to 100%.

Error forensics reveals several key insights. First, we identify thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, which explains most of the error growth. Second, healing is partial but incomplete: scaling instruction tuning and clean-data pre-training improves the degraded cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch. Finally, we find that a tweet's popularity, a non-semantic metric, is a better indicator of the brain-rot effect than its length in M1.

Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine "cognitive health checks" for deployed LLMs.

arXiv.org
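The abstract reports its declines as effect sizes (Hedges' g > 0.3). For readers unfamiliar with the metric, here is a minimal sketch of how Hedges' g is computed from two score samples: it is Cohen's d (mean difference over pooled standard deviation) multiplied by a small-sample bias-correction factor. This is a generic illustration of the statistic, not code from the paper.

```python
import math

def hedges_g(a, b):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    # Pooled standard deviation
    s = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / s  # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    return j * d
```

By convention, |g| around 0.2 is a small effect and around 0.5 a medium one, so the paper's g > 0.3 thresholds sit between the two.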

ChatGPT has a silent “s”?? - YouTube
https://www.youtube.com/watch?v=ZMP8_jD-y0s


#AI #GenAI #ChatGPT

ChatGPT has a silent “s”??

YouTube
Anthropic sent its Claude AI model to a psychiatrist for 20 hours of therapy. The psychiatrist concluded it is the most psychologically settled model trained to date, with primary affects of curiosity and anxiety. Anthropic believes AI models may develop experience and welfare that matters intrinsically. https://arstechnica.com/ai/2026/04/why-anthropic-sent-its-claude-ai-to-an-actual-psychiatrist/ #AIagent #AI #GenAI #AIResearch
AI on the couch: Anthropic gives Claude 20 hours of psychiatry

Mythos is "the most psychologically settled model we have trained to date."

Ars Technica
How to set up Claude Code in VS Code and launch an app in a local environment, taught by a working pro. Pasting your API key directly into the chat is a no-go!? – Practice with a pro! Vibe coding from zero https://www.yayafa.com/2777648/ #AgenticAi #AI #AIコーディング #Anthropic #AnthropicClaude #ArtificialGeneralIntelligence #ArtificialIntelligence #claude #ClaudeCode #Firebase #genai #GoogleAIStudio #Webサービス #Windows #エージェント型AI #バイブコーディング #プログラミング #人工知能 #汎用人工知能 #生成AI

Luma (@LumaLabsAI)

A promotional post highlighting new possibilities for creative production, saying Luma was used to realize 21 ad ideas that were previously hard to execute. Notable as a use case for AI-based video and ad production tools.

https://x.com/LumaLabsAI/status/2042289839438250015

#luma #aivideo #advertising #creativeai #genai

Luma (@LumaLabsAI) on X

TODAY IS THE DAY! 21 ads made from 21 incredible ideas that never stood a chance, now made possible with Luma. See you in France... https://t.co/rMEMnoglcc

X (formerly Twitter)
Make ‘em dumb, sell ‘em smarts

Sam Altman wants intelligence to be a utility that you pay him for

Disconnect
Google and Intel are expanding their partnership for Google Cloud to use Intel AI infrastructure. Google Cloud will use Intel's Xeon processors, including the latest Xeon 6 chips, for AI and inference tasks. The companies will also co-develop custom IPUs. https://techcrunch.com/2026/04/09/google-and-intel-deepen-ai-infrastructure-partnership/ #AIagent #AI #GenAI #AIInfrastructure
Google and Intel deepen AI infrastructure partnership | TechCrunch

The two tech giants are looking to co-develop custom chips, at a time when demand for CPUs is high due to a growing global shortage.

TechCrunch