BRAINROT!

YouTube

I am losing faith in the societal utility of participating in social media. Whether decentralized or centralized. Whether profit-motivated or not-for-profit.

And I do not like losing this faith.

#fzThinkingOutLoud #BrainRot #DeadInternetTheory #RageBaitEconomy #ThisIsWhyWeCantHaveNiceThings

Unicef warns of the risks of artificial-intelligence use by children and adolescents

USP researchers highlight AI's possible harmful effects, pointing to issues such as mental overload and informational education

Jornal da USP
LLMs Can Get "Brain Rot": A Pilot Study on Twitter/X

We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To isolate junk effects, we designed a novel controlled experiment on real Twitter/X corpora, constructing junk and reverse-controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions. Compared to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines (Hedges' g > 0.3) in reasoning, long-context understanding, and safety, and inflates "dark traits" (e.g., psychopathy, narcissism). Gradual mixtures of junk and control datasets also yield dose-response cognitive decay: for example, under M1, ARC-Challenge with Chain-of-Thought drops from 72.1 to 57.2 and RULER-CWE from 83.7 to 52.3 as the junk ratio rises from 0% to 100%. Error forensics reveals several key insights. First, we identify thought-skipping as the primary lesion in reasoning: models increasingly truncate or skip reasoning chains. Second, partial but incomplete healing is observed: scaling instruction tuning and clean continual pre-training improve the declined cognition, yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch. Finally, we discover that a tweet's popularity, a non-semantic metric, is a better indicator of the Brain Rot effect than its length under M1. Together, the results provide significant, multi-perspective evidence that social effects of data could be a causal driver of LLM capability decay in continual pre-training, thereby motivating routine "cognitive health checks" for deployed and evolving LLMs.
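The abstract's "Hedges' g > 0.3" threshold refers to a standardized effect size: Cohen's d (mean difference over pooled standard deviation) multiplied by a small-sample correction factor. A minimal sketch of that computation, using hypothetical benchmark scores (the numbers below are illustrative, not from the paper):

```python
import math

def hedges_g(a, b):
    """Hedges' g: Cohen's d with a small-sample bias correction.
    Values above ~0.3 are treated in the abstract as non-trivial effects."""
    n1, n2 = len(a), len(b)
    m1 = sum(a) / n1
    m2 = sum(b) / n2
    # sample variances (ddof = 1)
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # small-sample correction factor J
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# hypothetical control vs. junk-trained benchmark scores
control = [72.5, 71.8, 73.0, 72.0]
junk = [57.1, 58.0, 56.5, 57.4]
print(hedges_g(control, junk))  # large positive g => clear decline
```

With a mean gap of ~15 points and sub-point spread, g here lands far above the 0.3 cutoff, which is the sense in which the paper's reported declines are "non-trivial".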

arXiv.org

"This is why the disinformation frame keeps failing. #Disinformation assumes a sender, a message, a deception, a corrigible subject. Fact-checking assumes wrong information meeting a willing reader. None of that fits #brainrot."

https://tiktoktiktoktiktok.substack.com/p/brainrot-as-anti-content

Brainrot as Anti-Content

“It’s the only way to be free online”

Understanding TikTok