An MIT study using EEG, linguistic analysis, and post-task interviews found that using ChatGPT weakened participants’ neural connectivity, memory, and sense of ownership over their writing. #cognitivedebt https://arxiv.org/abs/2506.08872
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed the essays using NLP, as well as scoring them with help from human teachers and an AI judge. Across groups, named entities (NERs), n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

arXiv.org

A good podcast that raises red flags about that MIT Media Lab paper

I felt a little sheepish suggesting that the writing in the Media Lab paper about “cognitive debt” and ChatGPT needed some work. Ashley Juavinett, Professor of Neurobiology at UC San Diego, and psychologist Cat Hicks have no such qualms. Their podcast episode, “You Deserve Better Brain Research” (from the show Change, Technically), addresses some serious problems with this “weird document,” from the writing to the methods and research design. I’m putting it up here because I enjoyed and learned from it, and I hope others will, too.

https://open.spotify.com/episode/0XLGvUjtmrdEtHVaYUBo5X

#artificialIntelligence #cognitiveDebt #dialogue #humanEncounter #LLMs #sharedCommitment

Related to my last post, some warnings and advice about using genAI (LLMs). Loving these:
- "You may be trading productivity today for dumbasses in the future."
- "If you're a worker, know how to leverage AI but don't lean on it too much."
https://www.constellationr.com/blog-news/insights/what-genai-cognitive-debt-will-mean-enterprises-and-future-workforce
#genAI #LLM #CognitiveDebt #CriticalThinking
What genAI, cognitive debt will mean for enterprises and future workforce

Generative AI has been seen as a boon for productivity, but it may not be making the workforce any smarter. In fact, enterprises may want to start thinking about cognitive debt from AI usage and a thin bench of critical thinkers. A study (abstract) from a team at MIT looked at 54 participants using OpenAI's ChatGPT for essays. The participants were divided into brain-only users, search engine users and large language model (LLM) users. The study then used electroencephalography (EEG) to assess cognitive load during essay writing and scored the essays.

Constellation Research Inc.
A very clarifying article about stochastic parrots and the problem of fulfilling capital’s expectations at any cost.
https://www.crikey.com.au/2025/06/23/inaturalist-google-partnership-artificial-intelligence-ai-big-tech/
#genAI #StochasticParrots #CognitiveDebt #Environment #Ethics
AI is zombifying our brains. The iNaturalist backlash shows we can fight back

The infestation of generative systems in education, medicine and academia should be cause for much, much more alarm.

Crikey

A term to remember:
Cognitive debt: a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

We shouldn't worry only about technical debt, but also about cognitive debt when over-relying on LLMs.
#technicaldebt
#cognitivedebt

Read the conclusion of the recent Media Lab paper about LLMs. It’s a Non-Friction Nightmare.

No, that’s not a typo in my title.

I’ve just had my first look at the MIT Media Lab paper that is making the rounds: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” 

This paper is disturbing, to say the least. What the authors call “friction” is what we used to call thinking, or it’s at least an essential element of thinking, the effort of it. That effort includes the give and take of inquiry, the difficulty of dialogue, the sweat of education, the work of human language and human encounter. 

The paper’s conclusion only scratches the surface of this problem when it addresses “ethical considerations.”

Consider what is probably the most alarming sentence here, which describes what happens when you reduce friction: people reach the conclusions the algorithm wants them to reach – or, rather, the algorithm reaches conclusions for them; people reach for nothing at all.

It’s surrender. Not just to machines, mind you, not just to the algorithm, but also to the interests (“the priorities”) the algorithm represents.

By surrendering to these priorities, allowing ourselves to be guided by them, we’re also throwing in the towel on shared human experience, coordination and mutual guidance, reliance on each other and shared commitment — which is the only way we can work out our own priorities.

Finally, I can’t post this on my blog (a little center of friction in its own right) without saying something about the writing here.

I know this is a draft paper, but this conclusion sure could use another going-over. It’s not just the typo in the penultimate paragraph (“theis” instead of “their”) that needs correcting; there’s also that awkward bit about “net positive for the humans” in the final paragraph (which sounds like it came straight from an LLM), the resort to cliché (“technological crossroads”), and the industry jargon (“unprecedented opportunities for enhancing learning and information access”). The findings here deserve more clarity.

Last, I’d like to see a little more about the social and political consequences that would seem to follow inevitably from the “cognitive consequences” the authors document. But maybe that’s a matter for another paper.

As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research. 

The LLM undeniably reduced the friction involved in answering participants’ questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or “opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM’s shareholders…. 

Only a few participants in the interviews mentioned that they did not follow the “thinking” [124] aspect of the LLMs and pursued their line of ideation and thinking. 

Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from theis [sic] essays (Session 1, Figure 6, Figure 7). 

Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group. 

We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.


#artificialIntelligence #cognitiveDebt #dialogue #friction #humanEncounter #language #LLMs #resistance #sharedCommitment

The Decoder: "A new MIT study suggests that using AI writing assistants like ChatGPT can lead to what researchers call "#cognitivedebt" - a state where outsourcing mental effort weakens learning and #criticalthinking. The findings raise important questions about how large language models (#LLMs) shape our brains and writing skills, especially in education."
https://the-decoder.com/mit-study-shows-cognitive-debt-through-chatgpt-heres-what-it-means-in-real-world-practice/
ChatGPT might be draining your brain, MIT warns - what ‘cognitive debt’ means for you

A new MIT study suggests that using AI writing assistants like ChatGPT can lead to what researchers call "cognitive debt" - a state where outsourcing mental effort weakens learning and critical thinking. The findings raise important questions about how large language models (LLMs) shape our brains and writing skills, especially in education.

THE DECODER
Fermat's Library | An annotated/explained version of "Electronics, Technology and Computer Science, 1940-1975: A Coevolution."

Fermat's Library is a platform for illuminating academic papers.

Fermat's Library
🤣🤖 "News flash: Using #AI to write essays might make your brain more sluggish than a three-toed sloth in a hammock! 🦥📝 This groundbreaking discovery reveals that relying on #ChatGPT for essays is just another way to accumulate 'cognitive debt'—because who needs brain cells when you've got silicon ones, right? 🤯💡"
https://www.brainonllm.com/ #Writing #CognitiveDebt #SluggishBrain #Innovation #HackerNews #ngated
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task