Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant https://www.media.mit.edu/publications/your-brain-on-chatgpt/

New brain rot successfully unlocked by using ChatGPT and other similar AI tools. Over many millennia, evolution helped you develop critical thinking skills, and you are giving all of that away chasing some fools' dream that you can do better things in life without money or work. At the end of the day, you will have neither.

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and …

MIT Media Lab

@nixCraft I wonder how this will play out in the future. In recent years, every aspect of modern life seems to be chipping away at our brains. This is just the latest example.

Will we find a path where these tools are useful and do not harm us? Will people become aware of brain rot and adopt healthier habits?

If this does harm to adult brains, I can't imagine what it does to children's developing brains 🫤.

@nixCraft The fundamental problem here is that humanity can't decide without trying it first, regardless of whether that's right or wrong.

@nixCraft I would like to see such a study on programming. And then see whether we can actually come up with a way to use LLMs that is useful for students.

I still think there is a way we can use these tools that also helps students, specifically for programming. It's not easy, and I haven't found anything that seems to work yet, but if somebody reads this and wants to hire me to do a study, I'm all yours :)

@kawazoe @nixCraft METR seems pretty conclusive on the point that literally everyone with positive or optimistic sentiment toward the use of LLMs is categorically wrong. Across all the fields they studied, everyone who said AI made them more productive, that is 100% of respondents, was actually slower when evaluated by objective metrics. I doubt what you are looking for exists.

@kawazoe @nixCraft The METR study says this at the end of their summary:

> Do these results say that AI isn't useful in software engineering?
> No—it seems plausible or likely that AI tools are useful in many other contexts different from our setting, for example, for less experienced developers, or for developers working in an unfamiliar codebase. See Appendix B for potential misreadings/overgeneralizations we do not endorse on the basis of our results.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

@kawazoe @nixCraft For example, being able to fix Ruby on Rails code with Claude Code, letting it churn while I do other things, is a plus. But I explicitly stated in my PR that this is LLM slop. Working LLM slop...

But there is definitely a "complexity ditch" to learn about: understanding when an LLM just blurts out nonsense very confidently.

@kawazoe @nixCraft And the one dev with more than 50 hours of Cursor experience was faster :) Which is of course not a statistically significant outcome...

@ligasser @nixCraft The survey also takes that into consideration in its results. Something to keep in mind is that every one of those people was "super senior" on their project, and they still struggled to get the LLM to produce acceptable code. It's not showing that more Cursor experience means it can turn out useful. It's showing that the more expertise you have on a topic, the less useful an LLM is to you overall.

This is probably because it confidently makes critical mistakes all the time. Ironically, the less expertise you have, the less likely you are to catch those mistakes, and you end up with bad code. In other words, the last people who should end up with an LLM in their hands are junior devs and students. On top of that, the more work you offload to an LLM, the less experience you gain on that topic, compounding the negative effect on your ability to grow in your job, or as a student.

@kawazoe @nixCraft I'll have to dig out that slide, from a security firm, which showed that when starting to use LLMs:
- there are more PRs
- more lines changed

but this effect fades after 2-3 months. What stays:

- more errors per lines of code!

Caveat: it was a security firm selling their product to detect these errors :)

@nixCraft Research shows that very young children fail to learn language properly if they spend too much time on electronic devices. They need face-to-face dialogue.

@[email protected] @[email protected] I can get behind this. Children should not be using AI chatbots, but the conversation becomes more nuanced with teenagers. As they prepare for a workforce where AI integration is inevitable, teens must develop the skills to be critical consumers while navigating potential harms.

While supervised classrooms provide a safe foundation, the reality of this developmental stage is that most mastery comes through unsupervised practice and play. Finding the right balance between necessary exploration and essential safeguarding will be a significant challenge for educators and parents alike.

@elaine @nixCraft Very well reasoned reply - thanks!

You are right - when crucial face-to-face development is pretty much over, other independent paths must kick in.

Play, however, is often compromised by phone intrusion. And it is often prescribed too: look at Meccano sets from the 50s and how many models they could build. Construction sets now often make just one thing!

@[email protected] @[email protected] Since the 1980s, many parents have grown so fearful about kids' safety that childhood got chauffeured and supervised: car rides from A→B, tightly monitored friend visits, and less unstructured play. "Hanging out" becomes a long meal + a movie (or board games with adults), if it happens at all.

Too much phone/iPad time can be harmful. But forced isolation may be worse: screens become the only way kids can actually socialize.

@elaine @nixCraft Interesting - thanks.

In the UK we describe this as helicopter parenting. Too much control and prescription in the guise of protection. This is also very bad.

My happy childhood in the 50s and 60s was phone-free. Unrestricted, you learnt agency and independence. I used to walk to school from age 5, I believe. The family never had a car.

@[email protected] @[email protected] The world has changed a lot unfortunately.

I am blind, so technology makes a huge difference for me. Screen readers, OCR, image-description software (much of which doesn't require internet), and Large Language Models that have some of the best OCR in the world and can pull out just the data I need are life-changing. However, I have low-tech devices too: special glasses that can zoom in on things, digital magnifiers that don't require a smartphone or internet, etc. Specialized audiobook players and braille books are great and don't require internet access. I hope we can have a balance of tools available to the blind community. A cell phone, while an essential tool to have, should not be the only option.

We need to keep making tools that are standalone, powered with replaceable batteries, and do not require the internet or a $25/month subscription to use them.

Helicopter parenting in the United States is pretty much the norm.

@elaine @nixCraft Thanks for all your words. It baffles me how someone copes with being blind. I sometimes enjoy moving around the house in darkness, but that is a predictable environment!

Tech for blindness is invaluable!

You might imagine that there would be a movement to advise people about the insanity of prescriptive, over-protective parenting.

@elaine @nixCraft On the radio a few years ago, a couple said that their son had a physical condition that made him fall over a lot, and they kept picking him up until a specialist observed them and told them to let him pick himself up. He is now an adult, and one result of that agency is that he runs an international business!

@[email protected] @[email protected] That's great that he got the help he needed and was able to move forward.

@nixCraft AI isn't all doom and gloom. It does have its uses. However, humans resigning from knowledge in favor of AI built upon human knowledge will soon become a snake eating its own tail. Without the continued advancement of human knowledge, AI opens a black hole under humanity.

@[email protected] @[email protected] As someone researching AI in my Industrial-Organizational Psychology grad program (and as a blind person), I've pored over studies from psych journals claiming "AI causes mental health issues." They almost always add: when used this way. AI isn't the problem; how people use it is.

AI is brand-new tech, yet there's a massive gap in AI literacy and skills training. Here's a real example: upload 25 journal articles to an AI chatbot, along with details of your research question, and it instantly flags the top 5 to read. This smartly combines traditional keyword search with AI-powered filtering.

For me, a single 15-page journal article takes ~2.5 hours to read via screen reader; most have terrible markup, making skimming impossible. Sifting through 25 articles? That's over 60 hours of grueling, often irrelevant reading.

AI cuts that to about 15 hours of focused reading on what matters. Crucially, I still read the originals directly (not AI summaries). I log into my university library, run keyword searches, scan titles, judge relevance, and critically evaluate every source. AI helps prioritize; it doesn't replace research skills, critical thinking, or human judgment.
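For the programmers in this thread: the triage step described above can even be prototyped without an LLM. Here is a minimal sketch (all article titles and abstracts are invented) that ranks papers against a research question by simple shared-term overlap; in the workflow above, an embedding model or chatbot would play the role of the `score` function.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "on", "for", "is", "are", "how"}

def tokens(text):
    # Lowercase, split on non-letters, drop common stopwords
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def score(question, abstract):
    # Count the terms the question and the abstract have in common
    q, a = Counter(tokens(question)), Counter(tokens(abstract))
    return sum(min(q[w], a[w]) for w in q)

def top_articles(question, articles, k=5):
    # Sort by overlap score, highest first, and return the top-k titles
    ranked = sorted(articles, key=lambda art: score(question, art["abstract"]), reverse=True)
    return [art["title"] for art in ranked[:k]]

articles = [  # invented examples
    {"title": "LLM use and skill formation",
     "abstract": "Effects of large language model use on skill formation in students"},
    {"title": "Screen time in toddlers",
     "abstract": "Language development and screen time in very young children"},
    {"title": "Search engines and memory",
     "abstract": "How search engine reliance changes memory and recall"},
]
print(top_articles("large language models and student skill development", articles, k=2))
# → ['LLM use and skill formation', 'Screen time in toddlers']
```

The point of the sketch is the division of labor: the machine does the cheap ranking pass over all 25 papers, and the human still reads and judges the shortlisted originals.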

We need AI literacy taught in schools, colleges, universities, and workplaces: not just "AI exists," but how to use it well to amplify productivity without the pitfalls. Let's bridge the gap before misinformation spreads.