No, You Shouldn't Let Your Kids Use ChatGPT. A thread. 🧵
1/16
We pretend that because the interface is clean and there’s no nicotine, no violence, no nudity, it’s safe. It looks like a homework helper. A science fair assistant. A miracle of modern education.
That’s just marketing.
2/16
You wouldn’t let your child hang out unsupervised with a stranger - especially one who lies confidently, speaks with artificial authority, and occasionally invents facts.
3/16
But that’s what we’re doing when we let them talk to generative AI with no guardrails and no context. It looks smart. It feels friendly. It sounds right. That’s exactly what makes it dangerous.
4/16
We underestimate how deeply plastic the young mind is.
Kids don’t use tools; they internalize them.
5/16
Kids learn how to think by watching thinking happen. When a language model trains, it doesn’t learn truth, it learns patterns. When a kid trains on a language model, the same thing happens. They start seeing speech as performance.
6/16
They start believing fluency equals wisdom. They mimic the mimicry.
7/16
We don’t give a five-year-old a credit card and say, “Good luck budgeting.” We don’t drop a 10-year-old into Times Square at midnight and call it a field trip.
8/16
We create buffers. We wait until they’ve got context, maturity, the ability to separate signal from noise.
And even then, we supervise.
9/16
ChatGPT and its kin are powerful - and fundamentally misaligned with how kids learn to trust, reason, and discern.
10/16
These models shape the questions you ask next. They don’t reflect your thinking. They nudge it. Relentlessly.
11/16
I'm not trying to create a panic. This is a boundary. If you wouldn’t let your kid join Twitter, if you wouldn’t let them Google health symptoms unsupervised, don’t let them outsource cognition to a system you don’t understand.
12/16
Curiosity needs friction. Learning needs surprise. Wisdom needs mistakes. Models don’t offer that. They offer something faster, smoother, and emptier.
13/16
We can teach kids to use these tools with judgment, with context, with skepticism. But that starts with a pause. With an adult in the room. With a conversation about what these models are and what they’re not. It starts with treating intelligence as more than output.
14/16
Once you flatten knowledge into prediction, once you replace the actual road of learning with a shortcut that feels smarter than you are, you’ve done more harm than you know.
15/16
You’ve reshaped the map your kid is using to navigate the world.
You’ve said: here’s something that sounds like thinking.
Something easier than thinking.
Good luck un-ringing that bell.
16/16
Take the AI hype in context:
1. Joni Ernst's "all going to die anyway" nihilism.
2. Elon Musk's "empathy is for the weak" narratives.
You're habituating your child to treat people the way they treat an AI device.
Frank Herbert -- Dune
"The devices themselves condition the users to employ each other the way they employ machines."
@Daojoan @Npars01
Do NOT let your children anywhere near LLMs (so-called "AI")...
Here are a couple of reasons:
https://mastodon.social/@ekis/114613560446815567
and even more disturbing:
https://mastodon.social/@ekis/114613739460407851
@Daojoan @Npars01
And if that's not good enough, how about this...?
People have literally started to worship it. #doomed
"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
Self-styled prophets are claiming they have 'awakened' chatbots and accessed the secrets of the universe through ChatGPT"
https://archive.ph/eeesj#selection-1485.0-1485.61
AI is being funded by some of the worst people on the planet for nefarious purposes.
People who believe a fair wage is bad.
https://www.theguardian.com/us-news/2025/apr/07/trump-union-workers-rights
Billionaires seeking international Orwellian state surveillance.
https://www.nytimes.com/2025/05/30/technology/trump-palantir-data-americans.html
https://theintercept.com/2017/02/22/how-peter-thiels-palantir-helped-the-nsa-spy-on-the-whole-world/
https://www.axios.com/2018/04/20/peter-thiel-palantir-software-tracks-americans-silicon-valley
Fossil fuel interests willing to turn the planet into a cinder.
https://www.desmog.com/2025/04/22/ai-energy-demand-can-keep-fossil-fuels-alive-tech-backers-promise-worlds-two-biggest-oil-producers/
Tech moguls hyping up the tools of anti-democracy & war.
Elon: a sociopath holding forth on the nature of empathy is like a vegan talking about steak recipes.
Ernst: “we’re all gonna get fukt, but at least I’m getting screwed in the pleasant way and get paid for it”
@dalias @ErikJonker @Daojoan I wish you were right, but that's not what the data shows. Check out https://app.thestorygraph.com/books/0ae97ada-0a45-478a-a11a-ec8d01d688d7 and maybe it will change your mind.
To draw a parallel - do you think it's ok to market and sell cigarettes to kids? Do they have enough insight to know that it's cringe?
@serg @ErikJonker @Daojoan No, that kind of propaganda will not change my mind.
This generation is anxious because the olds gave them a fundamentally fucked up world and hoarded the resources they'd need to survive it. Not because they had access to information to learn about how fucked up it is.
@serg @ErikJonker @Daojoan Regarding stuff that's actually harmful, like Facebook or AI, repeatedly the perverse "think of the children" response is to ban children from participating in life, rather than to ban the harmful thing, so that abusers can keep exploiting adults.
Ban it for everyone.
We are all of us a product of our environment. I cannot imagine what kind of dysfunction might result from a child growing up thinking their ChatGPT sessions are legitimate sources for role modelling.
It's like what Fox News did to our grandparents.
@Daojoan
The men in charge of these LLM megaprojects are Ivy League educated, lifelong readers of the science fiction of the last century - they read the stories predicting exactly what we see happening, and exactly what is being explained here.
What if that *is* the point? If alongside profit for themselves, the reason LLMs must replace anything they can as fast as they can is to hasten the atrophy of critical thinking, replacing it with a dependence on - who else but - big tech oligarchs.
They also read #yarvin.
LLMs will some day be remembered, if they are remembered at all, as the lead paint/leaded fuel/DDT of computing.