No, You Shouldn't Let Your Kids Use ChatGPT. A thread. đź§µ
1/16
We pretend that because the interface is clean and there’s no nicotine, no violence, no nudity, it’s safe. It looks like a homework helper. A science fair assistant. A miracle of modern education.
That’s just marketing.
2/16
You wouldn’t let your child hang out unsupervised with a stranger - especially one who lies confidently, speaks with artificial authority, and occasionally invents facts.
3/16
But that’s what we’re doing when we let them talk to generative AI with no guardrails and no context. It looks smart. It feels friendly. It sounds right. That’s exactly what makes it dangerous.
4/16
We underestimate how deeply plastic the young mind is.
Kids don’t use tools; they internalize them.
5/16
Kids learn how to think by watching thinking happen. When you train a language model, it doesn’t learn truth; it learns patterns. When a kid trains on a language model, the same thing happens. They start seeing speech as performance.
6/16
They start believing fluency equals wisdom. They mimic the mimicry.
7/16
We don’t give a five-year-old a credit card and say, “Good luck budgeting.” We don’t drop a 10-year-old into Times Square at midnight and call it a field trip.
8/16
We create buffers. We wait until they’ve got context, maturity, the ability to separate signal from noise.
And even then, we supervise.
9/16
ChatGPT and tools like it are powerful - and fundamentally misaligned with how kids learn to trust, reason, and discern.
10/16
These models shape the questions you ask next. They don’t reflect your thinking. They nudge it. Relentlessly.
11/16
I'm not trying to create a panic. This is a boundary. If you wouldn’t let your kid join Twitter, if you wouldn’t let them Google health symptoms unsupervised, don’t let them outsource cognition to a system you don’t understand.
12/16
Curiosity needs friction. Learning needs surprise. Wisdom needs mistakes. Models don’t offer that. They offer something faster, smoother, and emptier.
13/16
We can teach kids to use these tools with judgment, with context, with skepticism. But that starts with a pause. With an adult in the room. With a conversation about what these models are and what they’re not. It starts with treating intelligence as more than output.
14/16
Once you flatten knowledge into prediction, once you replace the actual road of learning with a shortcut that feels smarter than you are, you’ve done more harm than you know.
15/16
You’ve reshaped the map your kid is using to navigate the world.
You’ve said: here’s something that sounds like thinking.
Something easier than thinking.
Good luck un-ringing that bell.
16/16
Take the AI hype in context:
1. Joni Ernst's "all going to die anyway" nihilism.
2. Elon Musk's "empathy is for the weak" narrative.
You're habituating your child to treat people the way they treat an AI device.
As Frank Herbert put it in Dune:
"The devices themselves condition the users to employ each other the way they employ machines."
Elon: a sociopath pontificating on the nature of empathy is like a vegan talking about steak recipes.
Ernst: “we’re all gonna get fukt, but at least I’m getting screwed in the pleasant way and get paid for it”