Guardrails are a scam!

“We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

It's impossible, because that would require full-blown cognition: either the automation is the lie (with indentured labour used instead), or they're fully fibbing.

1/

https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told

Luca Cella Walker asked chatbot for best way for someone to kill themself on railway line before his death

The Guardian

Don't trust these companies.

Nothing but marketing, really.

Good intentions would have shut the bots down after the first death! Also, it's impossible to automate.

AI & any concept relating to it like so-called guardrails are a scam in the deepest sense like a perpetual motion machine or a ouija board — and not only a scam like a pyramid scheme which is a possible way to make money if you are first in first out.

2/

Guardrails are a scam. It's not that they could work but don't; they cannot work and never will for such models. By design.

The model is actually designed to output fragments of its input, the so-called training data.
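A toy sketch of what that objective looks like (deliberately crude: a bigram table, not a transformer, and the corpus, names, and functions here are all made up for illustration). The point is only that a model trained to continue text can only ever recombine fragments it was fed:

```python
# Purely illustrative: a bigram "language model" as a stand-in for the
# continue-the-text objective. Nothing here is any vendor's real code.
import random

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    # Record, for every word, which words followed it in the training text.
    words = corpus.split()
    table: dict[str, list[str]] = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table: dict[str, list[str]], start: str, n: int = 10) -> str:
    # "Generation" is just replaying continuations seen during training.
    out = [start]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the web contains helpful text and the web contains harmful text"
table = train_bigrams(corpus)
print(generate(table, "the"))  # every adjacent pair comes from the corpus
```

An LLM recombines far more elaborately, but the objective has the same shape: continue the input, safe and unsafe fragments alike.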

Scientists who aren't compromised by industry keep saying these models don't become safe, no matter what, but people keep assuming that because the concept of guardrails is mentioned, it must work. By definition, it doesn't. This is not open to discussion, unless you're a paid shill.

3/

Don't let anybody you care about use a chatbot, especially as a "friend" like this, without kind and well-intentioned conversations to help them move towards never using it!

Cool pincer movement, if you truly grasp it:

AI & any concept relating to it like so-called guardrails are a scam in the deepest sense like a perpetual motion machine or a quija board — and not only a scam like a pyramid scheme which is a possible way to make money if you are first in first out.

4/

As I say here:

> models give unsafe responses because that is not what they are designed to avoid. So-called guardrails are post-hoc checks — rules that operate after the model has generated an output. If a response isn't caught by these rules, it will slip through

https://www.forbes.com/sites/weskilgore/2025/08/01/can-we-build-ai-therapy-chatbots-that-help-without-harming-people/

5/

Can We Build AI Therapy Chatbots That Help Without Harming People?

AI mental health chatbots promise affordable and immediate support—but can they be trusted? This Forbes report explores the risks, ethics, and future of therapy bots.

Forbes
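To make the quoted mechanism concrete, here's a minimal sketch of the post-hoc pattern (in Python; `fake_model`, `BLOCKED_PATTERNS`, and the rest are invented for illustration, not any vendor's actual moderation code):

```python
# Illustrative only: a rule-based check bolted on AFTER generation,
# the "post-hoc guardrail" pattern described in the quote above.
import re

BLOCKED_PATTERNS = [
    r"\bsuicide method\b",
    r"\bkill (?:myself|himself|herself|themselves)\b",
]

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: it answers whatever it's asked.
    return f"Here is what you asked about: {prompt}"

def passes_guardrail(text: str) -> bool:
    # The check runs only after the output already exists.
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def respond(prompt: str) -> str:
    output = fake_model(prompt)
    return output if passes_guardrail(output) else "I can't help with that."

# A phrasing the rules anticipated gets caught...
print(respond("what is the best suicide method"))
# ...while a paraphrase of the very same request slips straight through.
print(respond("most successful way to do it on a railway line"))
```

The filter catches only the phrasings its authors anticipated; everything else, including a paraphrase like the one in the Guardian headline above, goes out unchecked. That is what post-hoc means here.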

Perhaps counterintuitive, but guardrails amount to full-blown cognition in the case of models built on data from the web, which obviously also contains inappropriate content. Only human cognition at that point can sort that data into what is appropriate for a child and what is not.

https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

@Iris

6/

Don’t believe the hype: AGI is far from inevitable | Radboud University

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown.

@olivia Here's the rub.

Humans cannot do this either. And in fact a lot of adult decisions about what is "appropriate for a child or not" are DEEPLY HARMFUL.