Guardrails are a scam!

“We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

It's impossible, because that would require full-blown cognition: either the automation is the lie (indentured labour is used instead) or they're fully fibbing.

1/

https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told

Luca Cella Walker asked chatbot for best way for someone to kill themself on railway line before his death

The Guardian

Don't trust these companies.

It's nothing but marketing.

Good intentions would have shut the bots down after the first death! Besides, it's impossible to automate.

AI, and every concept attached to it like so-called guardrails, is a scam in the deepest sense: like a perpetual motion machine or a Ouija board. Not merely a scam like a pyramid scheme, which is at least a possible way to make money if you're first in, first out.

2/

Guardrails are a scam. It's not that they don't work yet; they cannot and never will for such models. By design.

The model is designed to output fragments of its input, the so-called training data.
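The "fragments" point can be illustrated with a toy order-1 Markov chain (a sketch only, not the actual transformer architecture; the corpus string is made up): everything such a model emits is, by construction, a recombination of pieces of its training input.

```python
# Toy sketch: a model that can only ever emit fragments of what it was
# trained on. Corpus and parameters are invented for illustration.
import random
from collections import defaultdict

def train(corpus: str):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start: str, n: int, seed: int = 0) -> str:
    """Emit up to n more words by sampling a recorded successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model outputs fragments of the input the model repeats the input"
model = train(corpus)
print(generate(model, "the", 5))
```

Every word the generator produces was seen in the training text; it cannot say anything else, only reshuffle.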

Scientists not compromised by industry keep saying these models cannot be made safe, no matter what, but people keep assuming that because the concept of guardrails is mentioned, it must work. By definition, it doesn't. This is not open to discussion, unless you're a paid shill.

3/

@olivia The fragments are about 3/4 of a word, for what it's worth.