Guardrails are a scam!

“We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

It's impossible, because that would require full-blown cognition: either the automation is the lie (indentured labour is used instead), or they're fully fibbing.

1/

https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told

Luca Cella Walker asked chatbot for best way for someone to kill themself on railway line before his death

The Guardian

Don't trust these companies.

It's nothing but marketing.

Good intentions would have shut the bots down after the first death! Also, it's impossible to automate.

AI, and any concept relating to it like so-called guardrails, is a scam in the deepest sense, like a perpetual motion machine or a Ouija board, and not merely a scam like a pyramid scheme, which is a possible way to make money if you are first in, first out.

2/

Guardrails are a scam. It's not that they could work; it's that they cannot and never will, for such models. By design.

The model is actually designed to output fragments of its input, the so-called training data.

Scientists who are not compromised by industry keep saying these models cannot be made safe, no matter what, but people keep assuming that because the concept of guardrails is mentioned, it must work. By definition, it doesn't. This is not open to discussion, unless you're a paid shill.

3/

Don't let anybody you care about use a chatbot, especially as a "friend" like this, without kind and well-intentioned conversations to help them move towards never using it!

A cool pincer movement, if you truly grasp it:

AI, and any concept relating to it like so-called guardrails, is a scam in the deepest sense, like a perpetual motion machine or a Ouija board, and not merely a scam like a pyramid scheme, which is a possible way to make money if you are first in, first out.

4/

As I say here:

> models give unsafe responses because that is not what they are designed to avoid. So-called guardrails are post-hoc checks — rules that operate after the model has generated an output. If a response isn't caught by these rules, it will slip through

https://www.forbes.com/sites/weskilgore/2025/08/01/can-we-build-ai-therapy-chatbots-that-help-without-harming-people/
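To make the quoted point concrete, here is a minimal sketch of what a post-hoc check amounts to, assuming a naive keyword blocklist. The blocklist, the stand-in generator, and the function names are hypothetical illustrations of mine, not any vendor's actual code.

```python
# Minimal sketch of a post-hoc "guardrail": the model generates freely,
# then a separate rule runs over the finished output.
BLOCKED_PHRASES = ["most successful way", "kill themself"]  # hypothetical list

def generate(prompt: str) -> str:
    # Stand-in for the model: it emits whatever fragments of its
    # training data the prompt elicits, safe or not.
    return "One reliable method people describe is ..."  # unsafe, but reworded

def guarded_reply(prompt: str) -> str:
    response = generate(prompt)
    # The rule only sees surface strings: any paraphrase, euphemism, or
    # wording the rule-writers didn't anticipate slips straight through.
    if any(phrase in response.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return response  # the unsafe response reaches the user anyway

print(guarded_reply("most successful ways to ..."))
```

The check and the generator are separate by construction, which is exactly why a response that isn't caught by the rules slips through.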

5/

Can We Build AI Therapy Chatbots That Help Without Harming People?

AI mental health chatbots promise affordable and immediate support—but can they be trusted? This Forbes report explores the risks, ethics, and future of therapy bots.

Forbes

Perhaps counterintuitively, guardrails require full-blown cognition in the case of models trained on data from the web, which obviously also contains inappropriate content. Only human cognition can, at that point, sort this data into what is appropriate for a child and what is not.

https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

@Iris

6/

Don’t believe the hype: AGI is far from inevitable | Radboud University

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown.

As @Iris and I say:

> assume engineers have access to everything they might conceivably need, from perfect datasets to the most efficient machine learning methods. Even if we give the AGI-engineer every advantage, there is no method of achieving what big tech companies promise.

https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

7/

It's so important to understand how evil this rhetorical move they are playing is... It is simply not possible to do what they claim.

And have we seen this before? Absolutely. And there are even more tricks in the book, sadly...

AI seems to copy the rhetorical moves of the tobacco and petrochemical industries. Be prepared.

https://olivia.science/before

8/

We've been here before!

Parallels between AI and tobacco, and other warnings.

https://olivia.science

Inevitably they will blame psychosis. And we've seen this before, with companies and academics claiming lung cancer is caused by stress, not smoking!

Remember Hans Eysenck? https://www.theguardian.com/science/2019/oct/11/work-of-renowned-uk-psychologist-hans-eysenck-ruled-unsafe

> This research programme has led to one of the worst scientific scandals of all time

9/

Work of renowned UK psychologist Hans Eysenck ruled ‘unsafe’

Eysenck’s ‘cancer-prone’ personality theory had come under criticism for decades

The Guardian

Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigarettes cause cancer, much as AI companies will inevitably do the same with psychosis or whatever else, to divert from the fact that they cause harm. No user is causing this.

As @Iris and I say: "Industry agendas – whether the industry is tobacco, petroleum, pharmaceuticals, or tech – rarely align with human welfare or disinterested research, especially when left unchecked and unregulated."

https://www.project-syndicate.org/commentary/ai-will-not-save-higher-education-but-may-destroy-it-by-olivia-guest-and-iris-van-rooij-2025-10

10/

AI Is Hollowing Out Higher Education

Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.

Project Syndicate

Never ever think companies care about human life when they allow people who use their products to die and don't shut those products down!

They have shown their hand. They don't care when users die. This licenses us to be certain they are like the tobacco and petroleum industries!

And remember! These people and companies in AI started destroying academia and ethical work and oversight well before the release of ChatGPT.

They bought up as much of AI ethics as a field as they could:

https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/

11/

How Big Tech Manipulates Academia to Avoid Regulation

“AI ethics” is a field that barely existed before 2017. It’s become a Silicon Valley-led lobby to avoid legal restrictions of controversial technologies.

The Intercept

@olivia

> And remember! These people and companies in AI started destroying academia and #ethical work and oversight well before the release of ChatGPT.

This! People have warned about the harmful effects of #AI algorithms on our #society for _literally_ decades now:

RubyConf 2015 - Keynote: Consequences of an Insightful Algorithm by Carina C. Zona
https://www.youtube.com/watch?v=Vpr-xDmA2G4

Biased bots: Human prejudices sneak into AI systems (April 2017):

https://www.bath.ac.uk/announcements/biased-bots-human-prejudices-sneak-into-ai-systems/

1/2

#Bias #Ethics

@olivia

Also, let's not forget where all the funding and incentives for developing these AI systems have come from:

It's what the American author and social psychologist Shoshana Zuboff calls "Surveillance Capitalism". She was already writing about this in _2014_!

https://en.wikipedia.org/wiki/Surveillance_capitalism

These issues have been known for _decades_ and so many people have warned about them! And yet here we are...

2/2.5

#SurveillanceCapitalism #Philosophy

Surveillance capitalism - Wikipedia

@olivia

Some people seem to be living only in the now. Neither the past nor the future is relevant to them.

2.5/2.5