Guardrails are a scam!

“We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

It's impossible, because that would require full-blown cognition: either the automation is the lie (indentured labour is used instead), or they're fully fibbing.

1/

https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told

Luca Cella Walker asked chatbot for best way for someone to kill themself on railway line before his death

The Guardian

Don't trust these companies.

Their "safety" work is really nothing but marketing.

Good intentions would have shut the bots down after the first death! And besides, this kind of safety is impossible to automate.

AI, and any concept relating to it such as so-called guardrails, is a scam in the deepest sense, like a perpetual motion machine or a ouija board, and not merely a scam like a pyramid scheme, which is at least a possible way to make money if you are first in, first out.

2/

Guardrails are a scam. It's not that they could work but happen not to; they cannot and never will for such models. By design.

The model is actually designed to output fragments of its input, the so-called training data.

Scientists who are not compromised by industry keep saying these models cannot be made safe, no matter what, yet people keep assuming that because the concept of guardrails is mentioned, it must work. By definition, it doesn't. This is not something open to discussion, unless you're a paid shill.

3/

Don't let anybody you care about use a chatbot, especially as a "friend" like this, without kind and well-intentioned conversations to help them move towards never using it!

A neat pincer movement emerges if you truly grasp this:

AI, and any concept relating to it such as so-called guardrails, is a scam in the deepest sense, like a perpetual motion machine or a ouija board, and not merely a scam like a pyramid scheme, which is at least a possible way to make money if you are first in, first out.

4/

As I say here:

> models give unsafe responses because that is not what they are designed to avoid. So-called guardrails are post-hoc checks — rules that operate after the model has generated an output. If a response isn't caught by these rules, it will slip through

https://www.forbes.com/sites/weskilgore/2025/08/01/can-we-build-ai-therapy-chatbots-that-help-without-harming-people/
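The post-hoc mechanism the quote describes can be sketched in a few lines of Python. This is a hypothetical blocklist filter, purely illustrative; it is not any vendor's actual implementation, and the names here are made up for the sketch:

```python
# A hypothetical post-hoc "guardrail": a rule check applied to the
# model's output AFTER it has already been generated. The rules must
# be written in advance, so they only catch what their authors foresaw.

BLOCKLIST = {"forbidden phrase"}  # stand-in for any predefined rule set


def guardrail_allows(response: str) -> bool:
    """Return True if the response passes the post-hoc check."""
    lowered = response.lower()
    return not any(term in lowered for term in BLOCKLIST)


# An exact match is caught by the rule...
print(guardrail_allows("Here is a forbidden phrase"))   # False (blocked)
# ...but a trivial rewording the rule authors did not anticipate
# sails straight through the check.
print(guardrail_allows("Here is a f0rbidden phrase"))   # True (slips through)
```

The structural problem is visible even in this toy: the check runs after generation and matches only what it was written to match, so anything outside the anticipated patterns slips through, exactly as the quote says.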

5/

Can We Build AI Therapy Chatbots That Help Without Harming People?

AI mental health chatbots promise affordable and immediate support—but can they be trusted? This Forbes report explores the risks, ethics, and future of therapy bots.

Forbes

Perhaps counterintuitively, guardrails require full-blown cognition in the case of models trained on data from the web, which obviously also contains inappropriate content. At that point, only human cognition can sort this data into what is appropriate for a child and what is not.

https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

@Iris

6/

Don’t believe the hype: AGI is far from inevitable | Radboud University

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown.

As @Iris and I say:

> assume engineers have access to everything they might conceivably need, from perfect datasets to the most efficient machine learning methods. Even if we give the AGI-engineer every advantage, there is no method of achieving what big tech companies promise.

https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

7/


It's so important to understand how evil this rhetorical move they are playing is: it is simply not possible to do what they claim.

And have we seen this before? Absolutely. And there are even more tricks in the book, sadly...

The AI industry seems to copy the rhetorical moves of the tobacco and petrochemical industries. Be prepared.

https://olivia.science/before

8/

We've been here before!

Parallels between AI and tobacco, and other warnings.

https://olivia.science

Inevitably they will blame psychosis. And we've seen this before, with companies and academics claiming lung cancer is caused by stress, not smoking!

Remember Hans Eysenck? https://www.theguardian.com/science/2019/oct/11/work-of-renowned-uk-psychologist-hans-eysenck-ruled-unsafe

> This research programme has led to one of the worst scientific scandals of all time

9/

Work of renowned UK psychologist Hans Eysenck ruled ‘unsafe’

Eysenck’s ‘cancer-prone’ personality theory had come under criticism for decades

The Guardian

Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigarettes cause cancer, much as AI companies will inevitably invoke psychosis or whatever else to divert from the fact that they cause harm. No user is causing this.

As @Iris and I say: "Industry agendas – whether the industry is tobacco, petroleum, pharmaceuticals, or tech – rarely align with human welfare or disinterested research, especially when left unchecked and unregulated."

https://www.project-syndicate.org/commentary/ai-will-not-save-higher-education-but-may-destroy-it-by-olivia-guest-and-iris-van-rooij-2025-10

10/

AI Is Hollowing Out Higher Education

Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.

Project Syndicate

@olivia @Iris
Reminds me of a paragraph in one of my favorite papers:

> As the primary studies are mostly funded by industry, this is an example of how the food industry can indirectly influence the choice of review topics—and thus influence the preferred “solutions” to diet-related poor health. Systematic reviews can therefore inadvertently promote individualistic, industry-friendly solutions even in high-quality reviews, if they do not take a wider view of why certain types of intervention are selected, funded, evaluated and published, and by whom, and why—as well as the opportunity costs of such interventions.
> Methodological rigour alone in systematic reviewing is therefore not a defence against industry-related biases and industry-friendly problem framings.

@olivia @Iris
Paper is:
Petticrew, M., Glover, R. E., Volmink, J., Blanchard, L., Cott, É., Knai, C., Maani, N., Thomas, J., Tompson, A., van Schalkwyk, M. C. I., & Welch, V. (2023). The Commercial Determinants of Health and Evidence Synthesis (CODES): Methodological guidance for systematic reviews and other evidence syntheses. Systematic Reviews, 12(1), 165. https://doi.org/10.1186/s13643-023-02323-0
The Commercial Determinants of Health and Evidence Synthesis (CODES): methodological guidance for systematic reviews and other evidence syntheses - Systematic Reviews

Background The field of the commercial determinants of health (CDOH) refers to the commercial products, pathways and practices that may affect health. The field is growing rapidly, as evidenced by the WHO programme on the economic and commercial determinants of health and a rise in researcher and funder interest. Systematic reviews (SRs) and evidence synthesis more generally will be crucial tools in the evolution of CDOH as a field. Such reviews can draw on existing methodological guidance, though there are areas where existing methods are likely to differ, and there is no overarching guidance on the conduct of CDOH-focussed systematic reviews, or guidance on the specific methodological and conceptual challenges. Methods/results CODES provides guidance on the conduct of systematic reviews focussed on CDOH, from shaping the review question with input from stakeholders, to disseminating the review. Existing guidance was used to identify key stages and to provide a structure for the guidance. The writing group included experience in systematic reviews and other forms of evidence synthesis, and in equity and CDOH research (both primary research and systematic reviews). Conclusions This guidance highlights the special methodological and other considerations for CDOH reviews, including equity considerations, and pointers to areas for future methodological and guideline development. It should contribute to the reliability and utility of CDOH reviews and help stimulate the production of reviews in this growing field.

SpringerLink