Conveniently, two of my posts today dovetail nicely. This post ...

https://mastodon.nzoss.nz/@strypey/115660611781101579

... linked to a Psychology Today article about AI psychosis, which says;

"This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit, describing cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals.*

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

(1/2)

Strypey (@strypey@mastodon.nzoss.nz)

If you don't believe that ongoing interaction with proprietary digital systems has any power to shift people's thinking in unhealthy ways, you're not paying attention; https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis Which would be very unusual for Cory, so I'm puzzled as to why he keeps pushing this talking point. Maybe it's his case for why people need to read Chokepoint Capitalism or Enshittification if they've already read Zuboff's book? They do, but there are better ways to make that argument. (2/2) #AI #MOLE


I also posted today about the total failure of drug prohibition;

https://mastodon.nzoss.nz/@strypey/115658934648378795

A risk of psychosis is one of the few health risks ascribed to cannabis use that has *some* evidence behind it, although prohibitionists making claims about it vastly overstate their case;

Shock! Horror! Cannabis causes psychosis!?!

(It doesn't, but it can trigger preexisting risks in a small number of people.)

(2/3)

Strypey (@strypey@mastodon.nzoss.nz)

"For gangs, the Misuse of Drugs Act (1975) has been one of the best recruiting tools they’ve ever had. Criminalisation of a raft of illicit drugs not only provides them with a wildly lucrative black market. ... Once users are prosecuted and get sent to jail, gangs offer not only a vital form of self-preservation on the inside, but are (often) the only willing employer waiting on the outside." #GordonCampbell, 2025 https://www.scoop.co.nz/stories/HL2511/S00036/on-why-we-should-de-criminalise-personal-drug-use.htm #drugs #DrugLawReform #decriminalisation


Anyway, I'm looking forward to seeing all the hardened prohibitionists who attack drug law reform efforts - only because they're concerned about mental health, of course - these knee-jerk conservatives, publicans, newspaper editors and alcohol industry lobbyists, all lobbying to ban generative models;

Shock! Horror! AI causes psychosis!?!

They're not going to ignore the same risk when it comes from a different source, are they? : P

(3/3)

"Another case involved a man with a history of a psychotic disorder falling in love with an AI chatbot and then seeking revenge because he believed the AI entity was killed by OpenAI. This led to an encounter with the police in which he was shot and killed."

#MarlynnWei, M.D., J.D., 2025

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Holy spitballs!

Prohibitionists;

Shock! Horror! ChatGPT use can cause DEATH!?!

#AI #MOLE #AIPsychosis

The Emerging Problem of "AI Psychosis" (Psychology Today)

People are being pushed into psychosis and in some cases even dying because of generative models. Given that, I think there may be a case for a moratorium on making them available to the general public.

(1/?)

#PolicyNZ #TechRegulation

This could be a more principled replacement for the B416 campaign to ban "social media" for under-16s, because a lot of the platform effects that campaign's supporters are concerned about are caused by the use of generative models to determine what platforms serve up to people, and when.

But to turn it into public policy, we'd need to come up with a legally sound definition of "generative model" that includes non-chatbot uses, but excludes other forms of AI.

(2/?)

We'd also need clear criteria for when a platform can be excluded from the moratorium, which would require that either;

a) they commit to algorithmic transparency, so public regulators can confirm they're either not using generative models, or at least not exposing the public to them; or

b) they can prove the risks of their generative models have been researched and mitigated, with any and all safety studies registered before they commence, and compulsory publication of results, so they can't cherry-pick.

(3/3)

@strypey I would outlaw ChatGPT and legalize cannabis.

The difference is: ChatGPT has ZERO positive impacts to outweigh the risks.