People keep training machines on human responses and behavior and then they’re shocked and fooled when they react to input exactly like humans superficially do.

@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”

You know who else responds defensively to said “attacks”?

AIs in sci-fi books.

It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.

@b4ux1t3 I’m deeply concerned AI people are falling for this
@hacks4pancakes @b4ux1t3 I'm even more concerned about the general population falling for this. I know there's no conclusive scientific evidence on "AI psychosis" yet, but the anecdotal patterns I'm seeing seem to point towards the chatbot-using population going stark raving mad at a terrifying pace.
@pmdj @hacks4pancakes @b4ux1t3 sure there is. We’ve been studying religions for hundreds of years.
@Colman @hacks4pancakes @b4ux1t3 I'm pretty sure these things hook into vulnerabilities of our brains in ways that go beyond religions or even cults. (I've seen comparisons with gambling addiction, which seem apt. The speed at which people seem to go off the rails when exposed to these things is particularly alarming to me.)
@pmdj @hacks4pancakes @b4ux1t3 have you read the back stories of the people pushing them? The Tescreal cultists?
@Colman Yes, I'm aware of that. It's gone far beyond that core group pushing it though. There's obviously also all those with deep financial interests pushing it. I don't get the impression they're particularly interested in the Tescreal-type ideology. But the even more worrying part (to me) is the seemingly organic and spontaneous advocacy among those in the general population who have neither of those motivations and just seem to be infected by brainworms inherent in the tech.

@pmdj @Colman

Literally the closest analogue is Purdue Pharma getting FDA approval for OxyContin slow-release. All they did was take a controlled substance, divide it into smaller bits, and coat it with a slowly dissolving material so it wouldn't drop into your nervous system all at once. And the FDA approved it because of Purdue's claims that the "slow release" tech made the opioid non-addictive. That's semantics, folks...

And big data and big LLMs are doing the same thing now.

@carpetbomberz @Colman Agreed. Or like putting lead in petrol. Except it could probably be argued they weren't as aware of the harms of lead at the time as I'm sure the "AI" companies are aware of the health hazards of their chatbots.