People keep training machines on human responses and behavior and then they’re shocked and fooled when they react to input exactly like humans superficially do.

@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”

You know who else responds defensively to said “attacks”?

AIs in sci-fi books.

It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.

@b4ux1t3 I’m deeply concerned AI people are falling for this
@hacks4pancakes @b4ux1t3 I'm even more concerned about the general population falling for this. I know there's no conclusive scientific evidence on "AI psychosis" yet, but the anecdotal patterns I'm seeing seem to point towards the chatbot-using population going stark raving mad at a terrifying pace.

@pmdj @hacks4pancakes @b4ux1t3

Guess I’m gonna add LLMs to my Great Filter candidates list.

@shane @pmdj @hacks4pancakes “they were filtered out by ai!”

“No, their filter is still nuclear weapons. They didn’t actually make AI, they just thought they did, and that led to the catastrophe. It’s funny because no one was at fault except their own idiocy. The LLMs didn’t even launch a nuke, they just let it design the system that did.”