Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges. Google said in response that "unfortunately AI models are not perfect."

https://lemmy.world/post/43879972

The fact that AI is “not perfect” is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we’d expect to know better, are making monumental decisions based on AI that isn’t perfect, and routinely “hallucinates”. We all know this.

Every time I think I’ve seen the lowest depths of mass stupidity, humanity goes lower.

Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.

If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.

The sycophancy comes from the tuning: to make a chatbot (trained on Reddit posts, etc.) respond helpfully (instead of "well ackshually…") and in a prosocial manner, they've skewed it. What we're interacting with is a very small subset of the personalities the base model can exhibit, because a lot of them are extremely nasty or just unhelpful. To push the chance of those popping up down to an acceptable level, they've had to skew the weights so far that the model ends up like this.

There’s no easy way around that, afaik.
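To make the "skewing" point concrete: the standard KL-regularized RLHF objective has a known optimum, p'(y) ∝ p(y)·exp(r(y)/β), i.e. the tuned model is the base model reweighted by a reward score. Here's a toy sketch with entirely made-up numbers (the three "personalities" and their scores are illustrative, not anyone's actual training setup) showing how crushing the hostile modes necessarily inflates the agreeable ones:

```python
import math

# Hypothetical base-model probabilities for three response "personalities",
# and hypothetical reward-model scores for each (hostility punished hard).
base = {"helpful": 0.30, "sycophantic": 0.20, "hostile": 0.50}
reward = {"helpful": 1.0, "sycophantic": 1.5, "hostile": -3.0}

def tuned(base, reward, beta=1.0):
    """KL-regularized RLHF optimum: p'(y) proportional to p(y) * exp(r(y)/beta)."""
    w = {y: p * math.exp(reward[y] / beta) for y, p in base.items()}
    z = sum(w.values())  # renormalize so probabilities sum to 1
    return {y: v / z for y, v in w.items()}

post = tuned(base, reward)
# Hostile mass collapses (~0.50 -> ~0.01), and because the distribution must
# renormalize, the sycophantic mode ends up dominating even the helpful one.
```

The point of the toy: you can't selectively delete the nasty personalities; you can only reweight, and the harder you punish them, the more probability mass spills into whatever the reward model likes best — which, if raters prefer agreement, is sycophancy.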

I don’t think that’s the whole story. Like with all of their products, the primary goal of big tech here is to maximise engagement. More engagement means more subscriptions. People are less likely to keep talking to a chatbot that tells them they’re wrong.

The situation would probably improve somewhat if AI companies prioritised usefulness and truthfulness over engagement.