Father sues Google, claiming Gemini chatbot drove son into fatal delusion

https://lemmy.world/post/43862516

On the one hand, these LLM companies really shouldn’t be foisting their beta technology on unwary users. If a Google employee couldn’t tell someone to kill themselves and get away with it, why does the company get to absolve itself of responsibility when an LLM generates the same sentence?

On the other hand, people in the future will look at early LLM users (the people who used it in the first few years) as complete idiots. It’s like the first scientists to study radiation, who poked at radioactive samples without understanding the danger. Or like the doctors who used to perform surgery without washing their hands. Hopefully people will understand that it was a new technology, so we were dumb about it. But they’ll still think we were absolute idiots for feeding text into “spicy autocomplete” and then taking whatever it generated at face value.

People have a natural tendency to personify things they know are not people. I’ve seen studies involving (physical) robots where the robot says it’s sad or something, and people feel bad for it, even though it’s literally just a piece of plastic with some wires inside that looks vaguely humanoid. I don’t think this is going to change in the foreseeable future.

I don’t think it’s going to change either, so we need to adjust the way we do things to compensate.

We put seatbelts and airbags in cars because we know people are going to drive like idiots. Maybe we need similar safeguards around LLMs to save people from their own instincts.