Father sues Google, claiming Gemini chatbot drove son into fatal delusion
On the one hand, these LLM companies really shouldn’t be foisting their beta technology on unwary users. If a Google employee couldn’t tell someone to kill themselves and get away with it, why does the company get to absolve itself of responsibility when the same sentence is generated by an LLM?
On the other hand, people in the future will look back on the early LLM users (people who used it in the first few years) as complete idiots. It’s like the scientists who first studied radiation and just poked at radioactive things without understanding the danger. Or like doctors who used to do surgery without washing their hands. Hopefully they’ll understand that it was a new technology, so we were dumb about it. But they’ll still think people were absolute idiots for feeding text into “spicy autocomplete” and then taking whatever it generated at face value.
But the users don’t necessarily know they’re interacting with “spicy autocomplete,” because the companies aren’t presenting it as such. They’re promoting it as “your personal AI assistant,” and the main way most people interact with these systems is through a chat interface. What’s hidden is that, behind the scenes, the model is front-loaded with context and the user’s prompts are augmented so that the model autocompletes something resembling a transcript of a conversation. So from the user’s perspective, it just looks like they’re having a conversation with “something.”
Even for people who know in their heads how the sausage is made, the illusion might be strong enough to override that knowledge. I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving them as people, even though they can see the person with his arm up Kermit’s ass.
It’s a “known failure mode” of humans that we anthropomorphize things, spot patterns that aren’t actually there, assign agency to what is random, etc.
An LLM is a machine designed specifically to produce plausible text. It analyzes billions of books and web pages to figure out the structure of language. Then it is given a bunch of text and it figures out what is likely to come next. It’s obvious what humans will do when exposed to something like that.
Individual humans should be smart enough to say, “We humans are flawed; I’d better approach this cautiously.” But as a society, we should also protect individuals from themselves by making laws that prevent them from being preyed on.