Father sues Google, claiming Gemini chatbot drove son into fatal delusion
There is a lot to hate about AI, plenty of real dangers and valid criticism. But AI chatbots convincing people to kill themselves isn't a problem with chatbots; it's a problem with the user.
I get it, grieving families will look for anything and anyone to blame for a suicide except the victim, but ultimately, it is the victim who chose to kill themselves. If someone can be convinced to kill themselves by something as stupid as an AI chatbot, they really weren't that far from the edge to begin with.
People who don’t want their family getting suspicious, perhaps. The Target Incident comes to mind.
Of course, disabling these options doesn't mean Google stops knowing about mental or physical issues. I'm sure you know the best way to prevent that is to just avoid Google altogether. This is probably just Google's way of looking less creepy to the average person.