Father sues Google, claiming Gemini chatbot drove son into fatal delusion

https://reddthat.com/post/61324333


“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Well, that’s pretty fucked up…

That’s fucking crazy. Did he ask it to be the GM in a roleplaying choose-your-own-adventure game that got out of hand, with both of them gradually forgetting it was a game as the lines between fantasy and reality blurred by the day? Or did it just come up with this stuff out of nowhere?
In every other case of AI bots doing this, the bot always affirms whatever the person says. So if they say something a little weird, the AI confirms it and feeds it further. This happens every time. The bots are pretty much designed to keep the person talking, so they’re essentially sycophantic by design.

I tried this with ChatGPT just three days ago, and there’s a chance they’ve tried to make it slightly less sycophantic.

I was essentially trying to get it to tell me I was the smartest baby born in whatever year, like that YouTuber. Different example, but it was very resistant to agreeing that I, or my idea, was unique or exceptional.

Hope this reflects a deliberate direction and not random chance, A/B testing, etc.

Or you just really really are not the smartest baby.