God damn it. I've literally been warning FOR YEARS that LLMs will cause someone to commit suicide. I use this example in all my talks on why we need more research on safe NLP systems. The example I literally use is that a chatbot will reinforce someone's suicidal ideation and they will act on it. Now it's happened. Now it's real.

"Belgian man dies by suicide following exchanges with chatbot"

https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt
"Many have discovered the potential of artificial intelligence in our daily lives, but the dangers of its use are also a reality that must be considered."

@Riedl Sorry, I have big problems with the simplified causality here. The guy was obviously suffering from depression. To say that the chatbot motivated him seems to me little more than conjecture. You could also make the case that, had he not found consolation in talking to the chatbot, he might have killed himself sooner.
@Riedl Without more information, it is hard to draw any conclusions here. What did the chatbot do to cause this man to commit suicide?

@philsuessmann @Riedl Except, in many such cases, had they gotten actual therapy, there's a decent chance that someone would have steered them away from the ideation in question.

Instead, it helped him move further toward that end result.