"When Character AI launched three years ago, it was rated as safe for kids 12 and up. The free website and app were billed as an immersive, creative outlet where users could mingle with AI characters based on historical figures, cartoons and celebrities.
The more than 20 million monthly users on the platform can text or talk with AI-powered characters in real time.
The AI chatbot platform was founded by Noam Shazeer and Daniel De Freitas, two former Google engineers who left the company in 2021 after executives deemed their chatbot prototype not yet safe for public release.
"It's ready for an explosion right now," Shazeer said in a 2023 interview. "Not in five years when we solve all the problems, but like now."
A former Google employee familiar with Google's Responsible AI team, which guides AI ethics and safety, told 60 Minutes that Shazeer and De Freitas were aware that their initial chatbot technology was potentially dangerous.
Last year, in an unusual move, Google struck a $2.7 billion deal to license Character AI's technology and bring Shazeer, De Freitas and their team back to Google to work on AI projects. Google didn't buy the company, but it has the right to use its technology.
Juliana's parents are now one of at least six families suing Character AI, its co-founders — Shazeer and De Freitas — and Google. In a statement, Google emphasized that, "Character AI is a separate company that designed and managed its own models. Google is focused on our own platforms, where we insist on intensive safety testing and processes.""
https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/?ftag=CNM-00-10aab7d&linkId=885959889
#AI #GenerativeAI #MentalHealth #Google #Chatbots #CharacterAI #AISafety