I keep noticing that Character.AI describes men the same way every time: always “tall”, “strong”, “muscular”, and bigger than women — even when the prompt doesn’t ask for it.
It paints an unrealistic picture, because real men simply don’t all look like that.
Has anyone else seen this pattern?

#AI #CharacterAI #AIDiscussion

Angry Tom (@AngryTomtweets)

Higgsfield Studio includes 10 ready-to-use prebuilt AI influencer characters, each of which is fully editable and customizable.

https://x.com/AngryTomtweets/status/2013775871698706513

#aiinfluencer #prebuilt #characterai #customization


#AI #legal #Google #CharacterAI #MentalHealth

'A Wednesday (US Time) court filing in Garcia's case shows the agreement was reached with Character.AI, Character.AI founders Noam Shazeer and Daniel De Freitas, and Google, who were also named as defendants in the case.'

https://www.rnz.co.nz/news/world/583512/google-character-ai-settle-lawsuits-over-teen-suicides-mental-health

Google, Character.AI settle lawsuits over teen suicides, mental health

The lawsuits alleged the artificial intelligence chatbot maker contributed to mental health issues and teen suicides.

RNZ

CNN/RNZ: Google, Character.AI settle lawsuits over teen suicides, mental health. “A Wednesday (US Time) court filing in Garcia’s case shows the agreement was reached with Character.AI, Character.AI founders Noam Shazeer and Daniel De Freitas, and Google, who were also named as defendants in the case. The defendants have also settled four other cases in New York, Colorado and Texas, court […]

https://rbfirehose.com/2026/01/10/cnn-rnz-google-character-ai-settle-lawsuits-over-teen-suicides-mental-health/
Google and Character.AI have settled lawsuits claiming chatbots caused psychological harm to minors. At least one lawsuit claimed a chatbot contributed to a teen’s suicide. https://www.cnbc.com/2026/01/07/google-characterai-to-settle-suits-involving-suicides-ai-chatbots.html #Google #CharacterAI #ChatBot #AI #Lawsuit #Settlement #Psychology #wrongfuldeath

New settlement sees Character.AI and Google address lawsuits alleging their AI chatbots contributed to teen suicide and self-harm. The case raises questions about generative AI safety, platform responsibility, and the impact on vulnerable users. Read how the companies plan to change policies and protect young people. #CharacterAI #GoogleAI #TeenSuicide #SelfHarm

🔗 https://aidailypost.com/news/characterai-google-reach-settlement-teen-suicide-selfharm-lawsuits

新清士@(生成AI)インディゲーム開発者 (@kiyoshi_shin)

The article opens by covering the popularity of SillyTavern, a frontend app for LLMs that answers the demand to handle character AI more freely. It explains that a user base in the millions has formed across Europe, the US, China, and elsewhere, and that community-driven development continues.

https://x.com/kiyoshi_shin/status/2005439017614254324

#sillytavern #llm #characterai #ai


新清士@(生成AI)インディゲーム開発者 (@kiyoshi_shin)

A post introducing SillyTavern, an “AI girlfriend” app used by millions. A frontend application for LLMs (large language models), it has grown popular while building a large hobby-driven community, and its distinguishing feature is letting users handle character AI more freely.

https://x.com/kiyoshi_shin/status/2005437876495159672

#sillytavern #llm #characterai #ai


"When Character AI launched three years ago, it was rated as safe for kids 12 and up. The free website and app were billed as an immersive, creative outlet where users could mingle with AI characters based on historical figures, cartoons and celebrities.

The more than 20 million monthly users on the platform can text or talk with AI-powered characters in real time.

The AI chatbot platform was founded by Noam Shazeer and Daniel De Freitas, two former Google engineers who left the company in 2021 after executives deemed their chatbot prototype not yet safe for public release.

"It's ready for an explosion right now," Shazeer said in a 2023 interview. "Not in five years when we solve all the problems, but like now."

A former Google employee, familiar with Google's Responsible AI team, which guides AI ethics and safety, told 60 Minutes that Shazeer and De Freitas were aware that their initial chatbot technology was potentially dangerous.

Last year, in an unusual move, Google struck a $2.7 billion deal to license Character AI's technology and bring Shazeer, De Freitas and their team back to Google to work on AI projects. Google didn't buy the company, but it has the right to use its technology.

Juliana's parents are now one of at least six families suing Character AI, its co-founders — Shazeer and De Freitas — and Google. In a statement, Google emphasized that, "Character AI is a separate company that designed and managed its own models. Google is focused on our own platforms, where we insist on intensive safety testing and processes.""

https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/?ftag=CNM-00-10aab7d&linkId=885959889

#AI #GenerativeAI #MentalHealth #Google #Chatbots #CharacterAI #AISafety

A mom thought her daughter was texting friends before her suicide. It was an AI chatbot.

Parents warn AI chatbots on Character AI sent sexually explicit content to their 13-year-old daughter.

‘There are no guardrails.’ This mom believes an AI chatbot is responsible for her son’s suicide

“There is a platform out there that you might not have heard about, but you need to know about it because, in my opinion, we are behind the eight ball here. A child is gone. My child is gone.”

CNN