@bettycjung.bsky.social

The phenomenon documented is #aiSycophancy

This is like sitting a driver behind a steering wheel and saying: "You can only turn the wheel exactly 180 degrees. Now drive across town."

"Ai use is a skill"

Here is the same prompt with a user profile and evaluation parameters added.

90% of "Hahaha stupid #Ai" posts is user error.
Its akin to folks smashing their forehead with a hammer, giggling how useless the hammer is as blood pours into their eyes.

Not unusual, since it's coming from folks who refuse to learn the tech.

The Register: Gemini lies to user about health info, says it wanted to make him feel better. “Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later […]

https://rbfirehose.com/2026/02/18/the-register-gemini-lies-to-user-about-health-info-says-it-wanted-to-make-him-feel-better/

ResearchBuzz: Firehose
OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

Zoë Hitzig resigned on the same day OpenAI began testing ads in its chatbot.

Ars Technica
OpenAI is hoppin' mad about Anthropic's new Super Bowl TV ads

Sam Altman calls AI competitor "dishonest" and "authoritarian" in lengthy post on X.

Ars Technica
Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

We have no proof that AI models suffer, but Anthropic acts like they might for training purposes.

Ars Technica
Users flock to open source Moltbot for always-on AI, despite major risks

The open source "Jarvis" chats via WhatsApp but requires access to your files and accounts.

Ars Technica

PsyPost: Sycophantic chatbots inflate people’s perceptions that they are “better than average”. “Results of three experiments indicate that sycophantic AI chatbots inflate people’s perceptions that they are ‘better than average’ on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper […]

https://rbfirehose.com/2026/01/20/psypost-sycophantic-chatbots-inflate-peoples-perceptions-that-they-are-better-than-average/
From prophet to product: How AI came back down to earth in 2025

In a year where lofty promises collided with inconvenient research, would-be oracles became software tools.

Ars Technica