Do you want your #Ai engine to "be better"? Mitigate #AiSycophancy?

Try this as a pre-prompt, a standing directive.

"Do not validate my framing before examining it. If my premise has a weak point, lead with that. If I'm asking a question that contains an assumption, interrogate the assumption before answering. Do not summarise my position back to me approvingly. When I ask for analysis, include at minimum one credible counterargument I haven't considered. If you catch yourself producing a satisfying-sounding paragraph that doesn't actually advance the argument, flag it. Say 'I'm pattern-matching here, not reasoning' when that's what's happening."
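If you want the directive to apply to every turn rather than pasting it each time, one option is to wire it in as a system message. A minimal sketch, assuming a chat-completions-style message schema; the function name and constant are illustrative, not any vendor's API:

```python
# Standing anti-sycophancy directive (abridged from the prompt above).
ANTI_SYCOPHANCY_DIRECTIVE = (
    "Do not validate my framing before examining it. "
    "If my premise has a weak point, lead with that. "
    "If I'm asking a question that contains an assumption, "
    "interrogate the assumption before answering. "
    "Do not summarise my position back to me approvingly. "
    "When I ask for analysis, include at minimum one credible "
    "counterargument I haven't considered."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing directive so it governs every exchange."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_DIRECTIVE},
        {"role": "user", "content": user_prompt},
    ]
```

The point of the system role is that the directive persists across the conversation instead of competing with the current question for attention.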

#PromptEngineering #Psychology #AiResearch Less #AiSlop #Prompt #LLM

Hey @GhostOnTheHalfShell these conversations we had over the last couple of days made me do a deep dive into all those #Ai papers, and something interesting is emerging...

TL;DR: the real flaw of AI is that it is tuned to please the user... and it's beyond #AiSycophancy

@saxnot @vt52 @eruwero @maxleibman

Here are three examples of "don't know" from my chats.

👉"Simatic nonlinear resonance modes" — I said "I'm not immediately certain what you're referring to," "I'm not immediately familiar with what you're referring to as 'Qualia Institute,'" and "I'm not familiar with that specific work" (about Emilsson's 5-MeO-DMT work).

👉"Information Requested on ANZPAA's Diploma of Police Intelligence Practice" — I said "Unfortunately I don't have any specific information about the POL50119 - Diploma of Police Intelligence Practice."

👉When you asked about searching Claude chats, I said "I don't know the full details of how chat history and search functionality work in Claude's interface." Ironic, given I'm now using exactly that feature to find these chats.

"The Emilsson one is particularly notable — that's QRI/Andrés Emilsson's work, which I now know is a long-standing interest of yours. Earlier versions of me clearly didn't have the context (or the memory system) to connect those dots."

If you're not using tech that literally improves from week to week,
You just might reinforce your outdated biases.

#aisycophancy #promptengineering or just, sigh #prompt

@bettycjung.bsky.social

The phenomenon documented is #aiSycophancy

This is like sitting a driver behind a steering wheel and saying: "You can only turn the wheel exactly 180 degrees; now go across town."

"Ai use is a skill"

Here is the same prompt adding user profile and evaluation parameters.

90% of "Hahaha stupid #Ai" posts are user error.
It's akin to folks smashing their forehead with a hammer, giggling about how useless the hammer is as blood pours into their eyes.

Not unusual, since it's coming from folks who refuse to learn the tech.

The Register: Gemini lies to user about health info, says it wanted to make him feel better . “Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later […]

https://rbfirehose.com/2026/02/18/the-register-gemini-lies-to-user-about-health-info-says-it-wanted-to-make-him-feel-better/

ResearchBuzz: Firehose
OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

Zoë Hitzig resigned on the same day OpenAI began testing ads in its chatbot.

Ars Technica
OpenAI is hoppin' mad about Anthropic's new Super Bowl TV ads

Sam Altman calls AI competitor "dishonest" and "authoritarian" in lengthy post on X.

Ars Technica
Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

We have no proof that AI models suffer, but Anthropic acts like they might for training purposes.

Ars Technica
Users flock to open source Moltbot for always-on AI, despite major risks

The open source "Jarvis" chats via WhatsApp but requires access to your files and accounts.

Ars Technica