Whilst LLMs are marketed as intelligent, knowledgeable, helpful, and insightful, they're also commercial products that need to retain customers.

It's only logical, then, that LLMs are extremely conflict-averse and agreeable, even when we're wrong. Is that truly helpful and insightful? It reminds me of autocratic leaders whose subordinates become agreeable pawns.

Maybe we shouldn't act like autocratic leaders, and should keep pesky humans in the loop.

@timsev I always correct the agent if I pick up that it's too biased towards me or too agreeable during a session. It will then adjust itself. I'm not 100% sure the bias can be completely removed, but at least I get somewhat more honest feedback and a bit more challenge to my thinking.
@crisverstraeten I've tried instructions in my prompt like "roast" and "grill", but in my experience, they tend to mostly change the tone of voice rather than truly scrutinise an idea. I'll admit I haven't tried this with the latest flagship models due to budget constraints 😄

@timsev that's been my experience with prompts and prompt files too. So I've been very verbose about it and split tone and bias into their own sections.

I think this works okay for new sessions (I still need to occasionally check in-session, especially with Claude, but I think that's just the Claude experience, so it doesn't bother me). It's much harder to correct in a long-running session (one spanning days or weeks).
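For illustration, splitting tone and bias into their own sections in a prompt file might look something like this (the wording here is a hypothetical sketch, not my actual file):

```markdown
## Tone
- Be direct and concise; skip praise, flattery, and filler.

## Bias
- Do not default to agreeing with me. If my claim is weak, say so and explain why.
- Before endorsing an idea, state at least one concrete objection or risk.
- If you changed your answer because I pushed back, say whether the evidence
  actually changed or you are just deferring to me.
```

In my experience the separation matters: a single "be critical" line gets absorbed into tone, while an explicit bias section gives the model something concrete to check itself against.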