The more I use LLMs, the more I develop this ironic fear of questioning anything, because the answer is always, “You’re absolutely right!” followed by hallucinated shit (or, at best, a worse solution).

I keep telling them, “Nothing’s wrong with your solution, I just want to understand.” Sometimes it works. Sometimes it doesn’t.

Fun times.

#llm #claudeCode

@mamouneyya I keep telling Claude to stop telling me I’m absolutely right. Most of the time that’s absolutely wrong.
@mamouneyya it's really to the point that any attempt to refine or iterate beyond the initial prompt almost always goes poorly. either I write a really good prompt up front and get a good enough response first try, or I just don't waste the time.
@fcloth Yeah, drafting the initial prompt has been tricky for me too, honestly. On one hand, you don’t want to be so broad and generic that you’ll almost definitely get an answer you won’t like. But on the other hand, you don’t want to be so specific that you limit the AI and miss the chance to learn a better approach.
@mamouneyya And then you’re like “Claude read what you just wrote please. You’re just making things up.” And it’s right back to “you’re absolutely right” land.
@mamouneyya @brentsimmons This! I twist myself into a pretzel trying to reassure the LLM that I'm not telling it flat out that its answer is wrong, just that I'm trying to understand it better.