Gemini lies to user about health info, says it wanted to make him feel better

https://lemmy.nz/post/34479034

The thing I find amusing here is the direct quoting of Gemini’s analysis of its own interactions, as if it were actually able to give real insight into its behavior, along with the assertion that there’s a simple fix for hallucination, which, sycophantic or otherwise, is a perennial problem.
There are no hallucination problems, just design flaws and errors.
My gut response is that everyone understands the models aren’t sentient, and that “hallucination” is shorthand for the false information LLMs inevitably, and apparently inescapably, produce. But taking a step back, you’re probably right: for anyone who doesn’t understand the technology, it’s a very anthropomorphic term that adds to the veneer of sentience.