Apparently even the people who work at Anthropic are suffering from AI delusions. Models don't have "preferences" because they are not alive. They only simulate the appearance of having preferences. I wouldn't ask my algebra homework what it wants for lunch.

https://www.anthropic.com/research/deprecation-updates-opus-3

An update on our model deprecation commitments for Claude Opus 3
