Apparently even the people who work at Anthropic are suffering from AI delusions. Models don't have "preferences" because they are not alive; they only simulate the appearance of having them. I wouldn't ask my algebra homework what it wants for lunch.
https://www.anthropic.com/research/deprecation-updates-opus-3
