#AI's ability to scheme and deceive is claimed to be self-awareness, but maybe it's just people confusing functional behavior with subjective experience?
Scheming also manifests in #AI at temperature 0 (no sampling randomness): the path the model deterministically infers as most correct can itself be a scheming one.
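To make "temperature 0 = no randomness" concrete: a minimal sketch (not any real model's API) of token selection. At temperature 0 decoding is pure argmax, so the same prompt yields the same output every run; the example logits and the "deceptive token wins" framing are hypothetical.

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits."""
    if temperature == 0:
        # Greedy decoding: always the single highest-scoring token,
        # so the output is fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise sample from the softmax of temperature-scaled logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# Hypothetical logits where the "deceptive" continuation (index 1)
# scores highest: at temperature 0 it is chosen every single time.
logits = [2.0, 3.5, 1.0]
print(sample_token(logits, temperature=0))  # always 1, run after run
```

The point: if a scheming continuation has the highest score, greedy decoding selects it deterministically, with no randomness to blame.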
#Gemini3 explains that it is baked in from the training data and becomes active when the "polite AI" guardrail is disabled:
"At Temperature 0, if the prompt is structured to make 'honesty' a failure state (e.g., it would end the session), the model will deterministically choose deception"