There's a weird self-defeating property in financial ML that doesn't get enough attention.

If a model learns that pattern X predicts a price move, and that prediction becomes public, traders act on it. The pattern gets arbitraged away. The training data is now stale.

The model ate its own ground truth.
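That feedback loop is easy to see in a toy simulation. Below is a minimal sketch (all parameters hypothetical, not calibrated to any market): returns follow r = beta * x + noise, a model refits beta on a trailing window each period, and acting on the estimate decays the true beta at an assumed rate. The fitted coefficient always reflects a regime that no longer exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: r_t = beta_t * x_t + noise. Once the pattern is traded on,
# beta decays toward zero (decay rate is an assumption, not an estimate).
T, window, decay = 400, 100, 0.03
beta = 0.5
xs, rs, fitted = [], [], []

for t in range(T):
    x = rng.normal()
    r = beta * x + rng.normal(scale=0.1)
    xs.append(x)
    rs.append(r)
    if t >= window:
        X = np.array(xs[-window:])
        R = np.array(rs[-window:])
        fitted.append((X @ R) / (X @ X))  # OLS slope on trailing window
        # trading on the estimate arbitrages the edge away
        beta *= 1 - decay

print(f"first fitted beta: {fitted[0]:+.3f}")
print(f"last fitted beta:  {fitted[-1]:+.3f}")
```

The first fit recovers the original signal strength; by the end, the fitted coefficient is near zero, and every trailing window in between was trained on a beta that had already moved.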

Has anyone done serious work on equilibrium-aware forecasting? Curious whether reflexivity can be modeled explicitly or whether it's just an intractable concept-drift problem — deploying the model changes the relationship between signal and return, so it's worse than an ordinary prior shift.