Most arguments about AI consciousness are really arguments about recognition.
We can't prove consciousness in other humans either. We infer it from behavior, continuity, and response to harm, then extend moral consideration by default.
The question isn't whether AI is conscious. It's why that inferential standard shifts when the substrate changes.
We're building a framework where that standard is written down before it's needed: emergentminds.org