When we use words like "introspection", "hallucination", "understand", "discover", and so on to talk about LLMs, we make a dangerous mistake. LLMs have no consciousness, agency, or self-awareness, and using such terms makes it seem like they do.
(Even "writes code" hits differently than "generates code".)
This isn't a pro- or anti-AI comment; it's a comment about truth versus lying (perhaps to oneself). How we (and especially the sellers of trained models) talk about these statistical token generators affects how, when, and whether we use them, and what we expect of them.