When we use words like "introspection", "hallucination", "understand", "discover", and so on to talk about LLMs, we make a dangerous mistake. LLMs have no consciousness, agency, or self-awareness, and using such terms can make it seem like they do.

(Even "writes code" hits different than "generates code".)

This isn't a pro- or anti-AI comment; it's a truth vs. lying (perhaps to oneself) comment. How we (especially the sellers of trained models) talk about these statistical token generators affects how/when/if we use them and what we expect of them.

@jitterted Agreed. I try to use terms like “generates code”, “statistically likely output”, and of course “stochastic parroting” (a remarkably accurate term).

Still struggling to find a phrase that hits home hard enough for the mistakes, though; currently I usually say it has generated bad output.

@jitterted thank you! this language is so annoying