People refuse to acknowledge how an LLM actually works, and insist on assigning meaning and understanding to its output. The model does one thing: predict the next token from statistical patterns in its training data. There is no meaning, and the system understands neither the user's question nor its own "response." At least, not in the sense people expect from the conversational format of the interaction.
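The point can be made concrete with a deliberately tiny sketch: a bigram frequency table standing in for a transformer's learned distribution. Everything here (the corpus, the `generate` function) is illustrative, not any real LLM's internals, but the shape of the process is the same: emit whichever continuation the statistics favor, with no representation of meaning consulted at any step.

```python
from collections import Counter, defaultdict

# Hypothetical toy "training data" -- stands in for the web-scale corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, steps: int) -> str:
    """Greedily append the statistically most frequent next token.

    No semantics are involved: the loop never asks what the words
    mean, only which token co-occurred most often in the counts.
    """
    tokens = prompt.split()
    for _ in range(steps):
        options = bigrams.get(tokens[-1])
        if not options:
            break  # no observed continuation; stop generating
        tokens.append(options.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the", 3))
```

The output reads like a fluent sentence fragment, yet the program contains nothing one could call understanding. A real LLM replaces the bigram table with a vastly larger learned distribution, but the generation loop is the same autoregressive step.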