Large language models like #chatgpt are a mental trap: they exploit a cognitive bias that equates competent use of language with intelligence. If a person writes with good grammar, we regard them as intelligent. When a bit of code can do the same, we assume there's an intelligence behind it, and that leads us to misjudge its capabilities.
LLMs are models of language, not models of fact or truth. When they produce truth, it's incidental. They're not search engines or oracles.