Large language models like #chatgpt represent a mental trap, exploiting a cognitive bias we have around competent use of language. If a person writes with good grammar, we regard them as intelligent. When a bit of code can do the same, we're led to believe there's an intelligence behind it, and that causes us to misjudge its capabilities.

LLMs are models of language, not models of fact or truth. That they produce truth sometimes is an accident. They're not search engines or oracles.

@ElectricDoorknob I don’t know if seeing truth from an LLM is an accident. Is that not also a result of training?

@dangeratio sure, but to the same degree that non-truths are also a result of training.

My point is that lots of people are expecting things from these tools that they can't provide by design. LLMs produce (to borrow a phrase) aesthetically plausible output. Based on their training and the input, they put words together in a way that sounds good, but their adjacency to fact is a consequence of the statistical context of the training data rather than any apprehension of facts.
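The point about statistical adjacency can be made concrete with a toy example. This is a minimal sketch (not how real LLMs work internally, just an illustration of the principle) using a tiny Markov chain trained on a made-up corpus that contains both a true and a false statement about France's capital. The model happily generates either, because it only tracks which words follow which:

```python
import random

# Hypothetical mini-corpus for illustration: one true and one false claim
# about France, plus a true claim about Spain.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of spain is madrid ."
).split()

# Build a next-word table from adjacent word pairs (a bigram model).
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(start, n=6, seed=0):
    """Emit up to n words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        choices = table.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the"))
```

To this model, "paris", "lyon", and "madrid" are all equally plausible continuations of "is": statistically well-formed, but only sometimes true, and the model has no way to tell the difference.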

@ElectricDoorknob I think that sounds a lot like apprehension of facts. Most people repeat facts they’ve heard, not things they know from first principles. Seems like the same thing, yeah?
@dangeratio it's true, lots of people do present like weird broken tape recorders 🤣