@WAHa_06x36 I think it's quite an important distinction? It's fundamental to how you should interpret text generated by a language model.
If you paint two dots and a downward-facing semicircle on a rock, people immediately interpret the rock as being sad - :(
But we all know rocks can't be sad.
Similarly, language models are a really complicated pattern painted on a rock. The text they generate isn't true or false statements; it's randomly generated truthy strings. Many of the strings a model generates will be interpreted as true statements, because lots of truthy strings are representations of true statements.
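To make "randomly generated truthy" concrete, here's a toy sketch of the sampling loop at the heart of any language model. The bigram table and probabilities are made up for illustration, not taken from any real model; the point is that each word is chosen by probability alone, and no truth check appears anywhere in the process:

```python
import random

# Toy "language model": next-word probabilities conditioned on the
# previous word. All numbers are hypothetical, chosen for illustration.
MODEL = {
    "the": {"sky": 0.5, "moon": 0.5},
    "sky": {"is": 1.0},
    "moon": {"is": 1.0},
    "is": {"blue": 0.6, "green": 0.4},  # "green" is truthy-shaped but false
}

def generate(start, steps=3):
    words = [start]
    for _ in range(steps):
        dist = MODEL.get(words[-1])
        if dist is None:
            break  # no known continuation for this word
        # Sample the next word by probability; nothing in this loop
        # checks whether the resulting sentence is true.
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the sky is blue" or "the moon is green"
```

"the sky is blue" and "the sky is green" come out of the exact same process; one just happens to correspond to a true statement.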
But saying "GPT-3 lies" suggests that you could make a language model that doesn't lie, or one that isn't cavalier with the truth, and that's the wrong way to think about them.
Everyone knows rocks can't be sad; not everyone knows that language models can't tell the truth, but it's the same human cognitive failing that produces both misreadings.