The only good thing about LLMs that I can see is they're getting people to question this assumption.
Human communications are very often (in whole or in part) malign, misleading, deceitful. It's almost impossible to keep this in mind if you're not a liar yourself, but we have to, and not only in the case of LLMs.
I'm a journalist, and I would say that "nonfiction" is about the most suspect category of communication there is, aside from "news."
No I wouldn't, it's not so simple. But readers should *always* take context, intentions, and "cui bono?" into account, most of all for texts purporting to be "reporting facts."
@jerry There will come a moment in the not-too-distant future when LLMs start eating their own tail, being trained on public content previously generated by other models.
When this happens, the challenge of the day will be to avoid cyclical reinforcement and the deep embedding of misinformation.
In a sense, we could be experiencing the 'Golden Age' of AI-generated content, where most of what the models were trained on was created by real humans.
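That feedback loop can be illustrated with a toy simulation. This is only a sketch, not a claim about how any real model is trained: it assumes the "world" is a Gaussian, that each generation of model simply fits a mean and standard deviation to its training data, and that a `temperature` below 1 stands in for the mode-seeking bias of sampling. Under those assumptions, training each model on the previous model's output steadily collapses the variance:

```python
import random
import statistics

def fit(data):
    """Toy 'model training': estimate mean and spread of the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mu, sigma, n, temperature=0.9):
    """Toy 'model sampling': temperature < 1 models mode-seeking bias."""
    return [random.gauss(mu, sigma * temperature) for _ in range(n)]

random.seed(0)
# Generation zero: "human-created" data with full diversity.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Each new model is trained only on the previous model's output.
for generation in range(20):
    mu, sigma = fit(data)
    data = generate(mu, sigma, 10_000)

mu, sigma = fit(data)
print(f"spread after 20 generations: {sigma:.3f}")  # far below the original 1.0
```

The details are invented for illustration, but the qualitative point matches the thread: once models consume mostly model-made content, whatever was slightly over-represented gets reinforced and the tails of the distribution quietly disappear.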
@jerry I might be unique or alone in not paying this much mind. So long as the writing is relevant, concise, and teaches me something, I definitely do not care what combination of man or machine produced it.
Personally, I've found that AI is a useful tool for helping me make something I've written more clear or concise, while not absolving me of my responsibility to stand by its content, accuracy, and truthfulness.
@jerry as “copilot” creeps further into MS products I see more people doing the “type by tabbing” as the autocomplete offers two, three, four words all on its own.
Time for an autocomplete essay contest… Oh wait, that's just ChatGPT, isn't it…