I personally consider "I asked ChatGPT to generate a response to you" not witty but a form of insult. Please don't do that. If that is how you want to talk to people, at least don't tell me. It's offensive.

@tante Why? Because it suggests you are not smart enough to do something? Or some other reason?

I guess it is transparent. It provides help (probably the best that the given person can give), the source, and instructions on how to get it yourself in the future. If you don't like it, you can just ignore the answer. Or the person.

@rozie If I wanted the output of ChatGPT, I would've asked ChatGPT myself. When I ask a group of humans specifically because I want an answer from an experienced human user and someone replies with LLM output, that adds absolutely nothing of value to the conversation. It furthermore implies that the person thinks I don't know how to use ChatGPT, which is insulting. It just wastes my time.
@davidculley So does giving an answer based on an RFC, encyclopedia, book, manual, or Wikipedia imply that the person asking a question does not know how to use them, and is that insulting too?
If not, what makes the difference?

@rozie @davidculley With an encyclopedia, RFC or book you know the source of the information and know that it has been generated by someone who actually understands things. You can also then judge the likely accuracy based on who that person was — for example, if it’s a book by Dinesh D'Souza you can safely assume it’s garbage. Whereas with ChatGPT the output might be a big pile of D’Souza and you’ll have no way to know.

If people wanted to ask ChatGPT and then check the output against trustworthy sources and provide citations, that would be fine, but they never do.

@mathew @davidculley Well, do you know all the authors of the books? If not, how do you know whether they are trustworthy or not? People tend to be biased, and they sometimes misunderstand things... If the answer comes from a website rather than a book, it's even harder.

And Wikipedia? A mix of authors, hard to check (does anybody even check?), often biased, with websites as sources.

And finally, an answer from a human, without a source. What is its value? Any way to check it? Likely not.

@rozie @davidculley I specifically gave an example of how knowing the author can let you evaluate the accuracy of the information even if you don’t know them personally, based on author reputation.

For sources like Wikipedia, you’re relying on multiple knowledgeable people reviewing the information, and if you really want you can check the resulting discussion and edit history.

For bot text, you know literally nothing. You don’t know who originally wrote it, where it came from, if anyone with knowledge has ever reviewed it, when it was written and whether it might be out of date…

@mathew @davidculley Author reputation changes over time. It's probably less important in the case of science, because that's often the best knowledge at a given moment in time, but in the case of history it's often influenced by current political views.

Regarding Wikipedia and "multiple people are reviewing" - we know that this process doesn't work in practice. There are many examples from open source where grave security bugs sat unspotted for years.

Checking sources? You claim that...

@mathew @davidculley ...they "never check" (I don't agree with that; I do check, and I know many people who check). Checking the source in the case of an LLM is just clicking the link. Way faster and easier than checking the edit history on Wikipedia. OTOH, I don't know anyone who checks Wikipedia authors and edit history.
@rozie @davidculley The LLMs that provide citations are basically acting as incredibly inefficient search engines, and the citation may not even agree with their “summary”, since they don’t actually summarize based on meaning. And the times I’ve seen people post bot output, there haven’t been citations.