One could spend all day, every day, telling folks who use "mental activity words" to describe LLM output that they are wrong.

It does not "know", it does not "think", it does not "guess", it does not "figure out", it does not "reason", it does not "decide", it does not "feel", it does not "opine", it does not "believe", it does not "see", it does not "lie", it does not do *anything* you'd use a mental activity word for.

And every time they do it anyway? They make the world just a little worse.

@GeePawHill
Well said. Exactly correct.

#GenAI answers by pattern-matching on frequency of occurrence in its training data. If most people get something wrong, the answer is wrong. Nuance and validity are not criteria unless they happen to be what's most frequent. #LLMs trained on the Internet (you will never find a more wretched hive of DATA scum and villainy) spout only the opinion and spin they find there, for what is found more frequently than that?
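The frequency point can be sketched with a toy model. This is a deliberately crude caricature (hypothetical data, nothing like a real transformer): it just picks whichever answer appears most often, with no notion of truth.

```python
from collections import Counter

# Hypothetical toy "training data": five answers to the same question,
# three of which repeat a popular misconception.
corpus = [
    "the Great Wall is visible from space",
    "the Great Wall is visible from space",
    "the Great Wall is visible from space",
    "the Great Wall is not visible from space",
    "the Great Wall is not visible from space",
]

def most_frequent_answer(answers):
    """Return whichever answer occurs most often -- frequency, not validity."""
    counts = Counter(answers)
    return counts.most_common(1)[0][0]

print(most_frequent_answer(corpus))
# The majority claim wins purely by frequency of occurrence,
# even though it is the wrong one.
```

If most of the data is wrong, the most frequent answer is wrong, and nothing in the selection step can tell the difference.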

#AI #LLM #ChatGPT