One could spend all day every day telling folks who use "mental activity words" to describe LLM output that they are wrong.

It does not "know", it does not "think", it does not "guess", it does not "figure out", it does not "reason", it does not "decide", it does not "feel", it does not "opine", it does not "believe", it does not "see", it does not "lie", it does not do *anything* you'd use a mental activity word for.

And every time they do it anyway? They make the world just a little worse.

@GeePawHill we humans cannot help but anthropomorphise. It's what we do, central to our drive for narrative.
@tomasekeli @GeePawHill
Yes. In this case, though, anthropomorphizing contributes to a false narrative of what LLMs are capable of, and people don't see that.