My students are often surprised to learn that LLMs aren’t answering their questions. Rather, an LLM answers the question “what would a reply to this look like?” It’s one of the first things I explain in the “Should I use LLMs?” portion of my syllabus.
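The claim above can be sketched with a toy next-token model. This is not how any real LLM is implemented (real models use learned neural networks over enormous corpora, not bigram counts over two questions); the corpus and every name here are invented purely to illustrate the point that the model extends text with whatever tokens look most reply-like, rather than computing an answer:

```python
# Toy next-token model over a tiny invented corpus. A real LLM works on the
# same principle at vastly larger scale: given the text so far, it emits the
# token most likely to come next -- "what would a reply to this look like?"
corpus = (
    "what is 2 plus 2 ? the answer is 4 . "
    "what is 3 plus 3 ? the answer is 6 ."
).split()

# Count observed continuations for each two-token context.
counts = {}
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts.setdefault((a, b), {}).setdefault(c, 0)
    counts[(a, b)][c] += 1

def complete(prompt, max_tokens=10):
    out = prompt.split()
    for _ in range(max_tokens):
        options = counts.get(tuple(out[-2:]), {})
        if not options:
            break
        # Greedily take the most frequent continuation
        # (ties broken by insertion order, so "4" wins over "6").
        out.append(max(options, key=options.get))
        if out[-1] == ".":
            break
    return " ".join(out)

# The model completes the *shape* of a reply; it has no notion of arithmetic,
# so an unseen sum still gets an answer copied from a seen reply.
print(complete("what is 2 plus 3 ?"))
# prints: what is 2 plus 3 ? the answer is 4 .
```

The point of the demo: the output is a perfectly reply-shaped sentence, and it is wrong, because the process only ever asks "what token comes next?", never "what is 2 plus 3?".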
@mcnees you’re up against it: you’re trying to develop brains, and AI doesn’t wire those pathways for you. And yet, it has to be said that humans apply exactly the same approach to answering questions. It’s probably better to tell them school is a brain gym…

@rpin42 Human reasoning might sometimes resemble this process, but it is not at all accurate to say we apply “exactly the same approach,” because we plainly do not. We can remember facts. We can detect inconsistencies. We can detect and ignore superfluous information. An LLM can do none of these things.

Sometimes the output looks as if the model does these things. But the fact that the output looks like the output of thinking doesn’t mean it was the result of thinking, or even the result of a process analogous to thinking. We think. It doesn’t.

@paco @mcnees well, I am a bit of an outlier in the way my brain works…