My students are often surprised to learn that LLMs aren’t answering their questions. Rather, an LLM answers the question “what would a reply to this look like?” It’s one of the first things I explain in the “Should I use LLMs?” portion of my syllabus.
@mcnees While I agree thats important to keep in mind, what is not clear is the degree to which people also answer questions that way. LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

@scottfweintraub @mcnees >> what is not clear is the degree to which people also answer questions that way.

Yes, it is. They don’t.

>> LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

No.

@MisuseCase @scottfweintraub @mcnees That's not very evidence-based.
Honestly, I'm pretty skeptical about LLMs myself, but I'm also no longer convinced we understand much about our own intelligence.

@scottfweintraub @mcnees LLMs assign “tokens” to words and work on a kind of map of which tokens are associated with each other, and in what sequence. But they don’t “know” what the words mean. To the model they’re not even words, just tokens.

Humans don’t work like this.
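(A toy sketch of the point above, nothing like a real LLM: words become opaque integer IDs, and the “model” is just a table of which token tends to follow which. The corpus and names here are made up for illustration.)

```python
from collections import defaultdict

# Words become opaque integer IDs ("tokens"); meaning never enters into it.
corpus = "the cat sat on the mat the cat ran".split()

vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}  # word -> token id
tokens = [vocab[w] for w in corpus]

# Count which token follows which (a bigram association map).
follows = defaultdict(lambda: defaultdict(int))
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

def predict_next(word):
    # Pick the most frequently seen successor token -- pure sequence
    # statistics over token IDs, no notion of what the words mean.
    tid = vocab[word]
    nxt = max(follows[tid], key=follows[tid].get)
    return next(w for w, i in vocab.items() if i == nxt)

print(predict_next("the"))  # "cat" -- it followed "the" twice, "mat" once
```

Real LLMs replace the count table with a learned neural network over subword tokens, but the output is still “which token is likely next,” not “what is true.”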

@MisuseCase @scottfweintraub @mcnees Current research suggests our brains do actually work a *little* like that. Essentially, they have an internal representation of a concept or a relationship between concepts, which is then turned into an external representation (speech, writing, gestures, etc.) to express that concept or relationship to others. Paraphasia and aphasia are believed to be misfires or disconnections in this concept-to-language mapping. This is also believed to be why aphasia affects only the ability to use language, but not intelligence.

Of course, our brains are far more complicated than just their language centers, and the language centers are definitely more efficient than LLMs (both in training and in use).