@scottfweintraub @mcnees >> what is not clear is the degree to which people also answer questions that way.
Yes, it is. They don’t.
>> LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?
No.
@scottfweintraub @mcnees LLMs break words into “tokens” and work on a kind of map of which tokens are associated with each other, and in what sequence. But they don’t “know” what the words mean. To the model they aren’t even words, just tokens.
Humans don’t work like this.
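To make the “map of tokens” idea concrete, here’s a drastically simplified sketch. This is a toy bigram counter, not a neural network, and the corpus, token IDs, and `next_token` helper are all made up for illustration — but it shows the basic shape of the claim: the model only ever sees numeric tokens and statistics about which tokens follow which, never meanings.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat slept".split()

# Step 1: assign each word an arbitrary token ID -- the model never
# sees the words themselves, only these numbers.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Step 2: build the "map" of which tokens follow which, and how often.
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

# Step 3: "predict" the next token by picking the most frequent successor.
def next_token(token_id):
    return follows[token_id].most_common(1)[0][0]

inv = {i: w for w, i in vocab.items()}
print(inv[next_token(vocab["the"])])  # -> "cat" (follows "the" twice here)
```

Real models replace the counting with learned probabilities over long contexts, but the point stands: nothing in this pipeline attaches a meaning to any token.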
@MisuseCase @scottfweintraub @mcnees Current research suggests our brains do actually work a *little* like that. Essentially, they have an internal representation of a concept or a relationship between concepts, which is then turned into an external representation (speech, writing, gestures, etc.) to express that concept or relationship to others. Paraphasia and aphasia are believed to be misfires or disconnections in this token-to-language mapping. This is also believed to be why aphasia affects only the ability to use language, but not intelligence.
Of course, our brains are far more complicated than just their language centers, and the language centers are definitely more efficient than LLMs (both in training and in use).