@scottfweintraub @mcnees >> what is not clear is the degree to which people also answer questions that way.
Yes, it is. They don’t.
>> LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?
No.
@scottfweintraub @mcnees LLMs assign “tokens” to words and operate on a kind of map of which tokens are associated with which others, and in what sequence. But they don’t “know” what the words mean. To the model they aren’t even words, just tokens.
Humans don’t work like this.
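(Not anyone's claim in the thread, just to make the "map of tokens" picture concrete: below is a minimal Python sketch of a bigram counter. It is nothing like a real transformer, and every name in it is made up for illustration, but it shows how a system can chain token IDs purely from co-occurrence statistics, with no access to what any word means.)

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat ate".split()

# 1. Assign tokens: each distinct word gets an arbitrary integer ID.
#    The ID carries no meaning; "cat" could just as well be 3012.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# 2. The "map": counts of which token follows which, in sequence.
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

# 3. "Answering": emit whichever token most often followed the last one.
def next_token(token_id: int) -> int:
    return follows[token_id].most_common(1)[0][0]

id_to_word = {i: w for w, i in vocab.items()}
print(id_to_word[next_token(vocab["cat"])])  # chosen by frequency, not meaning
```

A real LLM swaps the counting for a learned network over long contexts, but the input and output are still just token IDs following other token IDs.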
@MisuseCase @scottfweintraub @mcnees Current research suggests our brains do actually work a *little* like that. Essentially, they have an internal representation of a concept or a relationship between concepts, which is then turned into an external representation (speech, writing, gestures, etc.) to express that concept or relationship to others. Paraphasia and aphasia are believed to be misfires or disconnections in this token-to-language mapping. This is also believed to be why aphasia affects only the ability to use language, not intelligence.
Of course, our brains are far more complicated than just their language centers, and the language centers are definitely more efficient than LLMs (both in training and in use).
@MisuseCase @scottfweintraub @mcnees I think it depends on the context and the human. I know some people who will BS an answer so they don't look bad. Sometimes they're right. I joke that someone I know who regularly does that is an LLM.
Generally, no, I don't believe this is how humans think, even if it may mimic one mode we sometimes use. But I do believe there is a lot to learn about ourselves from how we are reflected in the machine.
I'm not sure that's entirely the case. I had a... chaotic childhood, and there was definitely a period when I was inclined, especially under stress, to give plausible, answer-shaped replies for which actual truth was irrelevant. Around that time I had also read a lot of joke books and could confidently land dirty jokes I didn't actually understand.
So I suspect the expectation-influenced, consistency-driven glibness of LLMs resembles part of how we answer, but that it only dominates under pathological conditions (compulsive lying, fabulism, some kinds of illness or brain damage).
I am really getting salty about this kind of comment.
EVERY TIME a discussion about LLMs gets even slightly philosophical, someone comes up with this "what if we're really like LLMs" with an implied naughty snigger.
No, LLMs do not build models of reality, the way basically every animal more complex than a sea slug manages to.