My students are often surprised to learn that LLMs aren’t answering their questions. Rather, an LLM answers the question “what would a reply to this look like?” It’s one of the first things I explain in the “Should I use LLMs?” portion of my syllabus.
@mcnees While I agree thats important to keep in mind, what is not clear is the degree to which people also answer questions that way. LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

@scottfweintraub @mcnees >> what is not clear is the degree to which people also answer questions that way.

Yes, it is. They don’t.

>> LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

No.

@MisuseCase @scottfweintraub @mcnees That's not very evidence-based.
Honestly, I'm pretty skeptical about LLMs myself, but I'm also no longer convinced we understand much about our own intelligence.

@scottfweintraub @mcnees LLMs assign “tokens” to words and work on a kind of map of which tokens are associated with each other, in what sequence. But they don’t “know” what the words mean. They’re not even words, just tokens.

Humans don’t work like this.
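The token-map idea above can be sketched as a toy model (purely illustrative, and mine rather than from the thread; real LLMs use neural networks, and the vocabulary and sentence here are made up). The point it shows: the model only ever sees integer IDs and their co-occurrence statistics, never the words themselves.

```python
# Toy sketch: tokens are just integer IDs, and the "knowledge" is only
# counts of which ID tends to follow which. No meanings anywhere.
from collections import defaultdict


def train_bigram(token_ids):
    """Count which token follows which in the training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(token_ids, token_ids[1:]):
        counts[a][b] += 1
    return counts


def predict_next(counts, token_id):
    """Return the most frequent successor ID, or None if unseen."""
    followers = counts.get(token_id)
    if not followers:
        return None
    return max(followers, key=followers.get)


# Hypothetical vocabulary; the model itself only sees the numbers.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3, "on": 4}
ids = [0, 1, 2, 4, 0, 3, 0, 1]  # "the cat sat on the mat the cat"
model = train_bigram(ids)
print(predict_next(model, 0))  # most common follower of ID 0 ("the")
```

Swap the word strings for any other labels and the model behaves identically, which is the thread's point about tokens not being words.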

@MisuseCase @scottfweintraub @mcnees We like to think we don't, and most likely the central voice that is “me” does not, but what about all the helpers?

@MisuseCase @scottfweintraub @mcnees Current research suggests our brains do actually work a *little* like that. Essentially, they have an internal representation of a concept or a relationship between concepts, which is then turned into an external representation (speech, writing, gestures, etc.) to express that concept or relationship to others. Paraphasia and aphasia are believed to be misfires or disconnections in this token-to-language mapping. This is also believed to be why aphasia affects only the ability to use language, but not intelligence.

Of course, our brains are far more complicated than just their language centers, and the language centers are definitely more efficient than LLMs (both in training and in use).

@MisuseCase
@scottfweintraub @mcnees I think it depends on the context and the human. I know some people who will BS an answer so they don't look bad. Sometimes they're right. I joke that someone I know who regularly does that is an LLM.

Generally, no I don't believe this is how humans think, even if it may mimic one mode we use sometimes. But I do believe there is a lot to learn about ourselves from how we are reflected in the machine.

@MisuseCase

I'm not sure that's entirely the case. I had a... chaotic childhood, and there was definitely a period where I was, especially under stress, inclined to give plausible answer-shaped replies for which actual truth was irrelevant. Around this time I had also read a lot of joke books and could confidently land dirty jokes that I had zero knowledge of.

So I suspect the LLM expectation-influenced, consistency-driven glibness is similar to part of how we answer, but it only dominates in pathological conditions (compulsive liars, fabulists, some kinds of illness or brain damage).

@scottfweintraub @mcnees

@williampietri @scottfweintraub @mcnees I would say (and I have said) that LLMs operate like one of Dr. Oliver Sacks’ patients who can convincingly fake having normal cognition for a while but fall apart on close inspection.
@MisuseCase Yes, agreed! But I think Sacks' writing is so compelling because his extremes show us the normally hidden infrastructure.
@scottfweintraub @mcnees

@scottfweintraub @mcnees

I am really getting salty about this kind of comment.

EVERY TIME a discussion about LLMs gets even slightly philosophical someone comes up with this "what if we're really like LLMs" with an implied naughty snigger.

No, LLMs do not build models of reality the way basically every animal more complex than a sea slug manages.

@scottfweintraub @mcnees LLMs have no investment in their answer. If you tell them they're wrong, they'll just reroll the dice to try to make you happy. A person won't do that if they know they're right.
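The "reroll the dice" point can be made literal with a tiny sketch (mine, not from the thread; the prompt, tokens, and probabilities are made up): a reply is sampled from a probability distribution, so asking again simply draws again, with nothing "believed" behind any particular draw.

```python
# Sketch of rerolling: the answer is a random draw from a distribution,
# not a retrieved belief, so repeated asks can yield different answers.
import random


def sample_reply(distribution, rng):
    """Draw one token from a list of (token, probability) pairs."""
    tokens, probs = zip(*distribution)
    return rng.choices(tokens, weights=probs, k=1)[0]


# Hypothetical next-token distribution for some prompt.
dist = [("Paris", 0.6), ("Lyon", 0.3), ("Marseille", 0.1)]
rng = random.Random(0)

# "Telling it it's wrong" and asking again is just another draw.
answers = [sample_reply(dist, rng) for _ in range(5)]
print(answers)
```

Nothing in the sampler prefers its previous answer; each call is independent, which is why a contradicted model cheerfully produces a different reply.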