My students are often surprised to learn that LLMs aren’t answering their questions. Rather, an LLM answers the question “what would a reply to this look like?” It’s one of the first things I explain in the “Should I use LLMs?” portion of my syllabus.
@mcnees
Thanks for sharing.
Sums up how LLMs are designed, and the gap between that and the marketing.
@mcnees but isn't that the same with (some) humans?
@ulli Exactly.
@torstentorsten @ulli so... what's the difference?

@EtherealResonance Who can tell? (Certainly not me)

But how would YOU spot and tell the difference between a reply and an uttering that looks like a reply?

@torstentorsten can't.

What feels scary is that ChatGPT can do a lot of math accurately maybe 80% of the time, mostly by writing its own Python code and letting it run.

I find that crazy all by itself.
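The "writes its own Python code and runs it" pattern mentioned above can be sketched in a few lines. Everything here is hypothetical: `run_model_snippet` and the hard-coded `generated_code` string just stand in for a real model's output and a real sandboxed interpreter.

```python
# Toy sketch of the tool-use loop: the LLM emits Python source, the host
# executes it and reads back the result. A real system would use a proper
# sandbox; exec() with stripped builtins is only a crude illustration.

def run_model_snippet(generated_code: str) -> dict:
    """Run model-generated code in a fresh namespace and return
    whatever variables it defined."""
    namespace: dict = {}
    exec(generated_code, {"__builtins__": {}}, namespace)
    return namespace

# Hypothetical model output for the prompt "what is 1234 * 5678?"
generated_code = "answer = 1234 * 5678"
print(run_model_snippet(generated_code)["answer"])  # 7006652
```

The arithmetic is then done by the Python interpreter, not by the model's text prediction, which is why this route is so much more reliable than asking the model to "do math" directly.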

@ulli @mcnees The problem is people tend to trust LLM answers uncritically. Not usually so with most human responses.

(Well, with the exception of current American politics…)

@reiterator @mcnees people vote Trump exactly because of that.

@ulli

Re: "but isnt that the same with (some) Humans ?"

Kind of, yes. There are probably humans in every field who can fake the ambience of knowledge well enough to fool other humans who _don't_ know the field.

But it isn't a brilliant idea to go to a _human_ bullshitter for advice either :-)

The difference isn't that LLMs can produce plausible-sounding bullshit and humans can't. Both can.

It's more like, most people already _know_ that some confident-bullshitter bloke in the pub may not be reliable in explaining their physics homework :-)

(or providing case law for their legal case, or telling them which mushrooms are safe to eat.)

The way LLMs have been sold as "intelligent", it might not be quite so obvious at first that they don't actually know what they're talking about - and that whether their answers are right or not is a roll of the dice. That's why it's worth explaining.

@mcnees

#LLMs #bullshit

@unchartedworlds @mcnees That's not my point. Most people we meet want to mislead us in some way; almost everyone today is out for their own advantage. LLMs are a catalyst. They are a really good interface for interacting with humans. Without enough information, they misinform. But in my view that also leads to learning, and maybe faster than other ways...

@ulli

"Without enougth information, they misinform."

This sentence implies that there's an "enough information" which could stop LLMs from misinforming people. But that isn't the case. Correct or incorrect information isn't the basis on which they function.

@unchartedworlds That's wrong. Summarizing information and extracting information from texts is something they can do.

@ulli

Is your argument that limiting its task to "summarise this specific text" means it will have "enough" information and won't get anything wrong?

@ulli

Hmm interesting. I don't think I would ever entirely trust the summary of an LLM, but then I would retain some scepticism about a summary from most humans too.

I don't think they are "a really good interface for interacting with humans", though. Not currently. For that to be the case, the average human would need a significantly better understanding of the limits of what an LLM can and can't do. Otherwise, the "learning" you refer to is going to produce a lot of damage along the way.

@mcnees

I tend to explain it as: "What would a reply look like that a significant share of the students in this room would most likely not reject, because it sounds plausible?"

@mcnees you’re up against it: you’re trying to develop brains, and AI doesn’t wire those pathways for you. And yet, it has to be said, humans apply exactly the same approach to answering questions. It’s probably better to tell them school is a brain gym…
@mcnees of course, once you get into management, you would have other humans do the work for you anyway so then it’s fine to ask an AI, it just speeds things up. Is the real question then, how do we develop good brains to be good managers?

@rpin42 Humans might sometimes resemble this process, but it is not at all accurate to say we apply “exactly the same approach” because we plainly do not. We can remember facts. We can detect inconsistencies. We can detect and ignore superfluous information. An LLM cannot do any of these things.

Sometimes the output looks like they do these things. But the fact that the output looks like the output of thinking doesn’t mean it was the result of thinking, or even the result of a process analogous to thinking. We think. It doesn’t.

@mcnees

@paco @mcnees well, I am a bit of an outlier in the way my brain works…

@mcnees Nature today linked to an interesting article and good analogy — "Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero."

Some time ago (I don't remember the source) I read about an interesting teaching approach. The assignment was to use an LLM for a given project and then to discuss where the LLM was wrong.

https://www.nature.com/articles/d41586-025-02853-8

https://openai.com/index/why-language-models-hallucinate/

Can researchers stop AI making up citations?

OpenAI’s GPT-5 hallucinates less than previous models do, but cutting hallucination completely might prove impossible.

@mcnees In my opinion a meaningless distinction. Answering a question, for human or bot, is the act of determining what a reply to this would look like (best case scenario). Both human and bot are capable of generating something that sounds like a reasonable reply but is wrong. To see this in action, ask a toddler to explain "why" and you'll get confabulations about "because."
@escarpment @mcnees why did you choose those words, rather than other words that had a different meaning but would also look like a valid reply?

FWIW what the LLM actually said in response was "That’s a sharp and important framing [...blah blah blah]".
@adam @mcnees I chose those words because that's what the deterministic system of my brain and fingers landed on at that point in time, based on the prompt.
@mcnees This made me think of your recent case, @DanielleVossebeld. Could a piece of text on how LLMs/AI work and their effect on students' development help a bit with behavioural change? Raise awareness of why 'cheating' is not beneficial?
@Frieke72 @mcnees @DanielleVossebeld I think that assumes that most students attend college to learn stuff. In my experience this is not true: many attend to get a piece of paper that qualifies them to do a job that gets them money and/or prestige. Computer science is probably one of the worst fields for this.
@Frieke72 @DanielleVossebeld @mcnees @fd93 yeah, this isn’t true.
@DrSuzanne @Frieke72 @DanielleVossebeld @mcnees I could be wrong / outdated but Times Higher Education was full of articles about the tension between "market-readiness" vs a humanistic education when I worked at the University of Westminster. And before that it was so common that "doing it for the CV" was a running joke at University of Warwick on my BA.
@DrSuzanne @Frieke72 @DanielleVossebeld @mcnees There is probably an education / sociology study somewhere about the motivations of students for attending college in 2025 and whether the diploma or the knowledge is more important to them though.
@mcnees While I agree that's important to keep in mind, what is not clear is the degree to which people also answer questions that way. LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

@scottfweintraub @mcnees >> what is not clear is the degree to which people also answer questions that way.

Yes, it is. They don’t.

>> LLMs are definitely not what we believe intelligence to be, but could it be that that belief is incorrect?

No.

@MisuseCase @scottfweintraub @mcnees That's not very evidence-based.
Honestly, I'm pretty skeptical about LLMs myself, but I'm also no longer convinced we understand much about our own intelligence.

@scottfweintraub @mcnees LLMs assign “tokens” to words and work on a kind of map of which tokens are associated with each other, in what sequence. But they don’t “know” what the words mean. They’re not even words, just tokens.

Humans don’t work like this.
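The "map of which tokens are associated with each other, in what sequence" idea can be illustrated with a toy bigram counter. This is purely illustrative (real LLMs use learned subword tokens and deep neural networks, not raw counts), and the corpus and function names here are made up.

```python
from collections import Counter, defaultdict

# Toy next-token table: count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for token, nxt in zip(corpus, corpus[1:]):
    follows[token][nxt] += 1

def most_likely_next(token):
    """Return the token that most often followed `token` in the corpus.

    There is no notion of meaning here, only association counts.
    """
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it followed "the" twice, "mat" once
```

The point of the toy is exactly the post's: the table "knows" that "cat" tends to follow "the" without any representation of what a cat is.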

@MisuseCase @scottfweintraub @mcnees We like to think we don't, and most likely the central voice that is “me” does not, but what about all the helpers?

@MisuseCase @scottfweintraub @mcnees Current research suggests our brains do actually work a *little* like that. Essentially, they have an internal representation of a concept or a relationship between concepts, which is then turned into an external representation (speech, writing, gestures, etc.) to express that concept or relationship to others. Paraphasia and aphasia are believed to be misfires or disconnections in this token-to-language mapping. This is also believed to be why aphasia affects only the ability to use language, but not intelligence.

Of course, our brains are far more complicated than just their language centers, and the language centers are definitely more efficient than LLMs (both in training and in use).

@MisuseCase
@scottfweintraub @mcnees I think it depends on the context and the human. I know some people who will BS an answer so they don't look bad. Sometimes they're right. I joke that someone I know who regularly does that is an LLM.

Generally, no I don't believe this is how humans think, even if it may mimic one mode we use sometimes. But I do believe there is a lot to learn about ourselves from how we are reflected in the machine.

@MisuseCase

I'm not sure that's entirely the case. I had a... chaotic childhood, and there was definitely a period where I was, especially under stress, inclined to give plausible answer-shaped replies for which actual truth was irrelevant. Around this time I had also read a lot of joke books and could confidently land dirty jokes that I had zero knowledge of.

So I suspect the LLM expectation-influenced, consistency-driven glibness is similar to part of how we answer, but it only dominates in pathological conditions (compulsive liars, fabulists, some kinds of illness or brain damage).

@scottfweintraub @mcnees

@williampietri @scottfweintraub @mcnees I would say (and I have said) that LLMs operate like one of Dr. Oliver Sacks’ patients who can convincingly fake having normal cognition for a while but fall apart on close inspection.
@MisuseCase Yes, agreed! But I think Sacks' writing is so compelling because his extremes show us the normally hidden infrastructure.
@scottfweintraub @mcnees

@scottfweintraub @mcnees

I am really getting salty about this kind of comment.

EVERY TIME a discussion about LLMs gets even slightly philosophical someone comes up with this "what if we're really like LLMs" with an implied naughty snigger.

No, LLMs do not build models of reality, the way basically every animal more complex than a sea-slug manages.

@scottfweintraub @mcnees LLMs have no investment in their answer. If you tell them they're wrong, they'll just reroll the dice to try to make you happy. A person won't do that if they know they're right.
@mcnees That is a very nice way of explaining it. Thank you! @malteengeler
@mcnees
Hold on now… my chatbot girlfriend is just like Data? Just a slice of statistical attention, along with all the other Wojaks talking to ChatGPT?

https://youtu.be/m2GZM0b26x0
Data: How NOT to Kiss a Women #shorts #startrek

YouTube
@mcnees AI is just a fancy name for analytics.

@mcnees
Yes, LLMs are trained to be convincing, not trained to be correct. When they are, it's by accident.

@briankrebs

Isn't it even stranger? It's actually answering "what is the most likely next word?"
And that happens to be an answer to the question.
@mcnees This is a very succinct way of explaining it.
@mcnees I’ve been assigning this mini-course/explanation by Bergstrom and West to my students and it’s been helpful https://thebullshitmachines.com/index.html
Modern-Day Oracles or Bullshit Machines: Introduction

A free online humanities course about how to learn and work and thrive in an AI world.

@emilyh @mcnees This is great stuff. I'm going to assign this to any student who argues against the "no AI in your assignments" rule.
@StuWatts @mcnees I wouldn’t say the site supports a total ban but your course your rules.
@emilyh @mcnees It's not a total ban, but when we ask them to not use it (e.g. when completing a quick research task in class), students sometimes push back.
@mcnees yes yes yes yes yes....
LLMs are "synthetic text extrusion machines"
@mcnees - Just another form of AI crap. Is it real, or is it Memorex?
@mcnees
#LLMs are kind of like Dexter, i.e. "what would a normal human response *look* like?"
They are *models*, they create a model of a response, not an actual response.

@mcnees @tchambers Imagine an #LLM as an improv actor who’s read every single script ever written. They’re given a scene to act out, and they have to come up with the next line that fits, even though they don’t have a clue about what’s going on.

#AI

@mjg @mcnees @tchambers
So, they're a member of the Democratic leadership ?

@mcnees

Is this from a larger document available online?