My students are often surprised to learn that LLMs aren’t answering their questions. Rather, an LLM answers the question “what would a reply to this look like?” It’s one of the first things I explain in the “Should I use LLMs?” portion of my syllabus.
@mcnees but isn't that the same with (some) humans?

@ulli

Re: "but isnt that the same with (some) Humans ?"

Kind of, yes. There are probably humans in every field who can fake the ambience of knowledge well enough to fool other humans who _don't_ know the field.

But it isn't a brilliant idea to go to a _human_ bullshitter for advice either :-)

The difference isn't that LLMs can produce plausible-sounding bullshit and humans can't. Both can.

It's more like, most people already _know_ that some confident-bullshitter bloke in the pub may not be reliable in explaining their physics homework :-)

(or providing case law for their legal case, or telling them which mushrooms are safe to eat.)

Given the way LLMs have been sold as "intelligent", it might not be quite so obvious at first that they don't actually know what they're talking about - and that whether their answers are right or not is a roll of the dice. That's why it's worth explaining.

@mcnees

#LLMs #bullshit

@unchartedworlds @mcnees That's not my point. Most people we meet want to mislead us in some way; almost everyone today is out for their own advantage. LLMs are a catalyst. They are a really good interface for interacting with humans. Without enough information, they misinform. But in my view that also leads to learning, and maybe faster than other ways...

@ulli

"Without enougth information, they misinform."

This sentence implies there's some amount of information that would stop LLMs from misinforming people. But that isn't the case: correct or incorrect information isn't the basis on which they function.

@unchartedworlds That's wrong. Summarizing information and extracting information from texts is exactly what they can do.

@ulli

Is your argument that limiting its task to "summarise this specific text" means it will have "enough" information and won't get anything wrong?

@ulli

Hmm, interesting. I don't think I would ever entirely trust an LLM's summary, but then I would retain some scepticism about a summary from most humans too.

I don't think "They are an really good interface to interact with Humans" though. Not currently. For that to be the case, the average human would have to have a significantly better understanding of the limits of what an LLM can and can't do. Otherwise, the "learning" you refer to is going to produce a lot of damage along the way.