@EtherealResonance Who can tell? (Certainly not me)
But how would YOU spot the difference between a reply and an utterance that merely looks like a reply?
@torstentorsten can't.
What feels scary is that ChatGPT can do a lot of maths very accurately, maybe 80% of the time. Mostly by writing its own Python code and running it.
I find that alone pretty crazy
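For what it's worth, the pattern is roughly "model emits code, a runner executes it, the result comes back". Here's a toy sketch of that loop; `fake_model` is a hard-coded stand-in for the LLM, and ChatGPT's real tool-use machinery is of course far more involved (and sandboxed) than this:

```python
# Toy sketch of the "write code, run it, read the result" pattern.
# fake_model is a hypothetical stand-in for an LLM, not a real API.

def fake_model(question: str) -> str:
    """Pretend-LLM that answers a maths question by emitting
    Python source instead of computing 'in its head'."""
    # A real model would generate this string; here it is hard-coded.
    return "result = sum(n * n for n in range(1, 101))"

def run_generated_code(source: str) -> object:
    """Execute the generated snippet and pull out its result.
    A real system would sandbox this, never bare exec()."""
    namespace: dict = {}
    exec(source, namespace)  # unsafe with untrusted code!
    return namespace["result"]

answer = run_generated_code(fake_model("sum of squares 1..100"))
print(answer)  # 338350
```

The point being: the arithmetic is done by the Python interpreter, not by the model itself, which is why this route is so much more reliable than asking the model to "just compute".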
Re: "but isnt that the same with (some) Humans ?"
Kind of, yes. There are probably humans in every field who can fake the ambience of knowledge well enough to fool other humans who _don't_ know the field.
But it isn't a brilliant idea to go to a _human_ bullshitter for advice either :-)
The difference isn't that LLMs can produce plausible-sounding bullshit and humans can't. Both can.
It's more like, most people already _know_ that some confident-bullshitter bloke in the pub may not be reliable in explaining their physics homework :-)
(or providing case law for their legal case, or telling them which mushrooms are safe to eat.)
The way LLMs have been sold as "intelligent", it might not be quite so obvious at first that they don't actually know what they're talking about - and that whether their answers are right or not is a roll of the dice. That's why it's worth explaining.
"Without enougth information, they misinform."
This sentence implies that there's an "enough information" which could stop LLMs from misinforming people. But that isn't the case. Correct or incorrect information isn't the basis on which they function.
Is your argument that limiting its task to "summarise this specific text" means it will have "enough" information and won't get anything wrong?
Hmm interesting. I don't think I would ever entirely trust the summary of an LLM, but then I would retain some scepticism about a summary from most humans too.
I don't think "They are an really good interface to interact with Humans" though. Not currently. For that to be the case, the average human would have to have a significantly better understanding of the limits of what an LLM can and can't do. Otherwise, the "learning" you refer to is going to produce a lot of damage along the way.