Absolutely! But the boosters are in denial about that, and churn out endless excuses for the LLMs: “Humans make mistakes too!” “You gave it the wrong prompt!” “This will all be fixed in the next iteration!”
@gregeganSF @mattmcirvin @ProfKinyon They call it "hallucinations", which makes it sound like a glitch that just shows up now and then, rather than an LLM's core function. Its one job is "If a response to this prompt appeared in your training data, guess what it would be." It's the same algorithm that produces correct answers and hallucinations.
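(A minimal toy sketch of that point, not how any real LLM is implemented: the prompts, probabilities, and continuations below are all invented for illustration. The same sampling loop runs whether the best-scoring continuation happens to be true or not.)

```python
import random

# Hypothetical "learned" continuation probabilities, standing in for a model's weights.
PROBS = {
    "the capital of france is": {"Paris": 0.9, "Lyon": 0.1},
    "the capital of atlantis is": {"Poseidonia": 0.7, "Paris": 0.3},  # no true answer exists; it guesses anyway
}

def generate(prompt: str) -> str:
    """Sample the most plausible continuation -- whether or not it is true."""
    options = PROBS.get(prompt.lower(), {"[something plausible-sounding]": 1.0})
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(generate("The capital of France is"))    # usually right
print(generate("The capital of Atlantis is"))  # confidently wrong: same algorithm
```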
And (with the caveat that introspection does not tell us how the brain really works): that's not how I think when I understand something. That's how I think when I'm bluffing, when I have to write about something I don't understand.
@robinadams @gregeganSF @ProfKinyon It's possible that you could train this kind of neural network to do better, but it wouldn't be via the "LLM" route of just letting it loose on a giant corpus of data. You might have to actually teach it, like a human: give it some kind of lived experience.
I am not recommending that anyone try this, mind you. But I don't think there will be a lot of effort put into it by the people who are funding this stuff, anyway, because it makes the whole process labor-intensive and obviously unprofitable. We have enough trouble educating humans.
And maybe that wouldn't even work, because it seems like LLMs only get as good as they are by being exposed to a larger corpus than a human being ever encounters. Which implies to me that they're not inherently as good at learning as we are (probably not a surprise, we have an evolutionary head start of many millions of years).