@jfballenger I’m puzzled that the people who invented ChatGPT were unable to write a program that required the AI to use only factual information instead of making up nonsense.
And even after the company’s reputation was hit hard by its AI regularly producing total BS, they still haven’t fixed it, and it’s still making waves.
@tolortslubor @jfballenger Well, the technology inherently has nothing to do with factual information. The core of it is not a chatbot or a library of knowledge. It has no well-defined, measurable way to "know" anything. It's a word prediction algorithm, like the suggestions at the top of mobile keyboards. It just happens to be fed a lot of data, trained on supercomputers, and built with some clever tricks that make it really good at this.
They could probably try to make something that writes factual information, but ultimately that's not what a "large language model" does. The goal is coherent language, not accurate information. And even the coherence still isn't perfect (though it's really good).
The fact that it has genuinely accurate "knowledge" is kind of just a coincidence. Really, everything it says is made up, but factual statements really are more common in the wild than nonsense statements.
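The "word prediction" idea above can be sketched with a toy bigram model, the kind of counting trick a mobile keyboard might use. This is a minimal sketch with a made-up corpus, not how real LLMs work (they use huge neural networks over subword tokens), but the interface is the same: given context, rank likely continuations. Nothing here models truth, only what usually follows what.

```python
from collections import Counter, defaultdict

# Hypothetical tiny training corpus for illustration.
corpus = "the sky is blue . the grass is green . the sky is clear".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "the" was followed by "sky" twice and "grass" once, so:
print(predict_next("the"))  # → sky
```

The model will happily continue "the sky is" with whatever followed most often in its data, true or not, which is the "coincidence" point above: accuracy falls out of the statistics of the corpus, not from any check against reality.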
@jfballenger @marick: Nice!
I will be interested to see where this goes. Right now it feels like the early days of Wikipedia, when people argued over using it for actual information.
Given the premise behind Wikipedia and how these LLMs get their information…??