Have you ever practiced tennis by hitting the ball against a wall? The wall always returns the ball according to the rules of physics.
The LLM is the wall. It's not about right or wrong, accurate or inaccurate, truth or lies. It is only a mirror.
Build the mirror from crap data. The Big Machines will index it all, grind it up like weisswurst, down to the probability of the next word in a list. Everything else is beside the point: the math underneath is nothing but endless MULT instructions on millions of processors, munching away on a corpus.
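That "probability of the next word" is the whole trick. A minimal sketch of the last step, turning raw scores (logits) into a next-word distribution via softmax; the vocabulary and numbers here are invented for illustration:

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    # Subtracting the max is the standard numerical-stability trick.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next words
vocab = ["ball", "wall", "mirror", "weisswurst"]
logits = [2.0, 1.0, 0.5, -1.0]

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```

The highest logit gets the highest probability, but every word keeps a nonzero share; that residue is where the surprises come from.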
But what if we populated it with good, trustworthy data? The models could be smaller; we might even build them with L-systems. Ethics could thus be baked in directly.
Mirrors are predictable and we can explain how they work. We know why we get the reflections we see.
LLMs are the opposite.
You're wrong. None of the math is on your side.
I really do think people should have to pass a test showing they understand linear algebra and the rudiments of indexing a corpus.
Can't pass the test? None of this machine learning for you.
I might add, it's truly shameful, these Chicken Littles running around telling us AI is gonna Take Yer Jerbs. The same idiots have been saying that since the PC displaced the typewriter. Don't be a Chicken Little. Take some math courses.
Which math? The math that allows you to predict in advance what an LLM will output for a given prompt, every time?
They're stochastic.
And their "reasoning" is opaque.
This is not simply linear algebra.
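"Stochastic" is concrete, not hand-waving: even with the distribution over next words completely fixed, decoding samples from it, so the same prompt can yield different continuations. A toy sketch of temperature sampling (all vocabulary and scores invented for illustration):

```python
import math
import random

def apply_temperature(logits, t):
    # Divide logits by temperature, then softmax.
    # Higher t flattens the distribution, increasing randomness.
    scaled = [x / t for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    # Draw one index from a discrete distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

vocab = ["mirror", "wall", "ball"]
logits = [2.0, 1.5, 0.2]  # invented scores

rng = random.Random()  # unseeded on purpose: runs differ
probs = apply_temperature(logits, t=1.0)
draws = [vocab[sample(probs, rng)] for _ in range(5)]
print(draws)  # which words appear varies from run to run
```

The distribution is pure linear algebra plus an exponential; the draw on top of it is what makes the output unpredictable prompt-by-prompt, which is the point being argued here.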