Fundamental point that ~all people who see LLMs as "AI" seem to be missing:

The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

#AI #ML #MathyMath #AIHype #NLP #NLProc #Galactica #LaMDA #GPT3 etc etc

The word forms are not their meaning
Their meaning is not the world

Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over...
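To make the distinction concrete, here is a toy sketch: a minimal bigram "language model" that learns only which word forms tend to follow which. (This is an illustrative sketch, not how any production LLM is built; the tiny corpus and the function name `next_word_distribution` are invented for this example.)

```python
from collections import Counter, defaultdict

# A toy corpus. The "model" below sees only these word forms,
# never the sky or the sea themselves.
corpus = "the sky is blue the sea is blue the sky is wide".split()

# Count, for each word, which words follow it.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(word):
    """Return the empirical distribution over words following `word`."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model "knows" that "blue" often follows "is" -- a fact about
# the distribution of word forms, not a fact about the world.
print(next_word_distribution("is"))  # blue ≈ 0.67, wide ≈ 0.33
```

The same holds at scale: a transformer trained on text fits a (much richer) version of this conditional distribution over word forms, which is the sense in which its "knowledge" is knowledge of form, not of the world.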


@emilymbender People apparently don't learn from past attempts to solve problems from words and their assumed meanings alone. It's ironic that what has been known as #BadMetaphysics for more than 250 years is now a 21st-century paradigm blindly trusted by the heirs of #Hume. It's tragic that researchers in the 21st century ignore what generations of scholars learned in lifetimes wasted on attempts to solve an ill-conceived problem. #MasterAlgorithm
@tg9541 @emilymbender I'm under the impression that many academics from my generation (millennials?) were conditioned with the kind of big-tech arrogance that makes you think everything before 2000 is obsolete and hopelessly broken. Hence they think we need to start anew and not even bother learning from the mistakes (and successes!) of the past. I find myself committing this mistake more often than I'd like to admit.
@mc @emilymbender I can relate. It took me more than 50 years to understand that knowledge of the world is a process for each and every one of us, and a few more years to admit that I could have been the guy defending the reality of #phlogiston.