@drahardja I can tell these LLM- and LRM-based AI algorithms are still extremely limited, and nothing at all like a so-called “general intelligence,” because they still can’t write a program in languages like Haskell, Lisp, APL, or Forth. Why can humans learn these languages in weeks, while the AI cannot learn them no matter how much energy we burn trying to teach it?
The reason is obvious: the training datasets are made by humans who already know these things. The training data these AIs use is heavily biased toward Python, JavaScript, C, and C++ (the languages used to build these AIs, what a surprise!), and an LLM, being 100% statistical in nature, will never be able to synthesize new ideas about things it has never seen in its training data.
The limitations of LLMs and LRMs are so obvious to anyone with experience in the field that there is no way in hell these systems could replace people. Anyone who thinks AI could replace people at this point is either plain stupid or outright lying. As for Sam Altman, I am guessing he is more of a moron than a liar: he seems to have convinced himself so thoroughly that he is a genius that he can fool other wealthy people, and credulous, sycophantic journalists, into thinking he is one too. To me he seems more like a moron who doesn’t even realize he is lying, which is probably why he is so good at convincing people of his bullshit whenever he talks.
I hear governments are now talking about passing initiatives to “improve AI literacy,” but then they let guys like Sam Altman define what “AI literacy” even means, and (surprise!) he ends up defining it as diverting tax money to his corporation for integration into government institutions, and teaching schoolchildren how to become completely dependent on the products and services he sells.
I maintain that LLMs are actually very useful when used in limited ways, to make computers easier for people to use (e.g. as auto-completion tools); in my experience they are a very good tool for that purpose.
So here is what AI literacy should mean: AI should not be used to create content for you, and it sure as hell should not be used to think for you. Literacy means understanding that these AIs are trained on data made by humans who know what they are talking about, and that the training data is most useful to the AI when the humans who created it wrote it for other humans to read. AI literacy means understanding that to really learn something, you have to solve problems yourself; you can’t just ask an AI to do it for you, because you won’t learn anything that way. AI literacy means understanding that if you are using AI to think for you, you are doing something very dangerous, possibly even deadly, especially if the AI is making decisions for you whose outcomes affect the lives of other people.
@brahms @Laird_Dave @stooovie @sklrmths @fanf42
#tech #AI #SamAltman #LLM #LRM #AILiteracy #ProgrammingLanguages #ComputerProgramming