#AI isn’t a threat to humanity, the humans who wield it are.
#ElonMusk
#PhonyStark
#PeterThiel
#TechnoFascism
#Privacy #Surveillance
#SurveillanceCapitalism
#AI #ML #BigData
#BigTech #VC
#SiliconValley
#OpenAI #ChatGPT
#ChatBot #Chatbots
#ArtificialIntelligence
#MachineLearning
#ElonMuskIsATroll
#Democracy
#JustSayin’
👇
@gmusser Sigh. Explaining why an algorithm decided something is impossible (or at best very hard), even for the simpler machine learning stuff, even in simple cases. That has been one of the weaknesses of the whole field since forever. It predicts stuff like an oracle, and it can be superb at that. But it does not explain itself.
As people have observed, #ChatGPT does not lie. Lying would imply that it has a notion of truth, which it simply does not.
It spits out very convincing text.
@gmusser Or to put it more simply, it's a very nice implementation of P(next_token | context). Plus a ton of helpers.
The moment you accept that at its core it's a probability distribution over next tokens, the behavior makes sense: that probability function does not know about many things. Truth, how the world looks, what books actually exist (hence the funny references it likes to generate to non-existent stuff).
It only cares which next tokens will make the text sound most plausibly realistic in the context.
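To make the P(next_token | context) idea concrete, here's a toy sketch of my own (a simple bigram model built from raw counts, nothing like the neural architecture of an actual LLM, and conditioning on only one previous token instead of a long context). The point it illustrates is the same, though: nowhere in the model is there a notion of truth, only token frequencies.

```python
import random
from collections import Counter, defaultdict

# Tiny "corpus" to learn from. A real model trains on vastly more text,
# but the principle is identical: count what follows what.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# counts[prev][next] = how often `next` followed `prev` in the corpus
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(context):
    """P(next_token | context), here degraded to P(next | last token)."""
    c = counts[context[-1]]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

def generate(start, length=8, seed=0):
    """Sample token by token from the distribution -- that's all it does."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        dist = p_next(out)
        tokens, probs = zip(*dist.items())
        out.append(random.choices(tokens, weights=probs)[0])
    return " ".join(out)

print(p_next(["the"]))    # 'cat' is most probable after 'the' in this corpus
print(generate("the"))
```

The generated text is locally plausible because the statistics are right, yet the model has no idea whether "the cat ate the fish" is true; scaling the context window and parameter count changes the quality, not that basic fact.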
@gmusser Sure, you can add all kinds of bells and whistles on top, but at its core that's what it remains: a predictor of which next "tokens" (think: words) will sound most realistic given the context so far.
(If that sounds like autocomplete, it is, just on steroids: orders of magnitude bigger, but the errors stay the same, just less visible, because a) you are not interacting directly at the word level when the text is spit out, and b) the system is orders of magnitude more powerful.)