AI Is a Total Grift
A machine learning suite that spends hour after hour screening trillions of potentially medically useful molecules = kind of interesting.
A subscription to a chatbot that writes buggy code that has to be meticulously combed over before you dare put it into production, and might wind up appearing in Google search results = awful, but it’s what’s selling for some reason?
The former isn’t just “kind of interesting,” and there are lots and lots of daily use cases solved by AI that are much, much more than “kind of interesting.”
What a simple way to try to downplay it by calling it only kind of interesting.
How does this differ from most other things VCs throw money at?
cough cough crypto cough
But can we at least be thankful that it shifted focus away from augmented reality? Prior to AI, the buzz was around things like the metaverse and digital avatars in your Teams meetings.
Even crap AI is more useful than avatars in Teams.
What’s currently being marketed as AI proves that there’s always someone who can do your job worse for cheaper.
I’m just waiting for the “cheaper” part to change. Surely these VCs will want to see some ROI on the stupid amount of money these hosted models cost. There’s no way the subscription fees being charged cover the actual cost of running the models, so something will have to give eventually.
Digital avatars in Teams aren’t actively destructive to the internet, the environment, and people’s grasp on reality.
I think you’re universalising a personal grievance without fully accounting for the impacts of Metaverse bullshit, which was never practical or feasible to begin with, and the AI apocalypse sweeping the internet.
Well, I was trying to bring a little humor to the conversation by saying the silver lining is that at least this other stupid crap is gone now.
If the AI “revolution” never came, I bet a thread just like this one would exist for the metaverse or whatever, saying how it’s destroying the internet. And think about it: entering an entire world just to hold this conversation, where all users are known and conversations recorded… kind of like AI scraping.
You can see how it could get just as bad or worse. Hint: it’s not the technology that’s the problem, it’s the companies behind it, and those wouldn’t be any different.
I’m not trying to downplay AI. I’m just being realistic about the world we live in and trying not to be so doom and gloom every second of the day.
“Use our AI!”
“Hmm… I don’t know.”
“If you use AI you can fire all your employees 🤞 .”
“GIMMEE! GIMMEE! I’LL PAY ANY PRICE! I HATE EMPLOYEES SOO MUCH!”
two months later
“Why is everything broken?”
Well, not exactly a grift, but completely misunderstood.
Everyone who actually knows about AI is familiar with the alignment and takeoff problems.
(Play this if you need a quick summary: www.decisionproblem.com/paperclips/index2.html)
So whenever someone says “we are making AI,” the response should be “oh fuck no” (using bullets and fire if required).
New tagging and auto-completion is fine (there is probably a whole space of new tools that can come out of the AI research field that doesn’t risk human extinction).
We are so far away from a paperclip maximizer scenario that I can’t take anyone concerned about that seriously.
We have nothing even approaching true reasoning, despite all the misuse going on that would suggest otherwise.
Alignment? Takeoff? None of our current technologies under the AI moniker come anywhere remotely close to any reason for concern, and most signs point to us rapidly approaching a wall with our current approaches. Each new version from the top companies in the space shows less and less advancement in capability compared to the last.
I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer wanting humans only as a feedstock of atoms, or deciding to pull a “With Folded Hands” situation.
None of the “AI” companies are even remotely interested in or working on this legitimate concern.
The worry about “Alignment” and such is mostly a TESCREAL talking point (look it up if you don’t know what that is, I promise you’ll understand a lot of things about the AI industry).
It’s ridiculous at best, and a harmful and delirious distraction at worst.
If it can’t grow by itself, it is not general-purpose artificial intelligence; it would be an overly complicated elevator control system. Making its behavior deterministic and simple to reason about would let it be used safely to solve problems in industrial processes.
Think SHRDLU.
Unrelated, but what’s the difference between grift vs. scam? Internet search seems to give me the same definitions.
Is it just that grifts are personal, while scams are impersonal (like phone/internet scams)?
When I think of a scam, I think a one-off, obviously amateur attempt. An email with awful grammar saying the government will fine me a bajillion dollars if I don’t download a file is a scam. A scam will also leave you alone.
A grift is done by career slimeballs. Used car salesmen, big C-suites and corrupt politicians are grifters. It’s more offensive and more aggressive. You can’t escape a grift.
Not sure of an official difference, but my take is a grift is something that everyone’s kind of doing on the DL, but nobody is admitting that it’s a scam.
Think like a cult. Everyone’s a part of the cult, but nobody actually wants to believe they’re getting scammed or scamming others, so it’s more of a grift. People assume what they’re doing can’t last/sustain, but they do it anyway because the benefits are good.
A scam is straight up the party knowing it’s illegitimate and going out of their way to execute the scam so they can benefit at the expense of others.
Basically, I’ve always taken it as one being self-aware throughout (scam) and the other being self-aware only at the top levels (grift).
But this is all just in my head.
en.wikipedia.org/wiki/Tay_(chatbot)
You’re thinking it would require effort or coordination on the part of real people, instead of it being default behaviour for some.