@cap_ybarra @davidgerard There's a wide swath of techies that succumb to the allure of "usefulness". Crypto is easy for them to see through, because it has no fundamental use case that justifies its expense.
LLMs, on the other hand, have many apparent uses. Not only that, but techies love using tech to solve non-tech problems--often because they don't understand those problems, so they assume they can be easily solved with tech. LLMs present themselves as the "anything solution", and they can badly do so many things (but: they do do them) that many techies just point at all the things they can do and go, "See how useful it is!"
LLMs are Dunning-Kruger wet dreams.
And what I'm learning is, so, so, SO many professional programmers are actually shitty programmers that can't even tell the difference between error-ridden slop and maintainable code.
@Azuaron @cap_ybarra @davidgerard "SO many professional programmers are actually shitty programmers that can't even tell the difference between error-ridden slop and maintainable code."
This also explains the existence of nodeJS and every js-frontend framework out there.
(Only half joking; I think the causality goes the other way: "everyone uses React, must be good programming skill" -> everyone measures their skill on crap -> machines produce the crap -> machine output is great!)
@davidgerard @jplebreton crypto bros are about the hustle. GenAI is the current hustle.
Same group of guys will move to whatever the next big thing is. If there is no next big thing when GenAI investment collapses, they'll be the guys either jumping out windows or consolidating their money, depending on whether they were able to sell at the high or not.
The amount of damage done by techbros convincing themselves that they are very smart, while being the dumbest and most credulous people alive for any sort of buzz that vaguely promises to tell them they're special, has been catastrophic for society as a whole.
I think it is partly to do with the brainrot from years of seeing tech advance quickly. Early on, even the anti-AI people were mostly assuming that this tech was legit gonna be replacing artists etc.
I must admit I have myself become disabused of this notion now. We are long past the days of seeing video game graphics and mobile phones advance every year. Some technologies are just dead ends or cap out early on like fridges. Turns out LLMs are a dead end tech and the corpos are just horny to replace us with them.
@chrysanthos @davidgerard It has never made sense to me that some people think it could continue long-term or was sustainable. But even when tech was still advancing quickly, it caused many problems, like a major increase in consumerism, and transformed us into a throw-away society. It made so many people not appreciate what they had, and instead had them constantly needing the next thing. And advancing hardware made software lazy and inefficient; overcoming that with brute force (new hardware) just hurt the environment through e-waste from the old stuff and the ever-increasing power usage of the new.
Also, I'm not sure why anyone ever believed that there could be a desire for AI to replace the arts :/ or to replace anyone or anything, really.
It has niche use cases that are truly beneficial but that's not what's driving it.
If you want pervasive surveillance, autonomous systems where the inaccuracy is considered a benefit, and a sledgehammer with which to bludgeon your creative talent into submission or poverty, this is the tool for you!
Damn the environmental impact!
Money-motivation vs Laziness. Fewer of the former, more of the latter.
I've been wondering about this. I think it's because machine learning (in the broad sense, including other technologies like DL) is fun. It's really technically interesting to implement ML algorithms to approach problems, and even more interesting to try to solve the problems those ML algorithms cause.
Contrast this with blockchains, which are pretty boring from a technical point of view.
A large part of why I write code for a living is that it's fun. Non-technical people might not understand this, but designing code (especially optimising code) is basically a puzzle video game.
So I can see why people with brains like mine might get swept into wanting to believe that the fun project to hack on is also going to be worldchanging.
I think this would be true if people were writing LLMs; I know one person who started writing his own to run locally. Not for me, but I can see where that comes from.
Using LLMs as a service just does not fall into this for me though, about as far from profiling and performance optimisation (which I like and agree can be puzzle game like) as I can think of.
Yeah this is a fair point and I think beats my point.
@davidgerard
No “100% local «AI»” guy has shown me his model.
@atax1a @katrinatransfem @davidgerard
Look how smart I am! See, I paste my horrible ideas into the head-patting machine, and it tells me I'm right and a genius and that the idea will totally work. You should try getting your head patted by the head-patting machine too.
@europlus @katrinatransfem @atax1a @davidgerard
*** ELIZA ***
Original code by Weizenbaum, 1966
To stop Eliza, type 'BYE'
SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?
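The ELIZA gag is apt because the whole trick fits in a page of pattern-matching rules: keyword regexes plus canned reflections, no understanding anywhere. A toy sketch of the mechanism (my own illustration, not Weizenbaum's actual 1966 script):

```python
import re

# ELIZA-style responder: each rule is (keyword pattern, canned reply
# template); the captured fragment is parroted back at the user.
# These particular rules are made up for illustration.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "HOW LONG HAVE YOU BEEN {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "WHY DO YOU FEEL {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "TELL ME MORE ABOUT YOUR {0}."),
]
FALLBACK = "SAY, DO YOU HAVE ANY PSYCHOLOGICAL PROBLEMS?"

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            # Echo the user's own words back, minus trailing punctuation.
            return template.format(m.group(1).rstrip(".!?"))
    return FALLBACK
```

Feed it `respond("I am sure my idea will work")` and it pats your head right back; that's the entire head-patting machine, 1966 edition.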
~*~ the benchmarks ~*~
@davidgerard @katrinatransfem there is something fishy about the most recent influencer campaign. They even got Knuth to mention a specific product by name.
It's like they did psychological profiling on individuals and then gave each one exactly what they needed to praise that product, and not its competitor (and of course to push the concept as a whole, but only secondarily).