Linus Torvalds reckons AI is ‘90% marketing and 10% reality’

https://lemmy.world/post/21370859


So basically just like Linux. Except Linux has no marketing… So 10% reality, and 90% uhhhhhhhhhh…
What
Some Linux bad Windows good troll
Did I fall into a 1999 Slashdot comment section somehow?
That says more about your ignorance than anything about AI or Linux.
90% angry nerds fighting each other over what answer is “right”
Never heard of Android I guess?

So basically just like Linux. Except Linux has no marketing

Except for the most popular OS on the Internet, of course.

You’re aware Linux basically runs the Internet, right?

You’re aware Linux basically runs the ~~Internet~~ World, right?

Billions of devices run Linux. It is an amazing feat!

What happened to Linus? He looks so old now…
He got old.
Not especially old, though; he looks like a 54yo dev. Reminds me of my uncles when they were 54yo devs.
As a 46 year old dev I’m starting to look that way too.
I guess having 3 kids will do that to you.
That, and developing software for 30+ years.

That and leading an open source project for 30 years.

THE open source project.

Whether you’re leading a project or not, time will have pretty much the same impact. He’s in his mid-50s, and he looks pretty good for that age.
I mean he’s aging quite well given his position… Many people burn out way earlier.
[citation needed] /s
What folks are seeing is an excessive amount of aging, not just that he’s old.
He’s 54, I think he looks pretty average for that age. He looks like an old dad, because he is.
I told him not to go to that beach.
What happened to him is happening to you now.
Wow, yeah that’s a big difference
People age. You don’t look the same as in 2010 either, I know that without having any idea what you look like.
He has a real Michael McKean vibe
It’s like he aged 10 years in the past 2 years… damn
Oxidative stress is a bitch
If you find out what happened, let me know, because I think it’s happening to me too.

Honestly, he’s wrong though.

I know tons of full stack developers who use AI to GREATLY speed up their workflow. I’ve used AI image generators to get something I wanted to the concept stage before paying an artist to do the finished work, with the revisions I wanted that I couldn’t get the AI to produce properly.

And first and foremost, they’re great at surfacing information that is discussed and available but might be buried with no SEO behind it. They are terrible at deducing things themselves, because they can’t ‘think’, or at coming up with solutions that others haven’t already. But so long as people are aware of those limitations, they’re a pretty good tool to have.

AI can give me a blueprint for my logic. Then I, as a developer, make the code run. Cuts my scripting time in half.

Rofl. As a developer of nearly 20 years, lol.

I used copilot until finally getting fed up last week and turning it off. It was a net negative to my productivity.

Sure, when you’re doing repetitive operations that are mostly copy paste and changing names, it’s pretty decent. It can save dozens of seconds, maybe even a minute or two. That’s great and a welcome assist, even if I have to correct minor things around 50% of the time.

But when an error slips through and I end up spending 20 minutes tracking down the problem later, all that saved time vanishes.

And then the other times where my IDE is frozen because the plugin is stuck in some loop and eating every last resource and I spend the next 20 minutes cursing and killing processes, manually looking for recent updates that hadn’t yet triggered update notifications, etc… well, now we’re in the red, AND I’m pissed off.

So no, AI is not some huge boon to developer productivity. Maybe it’s more useful to junior developers in the short term, but I have definitely dealt with more than a few problems that seem to derive from juniors taking AI answers and not understanding the details enough to catch the problems it introduced. And if juniors frequently rely on AI without gaining deep understanding, we’re going to have worse and worse engineers as a result.

ah yes, it’s reactionary to *checks notes* not support the righteous biggest bubble since the dotcom era

you okay out there bud?

You might want to look up the definition of reactionary. Because that’s…exactly what it means. To oppose reform/advancements.
Opposing actual fraud isn’t what reactionary means.
You’ve got a pretty high bar of proof for proving “actual fraud”…
It’s not remotely within the realm of plausibility that Sam Altman genuinely believes any of the horseshit he spews. (And that’s ignoring that they gained their funding by lying about the core intent of their organization, by pretending to be serving the public interest and not profiteering.)

It’s not remotely within the realm of plausibility that Sam Altman genuinely believes any of the horseshit he spews.

Welcome to earth.

How’s he wrong?

Did you actually listen to what he said or are you just reading the headline and making it fit another narrative to respond to?

Because he also said he thinks it’s going to change the world, he just hates the marketing BS that’s overhyping it.

Probably because, as anyone who’s actually used AI knows, it has some core weaknesses. But the marketers are happy to gloss over them and just claim that it will be able to do nearly anything.

It will be interesting when the bubble pops, because that’s probably when we’ll see the useful things it is actually good at
Which is how new technologies tend to go: see what sticks after exploring what is possible. So it shouldn’t be surprising that AI is going through the motions, but it is getting annoying how fast it is ruining functioning systems by being jammed in with no guardrails.
Summarizing documents, writing documents you don’t want to (within reason), and… whatever the hell Neuro-sama is doing on Vedal’s channel, are like the only ones I’ve found so far that kind of work. And I guess image generation.

It’s amazingly good at moderating user content to flag for moderator review. Existing text analysis completely falls down beyond keyword filtering tbh.

It’s really good at sentiment analysis, which is great for things like user reviews. The Amazon AI notes on products are actually brilliant at summarizing the pros and cons of a product. I work for a holiday let company and we experimented with using it to find customers we need to follow up with, and the results were amazing.
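A toy sketch of that follow-up flagging idea. The keyword scorer here is a crude stand-in for a real sentiment model (all names, word lists, and thresholds are made up for illustration):

```python
# Toy sketch: flag customer reviews for follow-up based on a sentiment score.
# score_review() is a stand-in for a real model; here it just counts keywords.

NEGATIVE = {"dirty", "broken", "rude", "refund", "never again", "disappointed"}
POSITIVE = {"lovely", "clean", "great", "recommend", "perfect"}

def score_review(text: str) -> float:
    """Crude sentiment score in [-1, 1]; a real system would use a model."""
    lowered = text.lower()
    neg = sum(w in lowered for w in NEGATIVE)
    pos = sum(w in lowered for w in POSITIVE)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def needs_follow_up(reviews: list[str], threshold: float = -0.3) -> list[str]:
    """Return reviews whose score falls below the follow-up threshold."""
    return [r for r in reviews if score_review(r) < threshold]

reviews = [
    "Lovely cottage, very clean, would recommend!",
    "Heating was broken and the host was rude. Want a refund.",
]
flagged = needs_follow_up(reviews)  # only the second review gets flagged
```

The point of the shape, not the scorer: score every review, threshold, and hand only the flagged ones to a human, which is the same pipeline whether the scorer is a keyword count or an LLM.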

It smashes other automated translating services as well.

I use it a lot as a programmer to very quickly learn new topics. Also as an interactive docs that you can ask follow up questions to. I can pick up a new language as I go much faster than with traditional resources.

It’s honestly a complete game changer.

It’s honestly a complete game changer.

It is, both in good and bad ways. The problem, as Linus and others here are pointing out, is that marketing pushes the good and downplays/ignores the bad, so there’s going to be a rough adjustment period as people eventually see through the BS and find the issues, and the longer that takes, the harder things will crash.

There are plenty of good uses of modern AI approaches, they’re just far fewer than the ones being marketed these days.

The one place where I sincerely hope it takes root and succeeds is in medicine. Having better drugs, helping to identify potential problems or diseases, identifying health patterns (all with human review and proper trials, naturally)…

It’s not even close to the magical AGI that tech bros are promising, but it is good at digesting data, and science and medicine are full of that. Plus, given how overworked doctors and nurses can be, having a preliminary analysis from a computer that doesn’t get tired or overworked seems like it would probably help with accurate diagnosis.

But, it also means we get Sam Altman as the next Elon Musk if he cashes in before the pop. And whatever other tech bros do the same. More filthy-rich men with the emotional maturity of a 12 year old.

Speaking as someone who worked on AI, and is a fervent (local) AI enthusiast… it’s 90% marketing and hype, at least.

These things are tools; they spit out tons of garbage, they basically can’t be used for anything where a confidently wrong output is a real risk, and the way they’re trained is still morally dubious at best. And the corporate API business model of “stifle innovation so we can hold our monopoly, then squeeze users” is hellish.

As you pointed out, generative AI is a fantastic tool, but it is a TOOL that needs some massive changes and improvements, wrapped up in hype that gives it a bad name… I drank some of the kool-aid too when LLaMA 1 came out, but you have to look at the market and see how much FUD and nonsense is flying around.

As another (local) AI enthusiast I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc on the first try.

For example, telling an AI to generate a dozen images from a prompt then picking a good one or re-working the prompt a few times to get what you want. That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).
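The "generate a dozen, keep the best" workflow described above can be sketched like this. Both `generate()` and `score()` are hypothetical stand-ins: in practice the generator is an image model and the "scorer" is usually a human picking by eye:

```python
import random

# Toy sketch of the best-of-N workflow: sample many candidates from a
# generator, then keep the one a selection step likes most.

def generate(prompt: str, seed: int) -> str:
    """Stand-in for an image generator: returns a fake 'image' id."""
    rng = random.Random(seed)
    return f"{prompt}-candidate-{rng.randint(0, 9999)}"

def score(image: str) -> float:
    """Stand-in for the selection step; deterministic toy score."""
    return (sum(map(ord, image)) % 100) / 100

def best_of_n(prompt: str, n: int = 12) -> str:
    """Sample n candidates and keep the highest-scoring one."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

pick = best_of_n("castle at sunset", n=12)
```

The design point matches the comment: per-sample reliability can be mediocre, but the workflow still "works" because you only need one acceptable result out of N tries.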

Expecting AI to respond with the correct answer when given a query more than 50% of the time, or expecting it not to get it dangerously wrong? Hype. 100% hype.

It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit, or able to generate the exact image you want on the first try.

And the things it’s great at, namely brainstorming, fiction writing, being an unreliable but very fast intern-like assistant and so on, get overshadowed by OpenAI and such trying to sell it as an omniscient chatbot and (most profitably) an employee replacement.
Let me guess. Dumped by an art girl and anxious about the $600 you invested?
If you are just blatantly copying art, well yeah you’re stealing it.

I know tons of full stack developers who use AI to GREATLY speed up their workflow.

cio.com/…/devs-gaining-little-if-anything-from-ai…

Devs gaining little (if anything) from AI coding assistants

Code analysis firm sees no major benefits from AI dev tool when measuring key programming metrics, though others report incremental gains from coding copilots with emphasis on code review.

CIO
How dare you bring sources into this opinion!
Half of the people here, Linus included, must have never used Stable Diffusion