@davidgerard I think this is where a lot of the unchecked hype around the various LLM platforms has muddied the waters.

they've done such a job of linking any form of machine learning with their own slop machines that, even though they're not the same thing, in normal people's heads they've become conflated.

so when we hear about machine learning making some advancement, the various AI bros leap on it as justification for their own slop, even though the connection in any practical sense is tenuous.

@davidgerard just because your shit barn is also made out of bricks doesn't mean it's the same thing as Notre-Dame cathedral.

And just because some scientists used machine learning to work out protein properties doesn't mean your useless LLM deserves any of the credit.

@davidgerard so "it has some good uses" is essentially "I heard someone used AI to do something good"

when in practice it wasn't AI as it's being sold to us at all... and it's frustrating as all hell when it's clearly messing with everything.

@8bitarcher I've heard so many random joes pretending to be experts, defending LLM companies by claiming "AI finds new drugs to cure lethal diseases". I'm tired of even explaining to them that no, LLMs are NOT used in scientific research like that, and no, researchers using purpose-specific, NON-LLM machine learning to some extent isn't computers and big tech megacorps magically "finding cures" and saving people's lives…

These people are full of bad faith and don't care about facts…

@davidgerard But they aren't talking about an organised arson ring burning houses, are they?

Cars kill people on a massive scale, ban them all, now!

@jtb @davidgerard You may want to look up what a simile is.

@davidgerard We should consider one thing: if something is called AI, it's most likely the tool the arson group uses; useful things often aren't called AI.

Even if they share the same mathematical principles! Take good old machine translation: great for making people understand each other, yet it uses the same transformer architecture as LLMs. No one calls it AI (unless there's a shitty LLM inside).

How about speech recognition? It's awesome for accessibility, yet it often has the same ethical issues as generative AI: many models are trained on work taken from humans who never subtitled things intending their work to be used by big AI companies; many of them were volunteers, even.

@qgustavor @davidgerard

I don't want to be 'that guy' and I'm not.

But there is one thing I use that is AI based, and isn't ripping anyone off - noise/wind reduction.

Sure, when it's music extraction, that can mean training on unpaid work/stems. But in that case the wind isn't going to be ripped off for royalties.

Not sure whether Whisper was trained on legal or dodgy sources, but AI transcription has really helped with accessibility, so it's a hard one there...it's shitty if people have had their work stolen though.

But it is niche, and a lot of AI IS exploitative, but there are a few examples of tools where I don't see a down side.

Unlike GenAI, which is genuinely a massive problem and needs to die in a fire.

@radioclash @davidgerard
I use Whisper daily and I'm sure it was: I have my computer transcribe every audio message I get, mostly for accessibility reasons (like how some people prefer watching with subtitles because it's easier to understand), but when it transcribes silence or noise it returns things like credits (like "subtitles by example.org" or "transcribed and translated by John Doe"). It mostly happens with my aunt's audio messages: since she whispers a lot, the model confuses her audio with movie credits.

@qgustavor @davidgerard yeah that is suspicious when it hallucinates, makes me wonder what it was trained on...likely a whole load of amateur subtitles or even commercial ones.

It's unlikely OpenAI got permission. This is when it gets tricky over net good.

Cos it helps a load of people - but those who worked on the subtitles should get credit and some of those AI billions?

@qgustavor @davidgerard I stopped using Whisper and now use whatever Capcut or Resolve uses.

Not sure if it's the same.

@radioclash @davidgerard I don't use it for subtitling. I tried using it once, to be fair, I don't even remember if the results were good or not. I guess they weren't, at least for the kind of content I was working with.
@qgustavor @davidgerard it's not bad, better than the Capcut internal one. Davinci Resolve's subtitling is accurate and easier to deal with.

@davidgerard that's one of the things that pisses me off about the "AI Revolution."

I genuinely believe that AI and LLMs are powerful tools that could be used to better humanity. Already there's some incredible work being done in the Cybersecurity arena with AI, for example.

But all of the worst people, with the worst agendas, have been handed the reins. So now this thing that could have been such a boon is being turned into a money machine that runs on orphan blood, and I hate it.

@DemonHouser @davidgerard this. There are some AI based tools which as far as I know, ripped off no-one and didn't burn down parts of the rainforest to get there.

But the majority sucks major balls, especially genAI. There's no reason to make generic, shitty-looking plagiarised work from a computer LLM that's designed by committee.

Cue that old post about wanting AI to do their washing up and taxes, not film and TV shows.

I do feel the tide has mostly turned, I think the shine is off AI in the public's eyes...there hasn't been an 'Avatar/Titanic' to sell it, and I think the bubble will burst shortly.

AI bros are pissing in the wind (maybe cos it's filtered out by AI? LOL)

@DemonHouser

> Already there's some incredible work being done in the Cybersecurity arena

if you mean with genAI, no, it's ~100% hype and breaks no new ground at tremendous expense

e.g. i covered some on thursday https://pivot-to-ai.com/2026/04/09/claude-mythos-the-ai-hacking-model-too-good-to-release-allegedly/

@DemonHouser @davidgerard what if we told you that your belief is wrong and that the LLM is not a tool, it is a weapon designed for epistemicide?
@atax1a @DemonHouser the AI is just a *tool* for making everything stupid
@davidgerard @DemonHouser tetraethyl lead, DDT, polychlorinated biphenyls, thalidomide, cesium-137, and asbestos are also "just tools", but we don't sprinkle any of that shit on our breakfast or make children's toys out of it
@atax1a @davidgerard @DemonHouser and unlike Anthropic's completely wholesale lying about their chatbot's 'cyber' capabilities, none of those have 'destroy the world' as the primary intended effect.
@rootwyrm @davidgerard @DemonHouser tech debt accelerant by arsonists for arsonists

@davidgerard I think tools vs genAI also muddies the waters.

Like I use AI tools - noise reduction is one where a) the wind doesn't need royalties and b) it isn't being ripped off and c) genuinely useful. My hiking videos rely on it, cos however well mic'd you are, wind is gonna wind.

But genAI, that's when I can't see many or any good uses for it. My work - writing and artwork - has been used in models: LAION-5B scraped my artwork (yes, I opted out of the ones I found) and Anthropic trained on a book I contributed to. That's shitty...

And for a while I was 'well I'll get my lack of money's worth' and tried using the free versions to create stuff, but it was never that good without a lot of work...the 'presets' ripped off living artists, which I avoided as much as possible, using dead artists for inspiration. But still, not impressed.

So now I am of the view genAI - NOPE, AI tools - can be OK if not exploitative.

There needs to be a lot more transparency though re: training, might need legislation.

@davidgerard and what stung about the training of my work is I never got paid for that chapter I wrote, I never sold those artworks.

If I had gotten previously paid for that work, or got some micropayment royalty when my work appears in a result, I'd probably be less salty about it.

But the idea that my work is good enough to be stolen, but apparently has 'no value' when I say 'hey I should get paid for that' had the opposite effect from what they wanted.

I stopped painting, I stopped posting it online. There are other reasons for that, but certainly the feeling I was 'working for the man' to steal my work without getting any benefit is partly why.

This will happen more and more, artists will switch to things that can't be stolen or just stop.

Why bother?

But if they'd not been so greedy and worked out a royalty/opt out system first off, then it might have been a new renaissance. (And of course sorted out the environmental issues, bitcoin could never do that either....)

@radioclash "AI" is a marketing term, but "AI" on its own these days tends to mean the genAI.

"what about the other AI" yes but that wasn't the obvious context, was it

@davidgerard obvious context? It's not like 'GenAI' is only 3 extra letters or anything.

And yes, in a textual medium with no other cues, accuracy in communication rather than assumption of context is slightly important.

Ass of U and Me after all. *shrugs*

@radioclash yeah that's great thanks

@radioclash @davidgerard

Legislation!? Oh, please. The oligarchs *own* the state. They are not inclined to use it to commit class suicide. Are we?

@davidgerard *Looks at the replies and wonders if they actually read the post*
@HollieK72 they really did not
@davidgerard No, I know ...🤦‍♀️

@davidgerard

These same individuals always look for exceptions to the rule to perpetuate a status quo.

It's counternarrative malign influence.

Similar to the "not all men" narratives during #MeToo

Similar to the "white lives" narratives during #BLM

Similar to the "nice ICE" memes during #NoKings

Similar to the "Good German" newspaper stories used to discredit Holocaust history.

The Daughters of the Confederacy used to play up Rockefeller's philanthropy in the era of Jim Crow.

@davidgerard That sounds like the argument that the Far Right use to justify more carbon emissions