I offer Cassandra's Complete Class Theorem¹.

All "good use cases for AI" break down into one of four categories:

I. Bad use case.
II. Good use case, but not something AI can do.
III. Good use case, but it's already been done without AI.
IV. Not AI in the LLMs and GANs sense, but in the sense of machine learning, statistical inference, or other similarly valid things AI boosters use to shield from critique.

https://wandering.shop/@xgranade/115766140296237983

___
¹Not actually a theorem.

I reserve the right to, if you're acting like an AI booster within my social media light cone, point out which of these four categories your One Good Use Case falls into.

I had to add Category IV in my self-quote to cover the fact that boosters kept showing up and claiming that things that have been done for years and years are evidence of why LLMs and image-generation GANs are useful. You know, the same ol' bait and switch I pointed out the other day.

https://wandering.shop/@xgranade/115760265788529588

@xgranade

I like how you logically put it. For me, it comes down to "Does the technology do more good than bad and is it a net positive for society and/or the environment?" I honestly do not believe what is being marketed to us as "ai" does enough good to justify its existence at this time, outside of a research center. It might be useful some day, but people are literally dying because of how it is being sold. The world is on fire and they are only adding more fuel by building data centers.

This is a very good point, and something I need to keep at the front of my mind. Even if there were somehow a magic use case for LLMs and other AI bullshit machines, that would not justify:

• Making fascists rich.
• Displacing labor rights.
• Fucking over the environment.
• Enclosing culture behind corporate ownership.
• Giving fascists a giant disinfo machine.

https://toot.cat/@zkat/115773741492078349

Kat Marchán 🐈 (@zkat@toot.cat)

@xgranade@wandering.shop the drum I’m banging is that there is no use case that could be valid enough, even if you were wrong, to justify its harms, so this conversation is ultimately irrelevant

So like, yeah, I'm gonna shitpost about how laughably bad the pro-AI position is on the technical merits, but the inhumanity of the pro-AI position on a moral basis is no laughing matter.
@xgranade all things AI (specifically genAI and LLMs) are never good; even when they seem so, there is always an angle that is so bad that I cannot in good faith use it.

@f4grx @xgranade

The "Thermodynamics of Bullshit"

Environmental: Every "funny" AI-generated image of a cat in a tuxedo is a literal withdrawal from the planet’s cooling systems. It is Carbon-to-Cringe pipeline engineering.

Labor: AI isn't "automating" work; it is laundering it. It takes the collective output of human culture, strips the names of the creators, and sells it back to them in a distorted slurry.

The Fascist Subsidy: Every token generated effectively transfers wealth to the massive corporate/fascist structures that own the compute.

Enclosure of Culture: GenAI is just a giant "copy-paste" machine built to enclose human creativity behind a subscription wall.

Environmental Nihilism: Every "low-effort" AI image is a glass of water poured into a server's cooling system while the planet burns.

[rude laughter]

@xgranade I appreciate you articulating all this, very much mirrors my views. I'll be here making good use of the favorite button.
@xgranade III, e.g., would be "good autocomplete in a source code editor".

Sadly, LLM tech has been used as an excuse not to invest in proper indexing and editor features, which are definitely possible, as they have existed before.
@divVerent @xgranade it's terrible for autocomplete, even the most AI-loving users I know turn that off after a couple of days.
@hashbangperl @xgranade No, no, it autocompletes 20 lines of parallel processing framework boilerplate and complicated function prototypes when defining members in a derived class.

But also... I could just use Go and not C++, and in Go (Apache Beam) I can just use a lambda or a bare function, no class boilerplate needed. Which is why I have this problem only at work, not at home.
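Roughly, the Go (Beam) version looks like this. This is a minimal from-memory sketch, not real work code; the element type, values, and function names are made up purely for illustration:

```go
package main

import (
	"context"
	"fmt"

	"github.com/apache/beam/sdks/v2/go/pkg/beam"
	"github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
)

// wordLen is the whole "transform": a bare function, no DoFn class to derive from.
func wordLen(w string) int { return len(w) }

// printLen is the sink, again just a function.
func printLen(n int) { fmt.Println(n) }

func main() {
	beam.Init()
	p := beam.NewPipeline()
	s := p.Root()

	// Toy input, made up purely for illustration.
	words := beam.Create(s, "one", "good", "use", "case")
	lengths := beam.ParDo(s, wordLen, words)
	beam.ParDo0(s, printLen, lengths)

	// Runs on the local/direct runner when no --runner flag is given.
	if err := beamx.Run(context.Background(), p); err != nil {
		panic(err)
	}
}
```

The transforms are just bare functions, so there is no derived-class boilerplate for an editor (or an LLM) to autocomplete in the first place.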

The proper work solution for this would not be an LLM but a centralized repository of canned code templates to insert.

Or, a better framework and/or better language.
@divVerent @xgranade @hashbangperl You have correctly identified that the problem is not the programming language per se, but rather its use.
All programming languages offer us the tools of abstraction, and using these we can work at a higher level and avoid boilerplate and other details that are not relevant to our current level of working abstraction.
This is absolutely the whole point of programming languages, and their power.
@sleepyfox @xgranade @hashbangperl All languages are useful, until someone writes a coding style for them.

Go is relatively new so it has not been boilerplated to death yet. But it sure will happen as it always does.

And then LLMs come in and help work around the issues people in suits caused.

Because you know how it goes: one bug, and someone cries "it would not have happened if we had a bit more bureaucracy"...

@xgranade

Theorem¹: wouldn’t that make it Cassandra’s Complete Class Conjecture?

@xgranade the drum I’m banging is that there is no use case that could be valid enough, even if you were wrong, to justify its harms, so this conversation is ultimately irrelevant

@zkat That's fair, and I definitely bang on that drum as well, but I'm also not above shitposting at the special pleading from the slopbros.

Perhaps I should be...

@zkat @xgranade I prefer not to deal in binaries, all one, all zero, nothing else exists. If I wanted that, I'd get another computer.
@codinghorror @zkat That's a fine instinct when approaching people acting in good faith, but is very badly misplaced when it comes to AI. I can't speak for @zkat, but as for myself, I am perfectly content to draw a pretty bright damned line between "people advocating or making excuses for anti-human AI bullshit" and "people who I'm willing to extend the assumption of good faith to."

@codinghorror @zkat Maybe put differently, I don't see any excluded middle here worth saving, and for the same reason I don't see an excluded middle between "opposes fascism" and "is fascist."

The drive to always seek out an acceptable moderate position is very exploitable, and that is something AI boosters are very wont to do.

@codinghorror
What you're responding to _is_ the nuanced position; it's saying that, even if it _could be_ conceded that genAI uses were actually practical and useful literally all of the time with zero review required, it would still not be worth the ecological, economic, and ethical impacts (among many others) that were incurred in its development and maintenance. It is asserting that there is a trade-off (undeniable, imo) and saying the trade-off is still a bad one; maybe not necessarily for the task at hand, but for society at large, like having a thresher fueled by human blood.

@zkat @xgranade

The problem here is the Category IV cases. For example, I’d be hard-pressed to find harms caused by Mozilla’s language-translation models:

  • The models are small.
  • They have done a lot of work to make sure you can reproduce the training on a single moderately-powerful machine.
  • The training data is public and curated for the specific purpose of training machine translation systems, not harvested from a load of sources without permission.

And, in the current marketing environment, things like that are also branded ‘AI’.

When the models can be trained ethically, machine learning typically does well in places where there is no harm in a wrong answer but significant value in a correct one. CPU branch predictors now often use neural networks. If they give a wrong answer, the CPU does a small amount of work that it throws away. If they give a correct answer, the CPU does useful work instead of idling. Getting the right answer 95% of the time gives a 10x or better speed up relative to not doing it. This is a great place to use machine learning. But a lot of the places where it’s proposed have significant negative real-world consequences from wrong answers.
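For a sense of scale, the "neural network" in such a predictor is on the order of this toy perceptron sketch (a simplified illustration in the spirit of published academic designs, not how any particular CPU implements it; the history length and training threshold are made up):

```go
package main

import "fmt"

const historyLen = 8 // made-up history length, purely for illustration

// perceptron holds small integer weights: one per history bit, plus a bias.
type perceptron struct {
	weights [historyLen + 1]int
}

// predict computes a dot product of the weights with the recent branch
// history (+1 = taken, -1 = not taken); a non-negative sum means "predict taken".
func (p *perceptron) predict(history [historyLen]int) int {
	sum := p.weights[0] // bias weight
	for i, h := range history {
		sum += p.weights[i+1] * h
	}
	return sum
}

// train nudges the weights toward the actual outcome (+1 taken, -1 not taken)
// when the prediction was wrong or not confident enough: the standard
// perceptron update, just integer additions.
func (p *perceptron) train(history [historyLen]int, outcome, sum, threshold int) {
	correct := (sum >= 0) == (outcome > 0)
	if correct && abs(sum) > threshold {
		return
	}
	p.weights[0] += outcome
	for i, h := range history {
		p.weights[i+1] += outcome * h
	}
}

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	var p perceptron
	history := [historyLen]int{1, 1, 1, 1, -1, 1, 1, 1} // recent outcomes of this branch
	sum := p.predict(history)
	fmt.Println("predict taken?", sum >= 0)
	p.train(history, +1, sum, 15) // the branch actually was taken
}
```

The whole "model" is a handful of small integer weights per branch, trained with additions. A wrong answer costs nothing more than a pipeline flush, which is exactly the cheap-to-be-wrong, valuable-to-be-right regime described above.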

@david_chisnall @zkat @xgranade That is the point of IV and exactly the example I've seen employed by Mozilla employees here. Someone complains about Firefox adding chatbots and whatever ai window nonsense they're talking about doing, and instead of justifying those features they move the conversation to "but you like translations right, checkmate".
@david_chisnall @zkat @xgranade The actual translators beg to differ: https://linuxiac.com/ai-controversy-forces-end-of-mozilla-japanese-sumo-community/

That said, it could technically be implemented correctly. Hard to do in the current marketing environment, which is pushing artificial idiocy everywhere under high pressure.
@david_chisnall @zkat @xgranade Regarding the branch predictors: you could avoid them completely if OS vendors did not pretend they are writing code for a PDP-11, and CPU vendors did not have to cover up that their CPUs have a pipeline.

Branch delay slots are well-known technology, and compilers are completely capable of unrolling loops and filling the slots with useful instructions.

Exposing the fact that the CPU has a pipeline avoids the need for a statistical branch predictor, and it also avoids multiple forms of Spectre vulnerabilities.

There: a solid solution without any use of AI. Unfortunately, for the reasons outlined in
https://archive.org/details/lca2020-What_UNIX_Cost_Us
solving problems is not viable; only papering over them is. The reasons boil down basically to entrenched industry inertia.
@zkat @xgranade
Branch delay slots are a good thing? 🤯
There are much better solutions than that (ignoring x86 madness, of course) unless you have an extremely constrained silicon budget.
@xgranade I'm always having to explain IV; it really confuses the issue. Yes, there's also some plain old big statistics that people are calling AI now (to boost their CVs and publications, among other things); those work pretty well...
@xgranade yes, ye golden days of statistical machine translation
@zhksh @xgranade Google Translate has peaked and is declining in quality but the peak was very much after they stopped using statistical machine translation.
@xgranade CycleGANs might be useful. A colleague of mine used one to generate MRI and CT images from a digital phantom so we had data to train and evaluate multimodal image registration.
@xgranade This is more of a very niche case, and not a counterexample to your theory.
@oxi @xgranade Also GANs are usually much less resource intensive than the stuff getting hyped these days.

@oxi There's a reason you don't train classifiers on synthetic data when you don't understand how the simulator works.

Based on having done a bunch of ML shit back in the day, I'm gonna guess that's a Category I.

@xgranade If you only use synthetic data, a trained net can only register the synthetic data. If you fine-tune the trained net with real images, the results are better than training only on the real data (<100 image pairs).
@xgranade layperson here: is automatic translation (DeepL etc.) case IV?
@niedlichenacktschnecke @xgranade Kind of a mix of II and IV, in that there's nothing that does it super well, and the best options use transformer models, but much smaller ones than what's getting hyped these days.

@xgranade I have used one at work (we were told to). The positive use I found simply saved me some time - I could have done the work, but decided to let the AI do it, so I could be a Good Little Employee.

Another time we tried, and it didn't solve the problem at all.

I have so far been unimpressed. Anything I have found of use I should have been able to find using a decent search, but of course, these have all been shittified with AI, so they no longer do their job properly.

@xgranade weather forecasting, using AI to infer conditions in areas that can't be directly measured, based on contextual and historical data. NOAA does this, and it's great. Saves lives. Lots of applications like that.

LLMs are pure trash.

@xgranade I guess my only quibble is that cat 4 is wider than people realize and actually quite important.

@xgranade okay, now make Marx sing workers' songs. I'll wait.

That said, I'm absolutely against 99% of the western AI hype.

As a Marxist, I know that a tool is not to be thrown out because its main wielders are fascists.

We shot fascists with their own guns. We use fascist social media to recruit new people.

@xgranade
My gut is telling me that the AI equivalent of all the fiber that got laid during the dotcom boom will be smaller companies that build out cloud-based ML services using the old HW.
@xgranade a non-trivial chunk of my career has been in group IV, and the equating of LLMs and GANs to AI has been maddening.

@xgranade
I am tempted to simplify this by saying that the great use cases for AI are exactly the ones which existed before the current hype of LLMs and Generative AI, when the field was in the hands of academics and researchers, not controlled by investors and the worst of the worst of the VCs.

Things ranging from contextual translation to image recognition to optimization to genetic algorithms and evolutionary computing, back when running fast and using fewer resources were the goals.

@xgranade

Category V: cryptids

@trochee @xgranade I bet Mothman could find more bugs than an LLM.

@cthos @xgranade

I was gonna say "cat v: unicorns" but the *-bros already believe (a) they're real (b) they've caught one

So "cryptids" was funnier