“If you use…” there is no nuance in this take. I use LLMs to extend my creativity, learn about new things in every imaginable field, and yes, improve the quality of my apps. An LLM just quickly identified the source of a buffer overflow tonight.
https://tldr.nettime.org/@tante/114926589403081382
tante (@tante@tldr.nettime.org)

If you use #genAI you can no longer claim to "care about quality". That is just a contradiction. Your actions are saying loudly that you do in fact not give a shit about anything of the sort.

Being "anti-AI" is like being against chemistry or biology. Can you do bad things with these technologies? Yes. Should you deny every possible advantage they offer? Of course not.
@danielpunkass I think the comparison doesn't really fit. The decisive disadvantage of AI is its enormous consumption of resources, which is disproportionate to the benefit.
@phranck "The enormous consumption of resources" is again too broad. I can run useful AI on my phone, which runs on a battery that is charged every night, and can't possibly use an "enormous" amount of resources.
@danielpunkass And where does the trained model come from? How much energy, water, etc. was consumed during training? Not to mention the enormous health impact of noise and infrasound on people who live in the vicinity of such facilities.
@phranck Am I understanding correctly that you think that, once the cost has been paid, we should not try to make good use of the product?

@danielpunkass @phranck Sorry to interject!

I'd love to read more fleshed-out takes on this on your blogs :)

Like Daniel mentions: the resources for training have been spent already. That changes the evaluation.

We should discuss this as two independent strands, usage and training, with different implications. That's the most convincing stance at the moment.

I've summarized some arguments about the moral implications of using vs training AI in longer form for more nuance:
https://christiantietze.de/posts/2025/07/put-blinders-on-adverse-effects/

Can You Really Put on Blinders If You Know There May Be Adverse Effects of Your Actions?

Is it possible to claim that you’re off the hook when it comes to questions of whether usage of GenAI should be allowed, focus on practical usage, and then that’s that? I believe that is a bit too cheap a cop-out.

Christian Tietze

@danielpunkass @phranck I think you are making rather simplistic arguments on energy use, Daniel.

First, on training, your argument is similar to saying there’s no point not flying since the plane will go anyway. In aggregate, individual decisions reduce overall demand and reduce flights over time. Similarly, the LLM/“AI” boom is fed by usage, and reducing usage will reduce new training over time.

MSFT is literally trying to re-open Three Mile Island to power new data centres for “AI”. There are credible estimates that AI data centres might use 10–20% of the world’s energy in 2–3 decades. It’s material.

On local models: it doesn’t make any difference whether the energy is distributed over literally billions of phones being individually charged or concentrated in a few data centres. It’s the aggregate energy use (plus, in the case of phones, charging efficiency) that matters. And desktops and laptops, running more powerful models, use even more power. It’s fair to say that a local LLM on your phone might only use a fraction of the energy your phone uses anyway (10%? 20%? 50%? I don’t know), but with literal billions of phone users it’s all real energy use. I know you know this, but the fact that you charge your phone every night anyway isn’t the point. How much you need to charge it is the point.

I do almost entirely avoid Gen-AI, partly because of resource use, but more for other reasons. I see other forms of AI as much more useful, and would/do use those. (Hell, I've hand-coded neural networks on and off since the late 1980s!)

@danielpunkass Don't get me wrong. I'm not complaining about the existence of AI. I'm using it as a tool, too. However, we must not forget all the implications. And I don't feel that the big tech companies are very responsible with this aspect.
@phranck I hear you. It’s a separate criticism from what I’m talking about, though.

@danielpunkass @phranck I don’t recall the anti-LLM brigade ever being vocal about the enormous waste of power (in every device’s CPU) and the insane network overhead of Google’s and others’ JS tracking, or the inefficiency of effectively loading a whole separate OS in every Electron app.

Much of tech is “wasteful”, but at least LLMs can help with things, unlike wasting huge resources just spying on (sorry, tracking) users’ internet behaviour.

@phranck @danielpunkass the history of humanity is the history of increasing energy usage and production, especially in the last three centuries

Humanity has always increased use of energy to increase productivity and value creation through new technology

Whining about AI from this angle is nonsense

@danielpunkass right? (It's not the same as being anti-web3 or anti-blockchain or anti-crypto 😝)

I think a lot of people just like complaining, and why complain about something no one cares about when you can complain about something someone does care about?

@danielpunkass

A nuanced response

1. "anti-AI" is more "anti-LLM".
2. LLMs have many objective and many subjective downsides.
3. But there are also benefits
4. ... which are not guaranteed

To take your example of a buffer overflow: I have seen examples of AI introducing buffer overflows.

To extend your analogy: Not every medicine is beneficial. Poisons are medicine too.

Then there is also the entire AI economic bubble, rights issues, and so on. But above all most people don't understand LLMs

@danielpunkass

To summarize: it's fine to use LLMs, but like any power tool, one should read the manual first or leave it to a professional.

@DevWouter I try to remain rigidly “in control” when I work with LLMs. This is how I learn, but also how I avoid it ever introducing a buffer overflow, for example. In my case it pointed right to the line of code where my human eyes and brain could confirm the bug (a missing sentinel at the end of a static array).
@danielpunkass Chemistry and biology are not technologies though. They are scientific pursuits that seek to understand the realities around us.

@mattiem In my blog post I made an analogy to PFAS; could be simplified to: imagine you're against plastic, and there are plastic bottles that don't shed microplastics into the drink: should you not drink from plastic bottles that exist? This is a decision that's independent of not manufacturing more bottles.

Does that make sense to you?

@ctietze Honestly, I’m not sure 😀

But I do think the non-analogy question is: a thing has uses! But also comes with downsides/risks/misuses. Can we rationally decide to avoid the thing, even though it has (potentially huge) utility? Yes! Up to and including regulation of the thing to the point of making it entirely illegal.

@mattiem FWIW I'm not pro-gun but I'm a fan of having a pocket knife to cut things
@ctietze Right excellent analogy, and one I think about often.
@mattiem I'm not sure I gather what exactly you mean with 'can we rationally decide' -- are you musing about modal logic, about possibility?
@ctietze I was trying to make a distinction. If you are a sociopath, literally anything that helps you achieve your goals is good. Utility can be at odds with morality. In fact, I think it’s very common.

@mattiem Yes, of course that's a given, otherwise we wouldn't have to talk about this.

If we agree we both at least pretend to not be sociopaths, does that change how you want to reply?

@ctietze Haha ok sure, I’m happy to delete the “rationally”
@mattiem OK, then what do you mean by "can we decide to...?" Is that a plea? Because if you're asking whether it's merely possible, well, yes, why not; but it feels like that isn't an interesting question to you.

@ctietze Maybe I just did a bad job of wording. Let me try again! Way simpler version:

Can we say no to a useful thing?

@mattiem As long as we assume free will etc. of course we can.

Should we? That's the moral question :)

@mattiem @ctietze People do it all the time. Loads of people say no to cars. Are they personally convenient and useful? Yes, of course! Are they generally detrimental to yourself and the community you live in and even society as a whole? Arguably, yes! I view LLMs similarly because I personally believe the harms vastly outweigh the benefits. But as a society, people would need to choose. Based on history, I’m not optimistic.
@mpospese @ctietze Yeah in general if something is useful *to you* but has downsides *to others*, humans will still do it until laws are involved.
@mattiem @mpospese @ctietze "Laws" are what it means to say "as a society" we do things. We say no to tons and tons of useful things. Marijuana is useful for many conditions, we banned it for decades. Child labor is also "useful." We ban it. CFCs were useful. We banned them. Pseudoephedrine is incredibly useful, but we've regulated it to the point where you can barely get it anymore. We ban useful things all the time. Sometimes for good reasons, and sometimes not. But certainly we can and do.

@mattiem @mpospese @ctietze I don't think we should ban AI, and I'm glad we didn't ban internal combustion engines when they were first invented. I think cars have done a lot of damage. Gasoline engines (and even cars) have also created enormous and broad benefits for many, rich and poor, throughout the world. I absolutely would choose to live today rather than in the eighteenth century.

But there are big lessons we should learn from our mistakes with cars, too.

@mattiem @danielpunkass But when chemistry was a technology, for the short period after the discovery of radioactivity, that did not end well 🥹
@krzyzanowskim @danielpunkass What do you mean?
@mattiem @danielpunkass In the early 20th century, a "radium hype" swept across the world, fueled by the discovery of radium's luminescence and perceived health benefits
@krzyzanowskim @danielpunkass Ahh right! Kind of similar to what happened with lead. It’s a very useful material, it just turns out that the downsides are huge.
@mattiem @krzyzanowskim @danielpunkass Much worse than lead, actually. Radium is a great analogy for LLMs: it was shoved into all sorts of places where it added no value. For instance, https://www.orau.org/health-physics-museum/collection/radioactive-quack-cures/pills-potions-and-other-miscellany/vita-radium-suppositories.html
Vita Radium Suppositories (ca.1930)

@mattiem See, now this is what I call nuance!!! Have to think on it… but … whatever point I’m trying to make doesn’t hinge on LLMs being either a technology or a science. It hinges on having both beneficial and harmful uses.
@danielpunkass @mattiem What about energy use, the effect on the environment, the building of huge data centers and possibly privately owned nuclear power plants, the amount of water spent to cool that stuff when these resources are increasingly scarce? Are there any benefits there?

@danielpunkass Yes right of course!

Like all good analogies, they break down if you look too closely.

@mattiem @danielpunkass

ML is literally applied mathematics. Therefore into the bin it goes together with biology, chemistry, physics.

Yes you can create harm: fake news, bio weapons, chemical weapons, atomic bomb.

Seems like ML math is the least lethal of the bunch. 🎉

@mrudokas @danielpunkass ML will play a role in all physical weapons humanity builds from this point forward.
@danielpunkass Basically "because PFAS exist we should stop doing chemistry"

@danielpunkass I think it’s tricky. There is, no doubt, utility in ML/AI (and has been for years), but so much of the current explosion is a nasty mix of theft, waste, environmental hazard, danger (hallucinations etc.), slop, and weapons against the already systemically victimized.

So much horrible crap is happening in the AI-O-Sphere that one could argue, and I do, that the bubble needs to pop and take out a lot of bad actors in the process before it’s ’safe’ to do any advocacy.