https://tldr.nettime.org/@tante/114926589403081382
@danielpunkass @phranck sorry to insert myself
I'd love to read more fleshed-out takes on this on your blogs :)
As Daniel mentions: the resources for training have been spent already. That changes the evaluation.
We should discuss this as two independent strands, usage and training, each with different implications. That's the most convincing stance at the moment.
I've summarized some arguments about the moral implications of using vs training AI in longer form for more nuance:
https://christiantietze.de/posts/2025/07/put-blinders-on-adverse-effects/
Is it possible to claim that you're off the hook on the question of whether GenAI usage should be allowed at all, focus purely on practical usage, and leave it at that? I believe that is a bit too cheap a cop-out.
@danielpunkass @phranck I think you are making rather simplistic arguments on energy use, Daniel.
First, on training, your argument is similar to saying there’s no point not flying since the plane will go anyway. In aggregate, individual decisions reduce overall demand and reduce flights over time. Similarly, the LLM/“AI” boom is fed by usage, and reducing usage will reduce new training over time.
MSFT is literally trying to re-open Three Mile Island to power new data centres for “AI”. There are credible estimates that AI data centres might use 10-20% of the world’s energy within 2-3 decades. It’s material.
On local models: it doesn’t make any difference whether the energy is distributed over literally billions of phones being individually charged or concentrated in a few data centres. It’s the aggregate energy use (plus, in the case of phones, charging efficiency) that matters. And desktops/laptops, needed to run more powerful models, use even more power. It’s fair to say that using a local LLM on your phone might only use a fraction of the energy your phone uses anyway (10%? 20%? 50%? I don’t know.) But with literal billions of phone users, it’s all real energy use. I know you know this, but the fact that you charge your phone every night anyway isn’t the point. How much you need to charge it is the point.
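To put rough numbers on the aggregation point, here’s a minimal back-of-envelope sketch in C. Every input (Wh per query, queries per day, user count) is a made-up assumption for illustration, not a measured figure:

```c
#include <stdio.h>

/* Back-of-envelope: aggregate energy of on-device LLM inference.
 * ALL inputs are illustrative assumptions, not measurements. */
int main(void) {
    const double wh_per_query    = 0.5;  /* assumed Wh per local LLM query           */
    const double queries_per_day = 20.0; /* assumed queries per user per day         */
    const double users           = 2e9;  /* assumed phone users running local models */

    const double daily_wh   = wh_per_query * queries_per_day * users;
    const double yearly_twh = daily_wh * 365.0 / 1e12; /* Wh -> TWh */

    printf("Aggregate: %.1f TWh/year\n", yearly_twh);
    /* ~7.3 TWh/year with these made-up numbers: a rounding error
     * per phone, but a material total across billions of users. */
    return 0;
}
```

The per-phone share stays small under any of these guesses; it’s the multiplication by billions of users that makes it real.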
I do almost entirely avoid Gen-AI, partly because of resource use, but more for other reasons. I see other forms of AI as much more useful, and would/do use those. (Hell, I’ve hand-coded neural networks on and off since the late 1980s!)
@danielpunkass @phranck I don’t recall the anti-LLM brigade ever being vocal about the enormous waste of power (in every device’s CPU) and the insane network overheads of Google’s and others’ JS tracking, or the inefficiencies of effectively loading the same whole OS in every separate Electron app.
Much of tech is “wasteful”, but at least LLMs can help with things, unlike the huge resources wasted just spying on (sorry, tracking) users’ internet behaviour.
@phranck @danielpunkass the history of humanity is the history of increasing energy usage and production, especially in the last three centuries
Humanity has always increased its use of energy to increase productivity and value creation through new technology
Whining about AI from this angle is nonsense
@danielpunkass right? (It's not the same as being anti-web3 or anti-blockchain or anti-crypto 😝)
I think a lot of people just like complaining, and why complain about something no one cares about when you can complain about something people do care about?
A nuanced response:
1. "anti-AI" is more "anti-LLM".
2. LLMs have many objective and many subjective downsides.
3. But there are also benefits
4. ... which are not guaranteed
To take your buffer-overflow example, I have seen examples of AI introducing buffer overflows.
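For concreteness, here’s a hypothetical C sketch of the kind of bug I mean; my own illustration, not any model’s actual output:

```c
#include <stdio.h>
#include <string.h>

/* The classic pattern: fixed-size buffer plus an unchecked copy. */
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name); /* undefined behaviour when strlen(name) >= 16 */
    printf("Hello, %s\n", buf);
}

/* Bounded alternative: snprintf truncates and always NUL-terminates. */
void greet_safe(const char *name) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet("world");                                         /* fits, so fine    */
    greet_safe("a name comfortably longer than 16 bytes");  /* truncated safely */
    return 0;
}
```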
To extend your analogy: not every medicine is beneficial. Some poisons are medicines, too.
Then there’s also the entire AI economic bubble, the rights issues, and so on. But above all, most people don’t understand LLMs.
To summarize: it’s fine to use LLMs, but like any power tool, one should read the manual first or leave it to a professional.
Same here.
@mattiem In my blog post I made an analogy to PFAS; simplified: imagine you’re against plastic, but there are plastic bottles that don’t shed microplastics into the drink. Should you refuse to drink from the plastic bottles that already exist? That decision is independent of whether to manufacture more bottles.
Does that make sense to you?
@ctietze Honestly, I’m not sure 😀
But I do think the non-analogy question is: a thing has uses! But also comes with downsides/risks/misuses. Can we rationally decide to avoid the thing, even though it has (potentially huge) utility? Yes! Up to and including regulation of the thing to the point of making it entirely illegal.
@mattiem Yes, of course that's a given, otherwise we wouldn't have to talk about this.
If we agree we both at least pretend to not be sociopaths, does that change how you want to reply?
@ctietze Maybe I just did a bad job of wording. Let me try again! Way simpler version:
Can we say no to a useful thing?
@mattiem As long as we assume free will etc. of course we can.
Should we? That's the moral question :)
@mattiem @mpospese @ctietze I don’t think we should ban AI, and I’m glad we didn’t ban internal combustion engines when they were first invented. I think cars have done a lot of damage. Gasoline engines (and even cars) have also created enormous and broad benefits for many, rich and poor, throughout the world. I would absolutely choose to live today rather than in the eighteenth century.
But there are big lessons we should learn from our mistakes with cars, too.
@danielpunkass Yes right of course!
Like all good analogies, this one breaks down if you look too closely.
ML is literally applied mathematics. Therefore into the bin it goes, together with biology, chemistry, and physics.
Yes, you can create harm with it: fake news, bioweapons, chemical weapons, the atomic bomb.
Seems like the ML math is the least lethal of this bunch. 🎉
@danielpunkass I think it’s tricky. There is, no doubt, utility in ML/AI (and has been for years), but so much of the current explosion is a nasty mix of theft, waste, environmental hazard, danger (hallucinations etc.), slop, and weapons against the already systemically victimized.
So much horrible crap is happening in the AI-O-Sphere that one could argue, and I do, that the bubble needs to pop and take out a lot of bad actors in the process before it’s ‘safe’ to do any advocacy.