I'm glad somebody out there is brave enough to push back against the "personal ChatGPT usage is terrible for the environment" message https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

"If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early."

"If I choose not to take a flight to Europe, I save 3,500,000 ChatGPT searches. This is like stopping more than 7 people from searching ChatGPT for their entire lives."

Using ChatGPT is not bad for the environment - a cheat sheet

The numbers clearly show this is a pointless distraction for the climate movement

Andy Masley
@simon I saw a similar analysis and walked away realizing that my nightly background YouTube and Netflix have a much, much greater environmental impact than I realized. I'm not quitting them either, though.
@simon Looks like it was the same author; here was their previous post that I was linking to people who asked https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for
Using ChatGPT is not bad for the environment

And a plea to think seriously about climate change without getting distracted

Andy Masley

@webology @simon

This is a recasting of the "it's ridiculous to ban private jets - they are only a small percentage of CO2 emissions" argument.

We consume prodigious amounts of the Earth's resources. We need to stop encouraging even more consumption.

If someone needs to lose weight to get healthy, do we offer them a dessert mint after the burger they already ate?

Be ashamed of bad habits; don't use them to justify new ones.

That's why I got rid of my car, stopped flying, use green energy, etc.

@dalke @webology comparing private jets to LLMs makes no sense to me

A private jet emits enormous quantities of CO2 to transport just a few people

An LLM serves millions of people

@simon @dalke exactly that.

Just a small note, though: using weight or eating habits as a comparison can unintentionally come across as fat-shaming. We can discuss better choices for the planet without making people feel bad about their bodies. (I'm going to assume you meant well here.)

Outside of the drive-by reply, I didn't realize how large online streaming's environmental footprint was.

@webology @simon @dalke

"Unintentionally" fat shaming?
I'm pretty sure it is absolutely intentional.

@webology @simon

The flip side is when people who need to lose weight for health reasons get shamed as fat-phobic.

(Eg, https://medium.com/in-fitness-and-in-health/shamed-as-fatphobic-for-pursuing-better-health-d530ad30b1d behind a Medium wall but available elsewhere.)

On the other hand, I want to make flyskam - "flight shame" - real and widespread.

Discussing things nicely hasn't worked during the last 35 years.

Re: online streaming, I wonder how much of that is from the media transfer, and how much for the ad system (user tracking, customized bids, etc.)

Shamed as “Fatphobic” for Pursuing Better Health - In Fitness And In Health - Medium

In my journey to improve my health and wellness, I’ve gotten lost within the dark side of the fat acceptance, body positivity, and Health At Every Size (HAES) communities. Hi, I’m Holly, and I’m a…

In Fitness And In Health

@simon @webology

My point is to look at one's entire CO2/pollution budget, not make relative comparisons.

Dickens: "Annual income £20, annual expenditure £19/19/6, result happiness. Annual income £20, annual expenditure £20/0/6, result misery."

Collectively exceeding our budget will bring misery.

Two flights per year might be within the CO2 budget, in which case - happiness!

OTOH, two flights + ChatGPT, might bust the budget - misery.

@simon It's almost hard to believe "green" distractions like this aren't intended to prevent more effective actions (e.g. spending less money).

Also worth noting that Google Gemini is probably 80% more efficient: https://venturebeat.com/ai/the-new-ai-calculus-googles-80-cost-edge-vs-openais-ecosystem/

The new AI calculus: Google’s 80% cost edge vs. OpenAI’s ecosystem

Explore the Google vs OpenAI AI ecosystem battle post-o3. Deep dive into Google's huge cost advantage (TPU vs GPU), agent strategies & model risks for enterprise

VentureBeat
@simon
You miss one important thing: your shower has a purpose and gives you a valuable result…
@simon @Ulli no it isn't, it's waved away. It admits it could be useless, and then says that even if it is useless that's OK, because so are many other things we like. I can trust my shower; I can't trust what they are giving us. If my shower changed to hydrochloric acid once, I'd drop it and never look back.

@passwordsarehard4 @Ulli you are making a slightly different argument there

The piece argues that it's OK to spend minimal energy on things that are useless (which the author and I both believe not to be the case for LLMs)

It looks to me like you are arguing against spending energy on things that are actively harmful

@passwordsarehard4 @Ulli and yes, if there were no way to use LLMs that did not actively harm the user, I would support discouraging their use or even outright banning them, independently of their energy usage

I do not believe it is the case that all uses of LLMs actively harm their users - I think they require thoughtful application and we need to work hard to help people understand their many limitations and flaws

@simon @passwordsarehard4
Of course!
Everybody thinks he is smarter than anybody else while he is using an LLM for things an LLM is not made for…
🤣
@simon
Yes, it "is very well constructed" and simply wrong in so many ways.
I'll just take the numbers from the article: if his LEDs waste $0.40 of energy a month (and depending on where he lives, that would mean he uses them very, very rarely), it would come to about $3,200,000,000/month if all of the at least 8 billion people in the world did the same!
1/
@simon
Every month!!
At the average price per kWh in the US, this would be about 21.3 TWh/month. That's roughly the energy produced by all US nuclear power plants in 10 days, just for the use of this one small LED, if everybody did it!
2/
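The back-of-envelope arithmetic in the two posts above can be checked in a few lines. The $0.40/month and 8 billion figures come from the posts; the $0.15/kWh US average retail price is my own assumption:

```python
# Quick check of the LED example: a tiny per-person cost, scaled to everyone.
LED_COST_PER_MONTH_USD = 0.40        # figure taken from the article/post
WORLD_POPULATION = 8_000_000_000     # "at least 8 billion people"
PRICE_PER_KWH_USD = 0.15             # assumed US average retail price

total_cost = LED_COST_PER_MONTH_USD * WORLD_POPULATION   # dollars per month
total_kwh = total_cost / PRICE_PER_KWH_USD               # kWh per month
total_twh = total_kwh / 1e9                              # TWh per month

print(f"${total_cost:,.0f}/month")   # prints: $3,200,000,000/month
print(f"{total_twh:.1f} TWh/month")  # prints: 21.3 TWh/month
```

The same arithmetic also shows why the post's framing cuts both ways: any sufficiently small per-person cost looks enormous once multiplied by 8 billion people.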
@simon
Additionally, it is not a game of "I do this or that"; it is about all people protecting the environment together.
You can't argue that you may waste a certain amount of energy because someone else is doing the same.
That is not how it works!
3/
@simon
Additionally, the author completely misses that he is not paying ChatGPT with the $20 he sends them, but with the personal and/or (confidential!?) business data he feeds into the system!
4/
@simon
The owner of Perplexity said in a recent interview that he is about to release a new browser, not to help people, but to get more and better data about the users.
The $20 is only there to make people think they are paying for the service with their money, and to get more personal information about them through the payment process.
5/
@simon
It is enough money to convince people that they are paying for a service and are not the product themselves, and it is low enough that only a few people would decline the service because of the price.
6/
@simon
And after all, the results you get from an LLM are pure luck!
Those systems do not even know that they should answer questions or solve problems, nor are they "thinking"!
They just place one letter after the other, without any knowledge of the context.
7/
@simon
There are studies, for example with Perplexity (using ChatGPT) and controlled texts as input, showing a failure rate of 93%! 93%!!!!
Even if you simply guessed an answer, or asked a Magic 8-Ball, you would get a better and more reliable result than that.
So the user is either forced to verify the answer, EVERY answer, or he simply gets wrong results without even knowing it!
8/
@simon
That is completely worthless, and just adds more stupidity to the world!
And the worst thing you could do is use LLMs for coding!
You will have a really hard time when your code becomes buggy: you have to find the problem, but you have no idea why the code is the way it is, because you did not write and develop it yourself.
There is almost nothing harder than debugging someone else's code...
Sorry!
9/End

@Ulli one of the most important skills for making effective use of LLMs for coding assistance is being *really good* at code review

Engineers with great code review skills (and who don't try to avoid reading other people's code because they prefer to write their own) can get a whole lot more value out of LLMs

@simon Instead of prompting ChatGPT 40 times, you may also do 4,000 web searches with a standard web search engine (say, Google circa the early 2000s). It's a matter of perspectives and (false) equivalences.
(Edited, because I had exaggerated the orders of magnitude.)
@djoerd @simon Agree with the spirit of this comment. 1 chatbot prompt saves me 10+ Google/Bing/DuckDuckGo searches and provides MUCH better results MUCH quicker.
@rgbenderkc @simon I was not talking about saving your personal energy 😉
@djoerd @rgbenderkc @simon but surely personal energy is the thing we should be optimizing? Our lives are only so long, we as a society should be doing what we can to make computers cheaper/better/faster so I can spend my life doing what I love instead of writing boring code
@simon I admit I had to adjust a few assumptions in my head. Useful.

@simon Using it promotes AI as a viable business model as a whole, and (in the case of non-local models) provides usage data, training content, and money (if you pay) for the companies to do further bad AI business with.

So of course, while the direct impact of one query on the environment doesn't really matter, the economic impact of your using it does indeed have an influence on the environment!

The goal has to be to make AI and its connected surveillance capitalism a non-viable business model, and refraining from using it and shunning everyone who does is a way to do that!

@simon
Give me an approach which:

  • Does not waste massive amounts of energy (and water) for training
  • Does not require expensive rare materials to run
  • Doesn't spit out wrong crap
  • Can actually run locally in a private environment
  • Has a reproducible thought process
  • Is free (as in libre) to be used by everyone

and maybe then we can talk about actually deploying this in the real world. Until then they should stop their venture-capital-induced hype bubble fueled by ex-crypto-bros and get back to the drawing board. I'll take human work over an AI every day until that happens...

@the_moep we have some of those today:

- Does not waste massive amounts of energy (and water) for training: DeepSeek v3, still one of the strongest models - trained for just $6m, way less than the biggest USA models
- Does not require expensive rare materials to run: can't help with that if it rules out laptops
- Doesn't spit out wrong crap: yeah, that's still unsolved! Newer models are "better", but honestly an AI tool that never makes a mistake feels like it will always be science fiction to me

@the_moep

- Can actually run locally in a private environment: yes! We have that now. The models I can run on my laptop got really good starting from about six months ago
- Is free (as in libre) to be used by everyone: we are there too. The Qwen models are under an Apache 2 open source license. Plenty of other good models are "open weights" which is almost good enough to allow "free to be used by everyone"

@simon
Pretty much all of that does not hold true in my opinion:

DeepSeek v3, still one of the strongest models - trained for just $6m, way less than the biggest USA models

That's money, not power or resources. Any monetary cost claimed by a Chinese company can't be compared to actually free countries, as a far larger part of that cost is offset by reduced environmental and worker protections (and partial slavery), and by totalitarianism in general. Also, they allegedly based their work on OpenAI, so you might have to add those costs on top too...

Does not require expensive rare materials to run: can't help with that if it rules out laptops

It's about their requirement of specialized hardware to train: while the models might run on a "normal" CPU nowadays, they cannot be trained on a cheap phone or laptop. A normal program can be created there no problem.

Can actually run locally in a private environment: yes! We have that now. The models I can run on my laptop got really good starting from about six months ago

They really aren't. They are slow and can't handle actual reasoning, or even remember things the way the "big" models can. (And of course they are anything but intelligent.)

But it's all just a word-prediction system anyway, I guess; it now just predicts more words at a cost basically linear in how many words you input and want predicted. So with the current approach, a local machine will always be behind what one can do in a huge data center, which is why I want a different approach that isn't just LLM- and inference-based.

Is free (as in libre) to be used by everyone: we are there too. The Qwen models are under an Apache 2 open source license. Plenty of other good models are "open weights" which is almost good enough to allow "free to be used by everyone"

All they release is the finished model (and, in the case of Qwen, its weights), which of course is nice, but it does not allow reproducing or even forking their work. As long as they do not release the code that made the model, and the training data, under a free license, it IMO cannot be considered free.

Them licensing it under any kind of free software license might actually not be valid, as it's not based on work that was available under a free license. I would even go as far as to say that most models are released illegally, as they are derivatives of copyrighted works. Sticking a free software license on them does not magically save them from the copyright on the material they used.

A good comparison in the classical software development world is the CraftBukkit project, which used Mojang's copyrighted Minecraft code and got taken down not by Mojang/Microsoft but by a contributor, because their approach violated the GPLv3 license. Most "open" models run into the same issue.

@the_moep honestly those are all great rebuttals, I don't have a good counter-argument to any of them

@the_moep shunning people who use LLMs because they are being environmentally irresponsible feels dishonest to me

There are plenty of credible arguments against irresponsible usage of LLMs, I don't like seeing people waste their time on the ones that are least credible

@simon Well while I agree that one should use the correct arguments for criticizing something I also don't think one should completely discard the personal environmental-impact argument when it comes to usage of AI.

I see the issue with that argument more in that the real costs of AI queries are hidden from the user (and independent researchers): it's not the energy spent on a single query, but the amount of resources spent to create the model and to further the development of the current approach. (Which includes discarding existing hardware, or potentially even models.)

Of course that is way harder to calculate but I believe that you would get closer to other things that are now generally accepted as being bad for the environment like short-range flights.

@the_moep I somewhat agree, but you can't effectively state your case with "AI"; it is too broad a term, and there is some AI that is genuinely useful and efficient.

@tasket You are of course right, sorry. It seems that I too fell victim to all the "AI" hype talk, but I feel the actual meaning of that term is now (unfortunately) generally accepted as meaning "generative AI, especially LLMs".

I do not have such a big issue with actual useful "AI" doing stuff like voice recognition or protein folding.

@simon Actually, it's not that this is good news; it's the aggregate value that is impressive, since even 1 second of shower becomes huge when you multiply it by 7 billion. However, I do recognize that there are considerations to be made. One is that roughly 1/7 of the population today probably did not even have the chance for a shower. So, abruptly, a 1-second shower takes on a whole new dimension, and 10 ChatGPT queries for each of the other 6 billion people mean a full 1-minute shower for the remaining billion. That could probably prevent a few million deaths by infection each year.

There are dozens of ways each one of us wastes resources that mean life or death to others, and I think no one with this awareness is happy when wasting. We simply didn't need ChatGPT to be another one, just as we didn't need crypto or the metaverse.

Defending it and setting the stage for confrontation is, in the best framing I can think of, a love of numbers, and I can accept that. But however little, to me it is still too much.

@simon I'm wondering, do those numbers take into account the training power usage and all the failing attempts to produce a viable model? Genuinely asking

@Xzan the training cost estimates are covered here https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about#§training-an-ai-model-uses-too-much-energy (estimated 10% increase in cost per prompt after amortization)

I've not seen anyone ever account for the cost of failed runs, which is frustrating
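For illustration, the amortization arithmetic being discussed here can be sketched in a few lines. Every number below is hypothetical, chosen only to produce a roughly 10% figure like the one the article estimates; they are not the article's actual inputs:

```python
# Illustrative amortization of one-off training energy over lifetime prompts.
# All figures are hypothetical, picked only to show the shape of the arithmetic.
training_energy_kwh = 300_000           # hypothetical one-off training cost
total_prompts_served = 10_000_000_000   # hypothetical lifetime prompt count
energy_per_prompt_kwh = 0.0003          # hypothetical inference cost per prompt

amortized_kwh = training_energy_kwh / total_prompts_served
increase = amortized_kwh / energy_per_prompt_kwh

print(f"training adds {increase:.0%} per prompt")  # prints: training adds 10% per prompt
```

The key design point is that the fixed training cost shrinks per-prompt as usage grows, which is why the amortized share depends so heavily on the assumed lifetime prompt count.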

@simon This doesn't take into account the model training hardware cost? Much like counting the petrol used but not the environmental cost of making the car, so a bit flawed.
@einonm that's included in the estimate here: https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about#§training-an-ai-model-uses-too-much-energy - as a 10% increase in cost-per-prompt after amortization over all of the usage
@simon that's the energy cost to run the hardware, not the hardware cost itself?
@einonm you mean the cost to build the data centers? I haven't seen numbers on that - I wonder if that's a meaningful share of overall energy usage per-prompt or not, like is it a 5% increase or does it 2x or 3x the cost?
@simon ISTR some stat that building a new EV costs the equivalent of 10 years of running the petrol car it replaces, so potentially more than 2 or 3 times
@einonm I would guess that stat is heavily influenced by the enormous energy cost involved in producing batteries for EVs
@simon it also highlights that buying a new EV isn't necessarily good for the environment. More consumerism is rarely the answer.
@simon I'm also going to add that the author looks to be very much involved with the billionaire type through the 'Effective Altruism' philosophy, that doesn't look great for his credibility in pushing these arguments.
@simon counterpoint: LLMs fucking sucks and we should convince people not to use it by any means necessary
@trevorthetuba I think it's unethical to shame people into not using a technology by lying to them about its environmental impact, instead of spending time explaining all of the problems with that tech that are genuinely true
@simon @trevorthetuba “I’m going to write them a strong letter. With 8 strong questions.” That’s kind of what this appeal sounds like.

@trevorthetuba @simon I got tired of all the AI slop in a GitHub repo (the issues' text; luckily others go through that particular code, as it is JS/TS) and recommended this to my colleagues: https://gptzero.me/

It detects that the author is an LLM even if the LLM is told to deliberately include spelling and grammar mistakes. It must be based on the usage of big words (who says "paramount" IRL?) and low information density.

I do not need that tool; I can detect AI slop just by how tiresome it is to read. I've only used it 10 or so times (on both LLM- and human-written texts) to confirm my hunches.

AI Detector - Most Accurate AI Checker for ChatGPT & Gemini

The World's #1 AI Content Detector with over 8 Million Users

GPTZero

@simon I think a big problem with AI is that it's being forced upon us at a time when our pollution should be decreasing, and should have been for many years.

And now, instead of that, we've figured out a new way to burn fossil fuels for something we never needed in the first place. (Or at best marginally needed; I mean, we already had artists and minute-takers, etc.)

So I see this argument (about consumption by LLMs) as a generic system critique. Just like we need to get rid of our plastic dependency, even though, measured per person per packaged piece of candy, it is super small.

@simon "If I choose not to take a flight to Europe, I save 3,500,000 ChatGPT searches" Similar arguments were made for Bitcoin - distracting with other emissions. Even if you are saying that it is small compared to some other emissions, those emissions are still there. This is just another source that adds to the problem.

@platlas the environmental impact of Bitcoin feels different to me because of the way that system is designed: as a competition

To mine new Bitcoin you don't just have to burn energy: you have to burn MORE energy than anyone else

It's impossible to make that system more efficient, because any efficiencies will be lost to the increased competition they cause between rival miners

@simon Yes, that is an extreme example. I shouldn't have mentioned it.