I'm glad somebody out there is brave enough to push back against the "personal ChatGPT usage is terrible for the environment" message https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

"If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early."

"If I choose not to take a flight to Europe, I save 3,500,000 ChatGPT searches. This is like stopping more than 7 people from searching ChatGPT for their entire lives."
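The flight comparison quoted above is internally consistent: 3,500,000 searches split across "more than 7 people" implies roughly 500,000 searches per person-lifetime. A quick sanity check, where the 18-searches-per-day rate is a hypothetical assumption for illustration (not a figure from the article):

```python
# Sanity check of the flight-vs-searches comparison quoted above.
# 3,500,000 comes from the linked article; 18 searches/day is an
# assumed rate for illustration only.
flight_in_searches = 3_500_000   # ChatGPT searches "saved" by skipping one flight
people = 7                       # "more than 7 people ... their entire lives"

searches_per_lifetime = flight_in_searches / people     # 500,000 per person
searches_per_day = 18                                   # assumed usage rate
years = searches_per_lifetime / (searches_per_day * 365)

print(f"{searches_per_lifetime:,.0f} searches per lifetime")
print(f"about {years:.0f} years at {searches_per_day} searches/day")
```

At that assumed rate, 500,000 searches is on the order of an adult lifetime of daily use, which is roughly what the article's framing implies.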

Using ChatGPT is not bad for the environment - a cheat sheet

The numbers clearly show this is a pointless distraction for the climate movement

Andy Masley
@simon
You miss one important thing: your shower has a purpose and gives you a valuable result…
@simon @Ulli no it isn't, it's waved away. It admits it could be useless and then says that even if it is useless, that's OK because so are many other things we like. I can trust my shower; I can't trust what they are giving us. If my shower changed to hydrochloric acid once, I'd drop it and never look back.

@passwordsarehard4 @Ulli you are making a slightly different argument there

The piece argues that it's OK to spend minimal energy on things that are useless (which the author and I both believe not to be the case for LLMs)

It looks to me like you are arguing against spending energy on things that are actively harmful

@passwordsarehard4 @Ulli and yes, if there were no way to use LLMs that did not actively harm the user, I would support discouraging their use or even outright banning them, independently of their energy usage

I do not believe it is the case that all uses of LLMs actively harm their users - I think they require thoughtful application and we need to work hard to help people understand their many limitations and flaws

@simon @passwordsarehard4
Of course!
Everybody thinks he could be smarter than anybody else while he is using an LLM for things an LLM is not made for…
🤣
@simon
Yes, it "is very well constructed" and simply wrong in so many ways.
I just take the numbers from the article: if his LEDs waste $0.40 of energy a month (and depending on where he lives, that would mean he uses them very, very rarely), it would come to about $3,200,000,000/month if all of the at least 8 billion people in the world did the same!
1/
@simon
Every month!!
At the average price per kWh in the US, that would be about 21.3 TWh/month. That's roughly the energy produced by all US nuclear power plants in 10 days, just for the use of this small LED, if everybody did it!
2/
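Ulli's back-of-envelope numbers do check out arithmetically. A minimal sketch, assuming an average US electricity price of about $0.15/kWh (the exact price is my assumption; the thread only says "the average price per kWh in the US"):

```python
# Reproducing the LED back-of-envelope calculation from the thread.
# $0.15/kWh is an assumed round figure for the US average price.
led_cost_per_month = 0.40      # $ per person per month (from the article)
population = 8_000_000_000     # "at least 8 billion people"

total_dollars = led_cost_per_month * population        # $3.2 billion/month
price_per_kwh = 0.15                                   # assumed $/kWh
twh_per_month = total_dollars / price_per_kwh / 1e9    # 1 TWh = 1e9 kWh

print(f"${total_dollars / 1e9:.1f} billion per month")
print(f"about {twh_per_month:.1f} TWh per month")
```

US nuclear plants generate on the order of 770 TWh per year, or roughly 2.1 TWh per day, so 21.3 TWh is indeed about 10 days of output, matching the thread's claim.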
@simon
Additionally, it is not a game of "I do this or that"; it is about protecting the environment, by all people.
You can't argue that you may waste a specific amount of energy because someone else is doing the same.
That is not how it works!
3/
@simon
Additionally, the author completely misses that he is not paying ChatGPT with the $20 he is sending them, but with the personal and/or (confidential!?) business data he is feeding into the system!
4/
@simon
The owner of Perplexity said in a recent interview that he is about to release a new browser, not to help people, but to get more and better data about its users.
The $20 is only there to make people think they are paying for the service with their money, and to gather more personal information about them through the payment process.
5/
@simon
It is enough money to convince people that they are paying for a service, and that they are not the payment themselves, and it is low enough that only a few people will decline to use the service because of the price.
6/
@simon
And after all, the results you get from an LLM are pure luck!
Those systems do not even know that they should answer questions or solve problems, nor are they "thinking"!
They are just placing one letter after the other, without any knowledge of the context.
7/
@simon
There are studies, for example with Perplexity (using ChatGPT) and controlled texts as input, showing a failure rate of 93%! 93%!!!!
Even if you simply guessed an answer, or asked an 8-Ball, you would get a better and more reliable result than that.
So the user is either forced to verify the answer, EVERY ANSWER, or he simply gets wrong results without even knowing it!
8/
@simon
That is completely worthless, and it just adds more stupidity to the world!
And the worst thing you could do is use LLMs for coding!
You will have a really hard time: if your code becomes buggy, you have to find the problem, but you have no idea why the code is the way it is, because you did not write and develop it yourself.
There is almost nothing harder than debugging someone else's code...
Sorry!
9/End

@Ulli one of the most important skills for making effective use of LLMs for coding assistance is being *really good* at code review

Engineers with great code review skills (and who don't try to avoid reading other people's code because they prefer to write their own) can get a whole lot more value out of LLMs