@wilbowma This is the best assessment I've read of how much Claude Code uses: https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
Electricity use of AI coding agents | Simon P. Couch

Most of the discourse about the environmental impact of LLM use focuses on a ‘median query.’ What about a Claude Code session?

Simon P. Couch

@samth @wilbowma I'm a little confused by this one. This appears to be energy cost of inference, but I wasn't aware anyone was actually concerned about that? I thought the main concern was energy cost of training (and then throwing away the trained model and training a new one, forever).

It seems patently obvious that inference is not so bad, and can in the future be made orders of magnitude more efficient.

Am I missing something here? Admittedly I did not read the article super carefully.

@samth @wilbowma I wonder if the problem might get solved automatically when the money runs out, if we start using models for longer without replacing them.

In the past it seemed like this would not be possible because the models need to be trained with up-to-date information. That may be true for the *chatbot* models, but my experience seems to indicate that the agents don't need to be trained on languages and libraries, etc. Point them at a repository and they learn it fresh, even if it's in a weird language.

@jonmsterling @samth @wilbowma The training cannot be separated from the inference, because they will never stop retraining with every new batch of fresh stolen material to train upon.

This tech is not necessary, not revolutionary, and not useful enough to justify the real-world costs (that other people are paying). The whole cost is the whole cost. People are kidding themselves if they only look at the cost of inference: like a car driver saying "well *my car* only emits…"

@seachaint @samth @wilbowma I agree with you. But what I'm saying is that it may not be necessary to keep training new models, for the reasons I said. Now, I don't think that will stop people from wastefully training new models. I'm just speaking about the technical justification of new models, or lack thereof.