Passing thought on the Claude code leak, and how messy and wasteful the code apparently is (as many of us suspected):

There’s been a lot of focus on the energy & environmental costs of running LLMs. There are also energy & environmental costs to deploying LLM-generated code.

Similarly, there’s a lot of focus on the kickback cloud vendors who fund AI companies get when those companies rent their servers to run LLMs. There may be a similar kickback from computationally inefficient LLM-generated software driving up demand for those same servers.

1/


Recently, I had to ask whether a group proposing a machine learning system for a particular research task was actually going to build that system, which might save ongoing computing costs, or just put a chatbot interface and layers of text generation between the users and existing code, which would steeply increase costs and add new failure modes.

So. Yeah.