Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

That doesn't seem to be the best idea @pluralistic

AI and LLM output is 90% bullshit, and most people have neither the time nor the patience to work out which 10% might actually be useful.

That's completely ignoring the environmental and human impacts of the AI bubble.

Try buying DDR memory, a GPU or an SSD / HDD at the moment.

@simonzerafa @tante

What is the incremental environmental damage created by running an existing LLM locally on your own laptop?

As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.

@pluralistic @tante

Of course, I am speaking in generalities.

Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.

Pop a power meter on that LLM-adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂

That's power that has to be generated somewhere, even if it's with renewable energy.

The main issue with LLMs is that they don't encourage critical thinking, in a world already suffering a massive shortage of it.

@simonzerafa @pluralistic @tante

Pop a power meter on that LLM adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task

challenge accepted! :D

my laptop draws about 6 W when idling, and 25 W when playing games or running inference

so I'd attribute the difference, about 19 W, to inference

my 900 W microwave uses 15 Wh per minute

so microwaving a frozen burrito for two and a half minutes (37.5 Wh) is about equivalent to two hours of inference (or games) on my laptop (38 Wh)

also, that burrito was frozen. refrigerator wattage varies widely, but an average running draw of 150 W is typical

at 150 W the fridge draws almost 8x the power of the laptop's inference, and the fridge runs 24/7/365!

most of my inference tasks complete in about 30 seconds, at about 0.16 Wh per inference job. that's almost 940 inference jobs (assuming a 30 s average) per hour of refrigerator energy use
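the arithmetic above can be double-checked with a short script (a sketch using the figures from this thread: a 19 W inference draw, a 900 W microwave, and a 150 W average fridge draw, all taken as given):

```python
# Back-of-envelope energy comparison using the figures from the thread.
# All wattages are assumptions quoted from the posts above.

IDLE_W = 6.0                        # laptop idle draw, watts
LOAD_W = 25.0                       # laptop draw under inference/gaming, watts
INFER_W = LOAD_W - IDLE_W           # extra draw attributed to inference: 19 W

MICROWAVE_W = 900.0                 # microwave draw, watts
FRIDGE_W = 150.0                    # average fridge running draw, watts

def wh(watts, hours):
    """Energy in watt-hours for a given draw over a given duration."""
    return watts * hours

burrito_wh = wh(MICROWAVE_W, 2.5 / 60)      # 2.5 minutes of microwaving
inference_2h_wh = wh(INFER_W, 2.0)          # two hours of inference

job_wh = wh(INFER_W, 30 / 3600)             # one 30-second inference job
jobs_per_fridge_hour = wh(FRIDGE_W, 1.0) / job_wh

print(f"burrito: {burrito_wh:.1f} Wh")                      # 37.5 Wh
print(f"2 h inference: {inference_2h_wh:.1f} Wh")           # 38.0 Wh
print(f"one job: {job_wh:.3f} Wh")                          # 0.158 Wh
print(f"jobs per fridge-hour: {jobs_per_fridge_hour:.0f}")  # ~947
```

so one fridge-hour works out to roughly 940–950 inference jobs, depending on how you round the per-job energy.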

@memoria @simonzerafa @pluralistic @tante I don't think the majority of people use a computer for LLM operations that can be powered by a small fits-in-a-window solar panel.