There used to be a startup where you could buy intentionally thermally inefficient computers to install in your house as radiators. They were networked, and the company would sell off distributed computing power and split the money with you, essentially subsidising your domestic heating.
This should be the model for LLM/ML compute: no huge data centres, just very cheap electric heating for households. But that won't happen in the current bubble, because investment is locked into a different model.
@_thegeoff I am 100% convinced that the endgame for LLMs and similar models is running them locally. It's the only long-term cost model that remotely makes sense.

@simonbp @_thegeoff and it's kind of the death knell for generalist models. You can get some perhaps-amusing but not very *useful* outputs from a generalist model that fits on consumer hardware.

Specialist models that run locally are often pretty great, but the economics don't support the "infinite growth" myth, so the market craze looks right past this use case.

@SnoopJ @simonbp On the other hand, you, I, and maybe a few thousand other people with a particular interest in <insert ML usage>, say SETI for me, basically fediverse our radiators. Honestly, that sounds like the future, as written by @cstross, right up until it all goes very weirdly wrong.