@wilbowma @ionchy two points I didn't see addressed explicitly in either blog post (yours or ionchy's), though I think both are related to ionchy's analytics argument:
scale. individual actions may not contribute to real problems, but if tons of people are using LLMs, the aggregate can. for example: individual resource consumption may be negligible, but at scale it is not; individually produced slop is probably not a big deal, but replacing the majority of human output with slop is; etc.
use is training. the early models were not very good at simple coding tasks, but new models are (seemingly) pretty good at them. I have to imagine interaction logs are being used as feedback to improve the models, which is another avenue for handing the "AI" industry more power through mere use. (this is also exacerbated by scale.)
I think this second point is also part of why big companies are forcing LLM adoption on their employees: the more the workers are made to use the homunculus, the better it gets at their jobs, leading to more employee displacement and an improved profit margin.