“AI sceptics” who work on policy and education continue to overestimate the utility of LLMs, portraying them as a potential revolution even as they warn against overhype, simply because they can’t see that generating seemingly coherent text has very little economic value.
LLMs are not automation. Automation would have economic value. The generated text artefacts merely look coherent, and we, as a society, don’t even value the genuinely coherent kind. We underpay most forms of writing. Even code has no inherent value (see open source) outside of the associated integration and expertise.
Between diffusion models and LLMs, tech has inflated a trillion-dollar bubble around automating activities that have immense social and cultural capital (writing, art, photography) but little actual capital. It’s harmful to education, culture, and society, with minimal overall benefit.

More and more, generative models are looking like productivity tobacco: promoted by biased research, addictive, and harmful, and the little benefit they have (nicotine is a somewhat effective ADHD drug, for example) cannot outweigh the fact that they’re hurting us all, directly and indirectly.

This shit is already turning out to be one of the most harmful tech innovations of the 21st century. It needs to be regulated at least as heavily as tobacco, if not banned outright from most economic spheres.

@baldur "productivity tobacco" oh I am so stealing this term.

@rysiek @baldur

I have also seen it described as the new asbestos, for the damage it does, and because we'll be digging it out of everything for many years.

@davidtheeviloverlord

I wish it were as easy to "dig out" as asbestos. It's more like digital microplastics, and it's already in our brains.
