More and more, generative models are looking like productivity tobacco. Promoted by biased research, they're addictive and harmful, and their small benefits (nicotine is a somewhat effective ADHD drug, for example) can't outweigh the fact that they're hurting us all, directly and indirectly.
This shit is already turning out to be one of the most harmful tech innovations of the 21st century. It needs to be regulated at least as much as tobacco, if not banned outright from most economic spheres.
I wish it were as easy to "dig out" as asbestos. It's more like digital microplastics, and already in our brains.
@baldur : I always use that analogy too.
We live in a world where it is considered normal to destroy your health and the health of the people around you while heavily polluting the environment.
And simply asking "can you please not smoke near me?" is seen as, at best, very annoying.
And that's for something whose harms have been well known for the last 50 years AND are very intuitive.
Yet smoking is still legal.
So what do we expect from such a society on matters that are less intuitive?
@ploum The part about how people get annoyed when you ask them not to smoke is such a good addition to this analogy!
People are using ChatGPT to 'prove' their points all the time on social media now, and when you tell anybody they shouldn't rely on LLMs, they behave as if you're the one who's being unreasonable...
@noboilerplate @baldur I don't think there's a lot of subtlety in the harm LLMs do. The way they lie is obvious, several people have already been driven to suicide by them, and all the infrastructure they need is extremely obviously harmful.
At least asbestos was good at what it did (preventing fires), better than any method we had before. There are plenty of better ways to get information or text than LLMs.
@noboilerplate Oh, absolutely! I used to work for an SEO/content marketing company (I got thrown out for not using LLMs enough to write marketing-related articles), and I'm still in contact with some of my former coworkers... All of their clients now want cheap, fast texts, no matter if they're absolute shit or not 😬
(The day after I was thrown out, they literally had a meeting with the content team saying 'We have gone too far in the direction of "quality"; we need to focus more on "quantity"'... I'm really glad I got thrown off that ship when I did. I even got 2 months of paid leave, because apparently I was enough of a disturbance that they'd rather pay me for 2 months of no work than let me work until the end of my contract 😂 )
The wealthy have always tried to diminish the cost of labor.
With AI, they can finally push everyone to the brink of starvation and own slaves again.
@baldur I really wish the robot people would have more success. If all that capital were dumped into automating recycling, garbage sorting, and solar panel installation, we might make some genuine progress.
The current situation does remind one of 1984, where the books were written by machines while the grunt work was still done by humans. That's the bad outcome.
@baldur "but, but, 'AI' has its use! I use it for..." Whatever!
At scale, it has shown no economic value other than speculation, as an entire industry has been demonstrating on a daily basis for years now... ¯\_(ツ)_/¯
@baldur
> generating seemingly coherent text has very little economic value
But enough about academia!