I’ve often made the point that generative AI is an amazing technology in much the same way asbestos is an amazing material: both have qualities that feel like genuine miracles, but at a human cost so high that broad adoption is only possible if human life is devalued beyond what has been acceptable up until now.

But much of the adoption of generative models doesn’t come from the few things they do well; it’s driven by those who do not understand the job they’re replacing.

Whether it’s copywriting, illustrating, editing, or coding, the work the model is doing simply isn’t the work the human was doing.

David Gerard links to a good example of this happening in press releases https://circumstances.run/@davidgerard/113958438745025540

The "AI" replacement simply isn't doing the job it supposedly replaces

David Gerard (@[email protected])

so i just found out why the press releases i get are such cloying dogshit and why i increasingly just block the senders in gmail https://muckrack.com/blog/2024/10/16/using-ai-pitch-generator

GSV Sleeper Service
If we’re lucky, managers and execs might realise months after the fact that they’ve sabotaged their organisations by adopting these tools, but that’s unlikely because there’s plenty of economic “cover” going on. They won’t realise the organisation is doing badly because of their mismanagement (their persistent overconfidence doesn’t help); they’ll blame tariffs or whatever.
So, I just don’t see how we’re going to avoid a wholesale deterioration of the effectiveness of organisations, institutions, and companies in Europe and the US. The few that avoid using the tools because the bubble makes the tools too risky to adopt still have to use the services and products made by the rest
@baldur I see your point, and share your concern, but I think you're overestimating the interest in these tools outside a fairly narrow scope. There just isn't an effective consumer/nonindustrial use case here. As the hype wears off that's going to become painfully clear.
@baldur I think a better comparison is voice assistants. For some people, like the visually impaired, it's an incredibly useful tool. For most, it was a novelty they forgot about, in part because the tech wasn't really there.

@Dseitz @baldur

It's a real shame that, with all the "improvements" in AI, voice assistants have gotten so much worse over the last few years.

It's as if technology, products, and user experiences actually have nothing to do with each other.

@bigvalen @Dseitz @baldur I think it really speaks to the fact that these are technologies that require constant care and thus probably aren't ideal for private enterprise.

@Dseitz @baldur @bigvalen they got switched over from more classical algorithmic operation (which one can actually debug and fix without breaking something else) to blackbox eso-fascist "AI" because "everyone does it" and it's likely cheaper (a barely viable product for much less monetary effort), instead of to actually help people.

This is everywhere. Assistive tech, search engines, websites in general, apps, … 🤮

@baldur I'd disagree. There are jobs, and press is one of them, where you rather swiftly experience the consequences of shoddy work.

@baldur I know someone who works for a team that generates potential customer-facing content. (Sorry to be coy, mildly paranoid.) Their immediate management INSISTS that everything be generated with ChatGPT first and then refined. The model is generally wrong, but this person gets grief if they take the time to rework it properly, so they're shipping mediocre content.

People trusting the snake oil over actual subject matter experts is a problem.

#GenerativeAI #RaceToTheBottom

@baldur asbestos, CFCs, lead in petrol

@joelanman @baldur I love all of these examples! They're great to throw back at the people who claim, "Well, AI is coming, you can't stop it, so you might as well get used to it."

No. We have a long and growing history of abandoning technologies that are shown to be harmful. Nothing is inevitable. We can stop AI, just like we stopped using CFCs, asbestos, and lead.

And we must.

@baldur that's also my battle, as a veteran IT person... starting with my own relatives, especially the young ones. When I started my university studies in data processing technology, back in 1986, and understood how much harm programming alone would cause to the overall employment market worldwide, it temporarily froze my spine... but I was way too young and needed to hold on to something for the future. At that time, IT was the future...
@baldur best comparison I've read

@baldur I like that, makes sense

Decades later we're still cleaning up asbestos left and right…

@stux @baldur it was already well known by the mid-20th century that the stuff was bad for you (although suppressed by those who profited from it), and there were warnings on all the products by the 1970s/80s (I remember seeing them when I went to DIY stores with my dad), but its use was already entrenched in significant market niches such as vehicle brake linings and ceiling tiles, and it took until the late 90s/00s before it was /finally/ banned (UK in 1999, about 2005 for an EU-wide ban)
@baldur 💯 right on the mark; it is mostly driven by dismissal of extreme costs (typical of the hyper-investment approach) and the utter disqualification of decision makers who don't know what they manage (or don't actually manage)
@baldur Yeah – I had a similar thought last year, which I put into a blog post called "Lead Paint Is Amazing" 😛 : https://havn.blog/2024/05/17/lead-paint-is.html
🌱 Lead Paint Is Amazing

On “Usefulness” and “Harmfulness” …

@havn The lead paint analogy is a good one. 🙂 👍
The displacement of jobs by robots and AI is not really the real problem. On the contrary, this used to be a part of utopias, not dystopias. The problem is people's refusal to adapt our society and economy to this. It cannot be that only a few people benefit from automation, otherwise the economy will collapse and violent conflicts will ensue. An unconditional basic income would be the first step to respond to this, but it is far from sufficient.
@baldur That is such a great way to explain it, and so painfully accurate. Thank you.
@baldur I listened to a Planet Money podcast which tried to argue that AI was transformational for the economy. One example was some site that would auto-generate a sports story from your kid's little-league scores. Color me unconvinced.
@baldur from my experience it does have a duality; however, it should be used as a tool if you want to continue pursuing a career where AI is now intertwined. I will say it is going to put the most pressure on the photography industry.
@baldur or the environmental impacts
@baldur much like framing _data is the new oil_, when a more apt notion is _data is the new uranium_.

@baldur your "AI is the digital infrastructure asbestos" is the AI counterpart to the bitcoin "running your car 24/7 to solve sudoku you can trade for heroin".

I'm going to use this all the time now, thank you for the analogy

@baldur As someone who is totally blind, I enjoy using artificial intelligence. The only thing it's replacing for me is a pair of eyes that doesn't work. I use Seeing AI and ARx Vision for OCR, to describe scenes, colours, etc. I also use NovelAI to write, but just for fun. I can write on my own, as I have been doing since the fifth grade; I just enjoy the collaborative effort. I also like researching things with Perplexity. Many times, it gives me accurate information. When it doesn't, or if it's important, I always double-check its results. But our conversations have taught me a lot and also made me think of other things, leading to more discoveries and research.
@baldur Don't use it in construction! Holds true in both cases.