The only two viewpoints on generative AI that get any play among tech punditry are:

1. AI is a lever that helps people do better.
2. AI is effective automation that will replace people, or be a threat to them.

The third viewpoint, that AI tools are kind of shit and, if used in their current form at scale by corporations and governments, will “enshittify” large portions of our society, doesn’t seem to register with them at all.

@baldur I think that's because we are at the beginning of a new hype cycle. And compared to the previous one (crypto currencies and "Web3") I can see some lasting and useful applications in this one.

That said, the future is STARTING now; we definitely are not there yet. :-)

@martinc The problem is that if AI vendors don't see the flaws in existing systems, they aren't likely to fix them. Instead they'll just focus on making them cheaper, faster, and bigger.
@baldur @martinc I honestly think it’s even worse than that: they *know* about the flaws and the very concrete harm that comes from releasing those systems in such a state. They actively choose not to care, because no authority has yet stepped in to force them into caring, too few people recognize those flaws and, more importantly, not caring makes them money — AI companies laying off their ethics teams sort of supports this, in my opinion.
@zanna_92 @baldur @martinc exactly this — they don't care about harms, only extraction, and preventing regulation is key to keeping the extraction going

@susankayequinn @zanna_92 @baldur @martinc

It might even be worse. Some people might see a benefit in enshittifying everything. Then their customers would have to pay, a lot, for clean information.

But perhaps we don't need the hypothesis of actual malice, just greed.

@rcorless @susankayequinn @zanna_92 @baldur @martinc

Those ain't mutually exclusive factors, btw.