The only two viewpoints on generative AI that get any play among tech punditry are:

1. AI is a lever that helps people do better.
2. AI is effective automation that will replace people, or be a threat to them.

The third viewpoint, that AI tools are kind of shit and, if used in their current form at scale by corporations and governments, will “enshittify” large portions of our society, doesn’t seem to register with them at all.

@baldur I think that's because we are at the beginning of a new hype cycle. And compared to the previous one (cryptocurrencies and "Web3"), I can see some lasting and useful applications in this one.

That said, the future is STARTING now; we definitely are not there yet. :-)

@martinc The problem is that if AI vendors don't see the flaws in existing systems, they aren't likely to fix them. Instead they'll just focus on making them cheaper, faster, and bigger.
@baldur agreed, but isn't that part of every hype cycle? And if making them just bigger does not really improve things, aren't the market forces going to "correct" this, since training the large models is not exactly cheap?

@martinc Microsoft and Google together basically control over 99% of both the office productivity and search markets.

Since they've both gone all in on generative AI and have effectively the same strategy, there is next to nothing in the market that can shift them either way.

@baldur @martinc The biggest problem is that the people making those calls don't understand the underlying technology. It's a symptom of the true root cause: the MBAification of everything. They focus solely on "investor value" with short-term gains and have absolutely zero interest in any long-term strategic plan other than platitudes. What will push neural networks, etc. to being better will be industries that use the technology as a tool and not an end in itself, like biotech, drug discovery, material science, engineering design. Things that can leverage the technology not just for hype/bullshit generation, but for actual physical products that either work or don't.
@GradientU0 @baldur @martinc You’re so right about this it hurt to read. It’s something that is so true of where I work and of tech as a whole…but then, it’s like Hollywood. It’s the money men who run the show, not the talent. And we don’t even have unions.

@GradientU0 @baldur @martinc oh no, they know the dangers... They literally published research papers on them.

They just don't care.

They won't be shut down when people die.
No one will go to jail when people die.
No investor will pull their money when people die. Not like they need investors anyways, they both print money.
They won't have to pay the people whose data they use.

And that's the real problem. AI is consequence-free for them.

@baldur @martinc Microsoft and *especially* Google are known for dropping something like a hot potato if it doesn’t take. So I don’t worry too much about them being quick to jump on the bandwagon. The question is whether people are still interested after the hype has died down, and I don’t see FAANG having much control over that.

@martinc @baldur My guess is no. One example:
LLMs will be used to, among other things, fill the www with “SEO” crap, which I predict will render the net largely useless. Like ordinary SEO on speed.

Quality won’t be, and has never been, a driver in that market.

@baldur @martinc I honestly think it's even worse than that: they *know* about the flaws and the very concrete harm that comes from releasing those systems in such a state. They actively choose not to care, because no authority has yet stepped in to force them to care, too few people recognize those flaws and, more importantly, not caring makes them money. AI companies laying off their ethics teams sort of supports this, in my opinion.
@zanna_92 @baldur @martinc exactly this — they don't care about harms, only extraction, and preventing regulation is key to keeping the extraction going

@susankayequinn @zanna_92 @baldur @martinc

It might even be worse. Some people might see a benefit in enshittifying everything. Then their customers would have to pay, a lot, for clean information.

But perhaps we don't need the hypothesis of actual malice, just greed.

@zanna_92 @baldur @martinc Big players with trusted brands care about these harms. They have to have hired an ethics team in the first place in order to fire it. But smaller players like Microsoft are happy to just add it to Bing and ship it, they don't have much to lose in the browser search/knowledge engine space.

There are lots of things that can be done to make generative pretrained transformers safer, though, and it would be good to build up consensus and market demand for them. Watermarks. Citing sources.
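
To make the watermarking idea concrete, here is a minimal sketch of one published style of approach: bias generation toward a pseudo-random "green list" of tokens keyed to the previous token, then test text for that bias. The vocabulary, function names, and parameters below are illustrative assumptions, not any vendor's actual scheme.

```python
import hashlib
import random

# Illustrative toy vocabulary only; a real system would use the model's
# full tokenizer vocabulary.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slowly"]

def green_list(prev_token, fraction=0.5):
    """Deterministically pick a 'green' subset of the vocabulary, seeded
    by the previous token, so generator and detector agree on it."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def green_fraction(tokens, fraction=0.5):
    """Share of tokens that fall in their predecessor's green list.
    Text from a generator that was biased toward green tokens scores
    well above `fraction`; ordinary human text scores near it."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, fraction))
    return hits / (len(tokens) - 1)
```

The detector only sees a statistical bias that the generator deliberately introduced, which is why such watermarks are fragile under paraphrasing and why, as the post says, they need consensus and market demand behind them to matter.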