The only two viewpoints on generative AI that get any play among tech punditry are:

1. AI is a lever that helps people do better
2. AI is effective automation that will replace people, or be a threat to them.

The third viewpoint, that AI tools are kind of shit and, if used in their current form at scale by corporations and governments, will “enshittify” large portions of our society, doesn’t seem to register with them at all.

@baldur I think that's because we are at the beginning of a new hype cycle. And compared to the previous one (crypto currencies and "Web3") I can see some lasting and useful applications in this one.

That said, the future is STARTING now; we definitely are not there yet. :-)

@martinc The problem is that if AI vendors don't see the flaws in existing systems, they aren't likely to fix them. Instead they'll just focus on making them cheaper, faster, and bigger.
@baldur agreed, but isn't that part of every hype cycle? And if just making them bigger doesn't really improve things, aren't market forces going to "correct" this, since training the large models is not exactly cheap?

@martinc Microsoft and Google together basically control over 99% of both the office productivity and search markets.

Since they're both all in on generative AI and have effectively the same strategy, there is next to nothing in the market that can shift them either way.

@baldur @martinc The biggest problem is that the people making those calls don't understand the underlying technology. It's a symptom of the true root cause -- the MBAification of everything. They focus solely on "investor value" with short-term gains and have absolutely zero interest in any long-term strategic plan other than platitudes. What will push neural networks, etc. to being better will be industries that use the technology as a tool and not an end in itself, like biotech, drug discovery, material science, engineering design. Things that can leverage the technology not just for hype/bullshit generation, but for actual physical products that have to actually work.
@GradientU0 @baldur @martinc You’re so right about this it hurt to read. It’s something that is so true of where I work and of tech as a whole…but then, it’s like Hollywood. It’s the money men who run the show, not the talent. And we don’t even have unions.

@GradientU0 @baldur @martinc oh no, they know the dangers... They literally published research papers on them.

They just don't care.

They won't be shut down when people die.
No one will go to jail when people die.
No investor will pull their money when people die. Not that they need investors anyway; they both print money.
They won't have to pay the people whose data they use.

And that's the real problem. AI is consequence-free for them.

@baldur @martinc Microsoft and *especially* Google are known for dropping something like a hot potato if it doesn’t take. So I don’t worry too much about them being quick to jump on the bandwagon. The question is whether people are still interested after the hype has died down, and I don’t see FAANG having much control over that.

@martinc @baldur My guess is no. One example:
LLMs will be used to, among other things, fill the www with “SEO” crap, which I predict will render the net largely useless. Like ordinary SEO on speed.

Quality won’t be, and has never been, a driver in that market.