My take:

The AI scam is a bubble. It's going to burst, and 99% of the "value" in it will evaporate. (There will be some utility in the 1%.)

There will then be demand for skilled human employees … who will be few, because unused skills atrophy and there's a crimp in the training pipeline.

Wages for those who can do stuff will spiral.

But there'll be a net productivity decline and a recession.

So: the long-term legacy of the AI bubble will be stagflation.
https://toot.cafe/@baldur/114443358373790490

Baldur Bjarnason (@baldur@toot.cafe)

“The AI jobs crisis is here, now - by Brian Merchant” https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now

> The unemployment rate for recent college graduates is unusually high—and historically high in relation to the general unemployment rate

"AI" is killing entry-level jobs, which means that a few years down the line companies won't have senior labour to hire.

This also shows that talk about “AI literacy” and “AI skills” is a joke. You're not gonna need any skills if employers aren't employing in the first place.

Also, if (as Satya Nadella claims in public) 30% of Microsoft's software is LLM-generated, then we can expect the next couple of generations of Windows and Microsoft Office to be unbelievably bad. Not just enshittified for advertising and profit, but full of really idiotic security holes and bugs inserted by LLMs that were trained on their own toxic efflux, by "developers" too de-skilled to understand what they were doing.

@cstross

I think 30% of the code I wrote in the last year was LLM-generated, because 75% of the code I write is unit tests for the other 25%, and if there's one thing LLMs are really good at, it's repeating the setup for a test case after you wrote one by hand.

@gbargoud @cstross You’re missing a significant opportunity to simplify and generalize the test setup when you do this.

@donaldball @gbargoud @cstross

Indeed. Does your test framework not support inheritance, or a single setUp/tearDown method pair for a suite of related tests?

If so, why isn't the LLM doing that instead?

If not:
(1) your framework is bad, and
(2) let's do a cost/benefit analysis of an LLM over ctrl-c/ctrl-v.

@trochee @donaldball @gbargoud @cstross When you find out that the failing test in Test::Thing::DoThing is called “GivenNullFactory_WhenCallingDoThing_ThenNullFactoryError”

And contains the code:

Thing thing;
std::shared_ptr<IFactory> nullFactory(nullptr);
ExpectError(thing.DoThing(nullFactory), NullFactoryError);

the relief can be overwhelming. "Oh joy, I don't have to learn the test framework the last developer on this project used in order to fix the problem that has just shown up."