My take:

The AI scam is a bubble. It's going to burst, and 99% of the "value" in it will evaporate. (There will be some utility in the 1%.)

There will then be demand for skilled human employees … who will be few, because unused skills atrophy and there's a crimp in the training pipeline.

Wages for those who can do stuff will spiral.

But there'll be a net productivity decline and a recession.

So: the long-term legacy of the AI bubble will be stagflation.
https://toot.cafe/@baldur/114443358373790490

Baldur Bjarnason (@baldur@toot.cafe)

“The AI jobs crisis is here, now - by Brian Merchant” https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now

> The unemployment rate for recent college graduates is unusually high—and historically high in relation to the general unemployment rate

"AI" is killing entry-level jobs, which means that a few years down the line companies won't have senior labour to hire. This also shows that talk about “AI literacy” and “AI skills” is a joke. You’re not gonna need any skills if employers aren’t employing in the first place.

Also, if (as Satya Nadella claims in public) 30% of Microsoft's software is LLM-generated, then we can expect the next couple of generations of Windows and Microsoft Office to be unbelievably bad. Not just enshittified for advertising and profit: but full of really idiotic security holes and bugs inserted by LLMs that were trained on their own toxic efflux by "developers" too de-skilled to understand what they were doing.

@cstross
It's either a lie or MAYBE they're including autocompleted IDE stuff, where it can figure out that a half-written word matches the pattern of the last three lines and you press tab to finish.

Anything other than that last example and I don't know how you could ever safely use MS tech again.

When I have time my gaming rig (and my kids) will be going penguin-style.

@psaldorn I'm staying on Apple *for now* (but bought a burner PC notebook last year to re-learn my atrophied Linux skills for when Tim Cook retires: exit strategy, probably Framework)

@cstross @psaldorn ... and people (read: work) still think that moving to Windows 11 will be a good idea.

I've got a 12-year old laptop and an even older desktop, both of which M$ declares too old to upgrade automatically. Current plan is to keep the desktop on 10 as long as I can and sacrifice the laptop (which I only use for contact while on holiday) to the Linux gods.

Only thing keeping me on 10 in the first place is that Access is still the only easily usable database for a non-techie...

@Daveosaurus @cstross @psaldorn Don't worry. Linux will be a learning curve, absolutely, but it will even install and run MS Office if you so desire.

Use a Wine manager like Bottles or PlayOnLinux to install Office in its own 'prefix'. Use search or an AI like Perplexity to solve issues if there are some and learn about your system at the same time. Plenty of info online but it can be tricky to find exactly the right recipe for your case.

Linux Mint is a solid bet if you come from Windows 10.

@psaldorn

It's probably worse than that: much worse.

My employer asked me to try GitHub Copilot, which, of course, is a Microsoft product. Copilot suggests not just the ends of words but whole statements and, in many cases, stanzas of up to a dozen lines.

Pasting that much code into a program is always wrong. Even if the code does the right thing (which never happens), the programmer shouldn't accept that degree of duplication. Duplication¹ should always be factored out; otherwise, the program becomes more expensive to maintain and is more likely to incur bugs under maintenance. (Even if you've updated nine copies of a piece of code and fixed the same bug nine times, you never know if there's a tenth copy that you've missed.)
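The duplication argument can be sketched with a toy example (entirely hypothetical code, not from the thread — the function names are made up for illustration):

```python
# Hypothetical sketch: the same bounds check pasted into two places
# means any bug in it must be found and fixed twice.
def schedule(day):
    if not 0 <= day <= 6:
        raise ValueError(f"day out of range: {day}")
    return f"scheduled on day {day}"

def remind(day):
    if not 0 <= day <= 6:
        raise ValueError(f"day out of range: {day}")
    return f"reminder set for day {day}"

# Factored out: one copy of the check, one place to fix it.
def validate_day(day):
    if not 0 <= day <= 6:
        raise ValueError(f"day out of range: {day}")
    return day

def schedule_v2(day):
    return f"scheduled on day {validate_day(day)}"

def remind_v2(day):
    return f"reminder set for day {validate_day(day)}"
```

A Copilot-style tool happily produces the top version; a maintainer wants the bottom one.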

So Copilot and similar products are a recipe for bugs now and buggier, unmaintainable code in the future.

I eventually advised my employer that they shouldn't roll it out and that, if they did, I wouldn't use it. So far, they've not mentioned it again.

¹ See what I did there?


@cstross This is one of the reasons that I, as a DevOps person, refuse to use “AI”. I want to keep my own skills sharp for the coming time when I can market them.
@cstross that would explain Microsoft Teams.
@NotTheLBCGuy @cstross @VulpineAmethyst I'd say it explains the latest version of windows 11. _Nothing_ explains Teams... ;)
@chloeraccoon @NotTheLBCGuy @cstross @VulpineAmethyst Designed for internal use, rolled out as a product, but the "for internal use" assumptions weren't revised?

@cstross

I think 30% of the code I wrote in the last year was LLM-generated, because 75% of the code I write is unit tests for the other 25%, and if there's one thing LLMs are really good at, it's repeating the setup for a test case after you've written one by hand.

@gbargoud @cstross you probably want to invest in factorizing your setup if it needs so much repeating that an LLM can copy paste it for you

@jhelou @cstross

Clarity and easily verifiable correctness are more important in unit tests than elegance or conciseness.

This means a lot of things that can be and are handled with clever tricks in production code end up repeated in the tests.

@gbargoud @cstross You’re missing a significant opportunity to simplify and generalize the test setup when you do this.

@donaldball @gbargoud @cstross

Indeed. Does your test framework not support inheritance, or a single setUp/tearDown method pair for a suite of related tests?

If so, why isn't the LLM doing that instead?

If not:
(1) your framework is bad and
(2) let's do a cost/benefit analysis of an LLM over ctrl-c/ctrl-v

@trochee @donaldball @gbargoud @cstross When you find out that the failing test in Test::Thing::DoThing is called “GivenNullFactory_WhenCallingDoThing_ThenNullFactoryError”

And contains the code:

Thing thing;
shared_ptr<IFactory> nullFactory(nullptr);
ExpectError(thing.DoThing(nullFactory), NullFactoryError);

the relief can be overwhelming. "Oh joy, I don't have to learn the test framework the last developer on this project used in order to fix the problem that has just shown up."

@gbargoud
Why are you repeating the setup code for each test case? 🤦‍♂️
@cstross

@markotway @cstross

Pick one or more of:

* The tests are for the different possible error conditions when setting up the library, so they need to be different.
* The setup is closely tied to the result of the test, so keeping them close to each other and repeating a lot of it makes each individual test more readable.
* The setup is different for each test case for other reasons, in a predictable way that an LLM or copy-paste and a complex regex can handle.
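The first case — many near-identical error-condition tests — can also be written table-driven, which is the trade-off being debated here (a hypothetical sketch; `connect` and its rules are invented for illustration):

```python
# Hypothetical sketch: when the setup varies per case in a predictable way,
# a table of (input, expected error) pairs can replace copy-pasted tests.
# The cost is that each case is now less self-contained to read.
def connect(timeout):
    if timeout is None:
        raise ValueError("timeout must be set")
    if timeout <= 0:
        raise ValueError("timeout must be positive")
    return f"connected (timeout={timeout})"

ERROR_CASES = [
    (None, ValueError),
    (0, ValueError),
    (-5, ValueError),
]

def run_error_cases():
    for arg, expected in ERROR_CASES:
        try:
            connect(arg)
        except expected:
            continue
        raise AssertionError(f"{arg!r} did not raise {expected.__name__}")
```

Whether the table or the repetition is more readable is exactly the judgment call the thread is arguing about.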

@cstross there is so much transpiling happening these days that, if you count it, you can say "software-generated" code and use it as a marketing blurb for your LLMs, whose use may be harder to quantify internally than estimating how much code came out of pipelines with some code generation going on
@cstross Unless it is mostly unit tests.
@whvholst @cstross which would be even more dangerous lol

@cstross I like that we're attending to higher level schemes around the writing of code.

The purpose of most software seems a bit dull anyway, perhaps sludgeware is able to evolve faster?

No no no no no , obviously we're supposed to discover (start believing in) some kind of structure, an atom, something that opens a new level of understanding, expression, and how to avoid errors.

Or so I thought, until my project to build stretched on for a little too long... Come back, mysterious sprawling code project... It was to facilitate various vibes, delivering email to states of mind, essentially.

When you're building a platform for creation, examples abound of struggles that could be slightly adjusted.

@cstross yes, it is true, but I think that is also a clear statement ONLY for the hype-cult investor vibe-coding-kids crowd. everyone who knows a bit about that stuff knows that it is trash PR talk.

the genuine scary thing I see here as well: Microsoft gives a **** about how they are perceived by "the serious crowd". what does that mean.

@cstross The phrase Nadella used was that 30% of their code was "written by software". He has a product to sell, so I figure that he would have said "written by Copilot" (or "written by AI") *if* 30% of their code was LLM-generated.

"Written by software" can cover everything from old-fashioned autocomplete, snippet engines and the XML-to-boilerplate generators so popular in the C# world, through bespoke code generation pipelines, to LLM-based approaches. 30% doesn't seem like an unreasonable figure at all then. I haven't got an exact figure, but I've read that a majority of the 6-million-line AMD GPU drivers in the Linux kernel is deterministically generated from a spec.

(In the industry I work in, bespoke deterministic code generators are pretty common. Pretty much all the really brainless code is *already* machine-generated and has been for years, just not by an "AI".)
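A deterministic generator of the kind described can be sketched in a few lines (a toy illustration, not AMD's or anyone's actual pipeline; the register spec and `peek` helper are invented):

```python
# Hypothetical sketch of spec-driven code generation: a machine-readable
# register spec is turned into boilerplate accessor code deterministically.
# Same spec in, same code out — no LLM involved.
REGISTER_SPEC = [
    {"name": "CTRL",   "offset": 0x00},
    {"name": "STATUS", "offset": 0x04},
    {"name": "DATA",   "offset": 0x08},
]

def generate_accessors(spec):
    lines = []
    for reg in spec:
        lines.append(f"def read_{reg['name'].lower()}(base):")
        lines.append(f"    return peek(base + {reg['offset']:#04x})")
        lines.append("")
    return "\n".join(lines)
```

Counting output like this as "written by software" makes a 30% figure easy to reach without any LLM in the loop.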

@cstross

I think all you describe is already the case.

About "AI is a scam": the corresponding book by the knowledgeable @emilymbender is coming out in 2 weeks' time. I already preordered it.

https://www.waterstones.com/book/the-ai-con/emily-m-bender/alex-hanna/9781847928610

@cstross I too find the claims about LLMs doing original work non-credible. However, I used to use "lint", so having a program do something non-original, like critiquing my work, is familiar.

In a September 2024 ACM article, Advait Sarkar also noticed how bad LLMs are at anything creative, and instead suggested we use them for things they're good at: predicting what humans would say, especially if they were asked what a critic would say.

He wrote “let’s transform our robot secretaries into Socratic gadflies”, and proposed an “AI Provocateur”, to look at articles like his and tell him all the things wrong with it.

The whole article is at
https://cacm.acm.org/opinion/ai-should-challenge-not-obey/ and my experience with it is at https://leaflessca.wordpress.com/2024/11/01/and-on-the-third-hand/

TL;DR?
They're better at critiquing English-language articles than they are code.

AI Should Challenge, Not Obey (Communications of the ACM)
@cstross looking at their CVE history I feel like that's been the case for the past 20 years already, with absolutely no improvement whatsoever. Wonder if it's actually possible to get any worse. 😳

@cstross

The last couple of major releases of Illustrator have had a *lot* more weird little things breaking than usual, and I really suspect this is because 2024's focus on text-to-vector slop tools also included a lot of encouragement to just throw interns with a lot of ChatGPT tokens at this immense 30-year-old codebase and fire the few people who have any idea of how it actually works.

@cstross It would explain why “New Outlook” is such utter dogshit compared to “Classic Outlook”
@cstross I haven't heard of any game companies dumb enough to do that. But there are already so many layoffs in the industry they think they can be selective and not pay for things like visas. :(

@cstross It's like somebody asked them "What's your plan to make Windows even worse?" and this was the answer.

Thank Ilúvatar I've been using Linux for a very long time already. I never wanted to go back.

These public declarations will also become very fun if the courts rule that copyright propagates from training sets to models to outputs (I suppose we are just a cartoon mouse away from that). With this, the amount of GPL code put into Windows is not trivial...

@cstross @gleick I would guess most code written at MS these days is not for Windows.
@cstross Is it not already? Almost every day someone at work has some stupid bug in outlook.
@cstross That would explain Teams
@cstross he said "programmatically created" or something along those lines. Very vague and might include standard boring old templating stuff.
@cstross
So just like the last few generations of Office and Windows?

@cstross I'm sorry, but when were any of Microsoft's offerings not unbelievably bad?

Been hating MS since DOS 3.1 💾

@sleepytako Microsoft Word 5.1a for MacOS in 1992 pretty much nailed the target ("WYSIWYG Word Processor") to absolute perfection. Then they screwed the pooch with Word 6. (The original team lead left, then politics ensued and they ported the Windows version to MacOS—it was bloated, bug-ridden rubbish that trampled on UI guidelines and was barely usable. The planned Word 6 for Mac was going to be like Word 5.1 only with Word BASIC macros.)

@cstross his statement was full of weasels. 30% generated by "software" on "some projects".

I wonder if someone told him they were using code generators and he assumed that must be an LLM.

@davidgerard @cstross

Everything I've heard from people inside Microsoft paints it as a workplace where information is carefully prevented from reaching senior management. You don't tell your boss what's actually going on (that's not "agile" enough); you tell them what they want to hear; then they tell their boss what they want to hear, and so on.

Honestly, if Nadella even wanted to make a specific and accurate statement without weasel words, I'm not sure he would have the information about his own company to be able to.

(I am not a Microsoft insider, this is secondhand, I am happy to be contradicted.)

@davidgerard He's a CEO and this was a public statement. I'm pretty sure he can't even fart in public without the lawyers pre-clearing it—if he mis-speaks anything he says can end up as a lawsuit that at worst costs the company billions.

@cstross @davidgerard Granted, but there’s a lot of leeway between what the CEO understands and what programmers understand. It could be “generated and then fixed up”.

From my limited experience (with PySide6 output from an LLM), if you don't fix it up then it's not even going to compile.

@arafel @cstross it could be *anything at all*, is the point.
@cstross I want to know more about this fart pre-clearance from lawyers. Is there a Bristol Fart Chart?
@jwz @cstross some CEOs are extremely adept at threading the needle on the spot in real time. You can say any old shit as a forward-looking statement, for example.
@davidgerard @jwz Which is itself a sociopath trait: knowing exactly what the mark wants to hear and force-feeding it to them without remorse.
@cstross @jwz see how good Elon Musk is at this, for example. I first noticed his skill at this at the WeRobot event last year, where he came out with the most fabulous future plans and flawlessly prefixed every one with forward-looking-statement words. He may publicly disrespect the SEC, but he knows how to speak public CEO.
@cstross
If you cannot copyright the output of an LLM, then that means 30% of Microsoft software cannot be said to belong to them
Windows 11 users reportedly losing data due to Microsoft's forced BitLocker encryption (Neowin)

Microsoft made a big change regarding BitLocker encryption on Windows 11 24H2 and apparently it is leading to users losing data.

@cstross E.g. "same old, same old".

Meaning: anyone who has had serious collisions with the quality and security of MS code over the past, say, 30 years will find this "sad but usual".

(MS cares about good code only when "conquering new markets". Once dominant, rent extraction is the only priority and "good code" goes against that priority.)

@cstross One of the ways to understand the post-1970 economic shift is as a generational change from owners who viewed skilled employees as a capital asset, hard to acquire and difficult to replace, to owners who view labour as a cost and bitterly resent that they have to pay anything for labour. They tend to view labour as a conspiracy to extort money rather than as a transfer of value, and doubt that any value exists to be transferred.

AI is perfectly crafted to target such people.

@graydon @cstross It’s wild that anyone could look at what Jack Welch did to GE and see it as anything other than a cautionary tale, an obvious symptom of a failing system.

@donaldball @cstross To quote Wikipedia,

> When Welch retired from GE, he received a severance payment of $417 million

"Did I get money?" is the only success criterion a mammonite will use, and mammonism is plausibly the majority religion of these times.

@cstross I use NotebookLM, for example; it has real value, in my view at least. I don't get the polarization where people try to claim that current AI is totally useless, a scam, etcetera. That is simply not true and doesn't contribute to a sensible and responsible discussion about the use of this technology. At the same time I do not want to belittle or ignore the serious issues and problems with AI. They are certainly there.
@cstross I call it an "ops crisis". Cloud and AI will make it really bad, but you could see it coming even before, as education never really taught infrastructure.

@cstross
Isn't it pretty much the same with every tech bubble? Someone finds a real use for the tech in a tiny number of situations, but the overwhelming majority of the claims turn out to be empty hype.

A bunch of tech people get laid off, tech executives make even more money, wash, rinse, repeat.

The AI tech bubble is just bigger than the other tech bubbles, so the damage will be greater when it bursts.

@tofugolem It … depends? One of the big tech bubbles in the UK, circa 1826-1850, was the railway build-out: a lot of investors lost their shirts BUT WE GOT A RAILWAY NETWORK OUT OF IT. Same with dot-com 1.0: stuff like pets.com failed spectacularly, but we also got large-scale public broadband as a largely-invisible side-effect.

@cstross
Fair enough. I don't think this will be like that. As the article points out, this economic damage is on top of the bullshit coming from the Trump administration, and it's starting to get ugly.

I cannot help but think that not so long ago, those right-leaning Dems like Harris were singing the virtues of AI knowing full well what it would do to white collar jobs.

@tofugolem I think most politicians are so totally tech-illiterate that they're clueless about how the AI hype cycle—and silicon valley hype cycles in general—work. In other words, perfect suckers. (Harris is obviously a competent politician and prosecuting attorney—but a CS grad? I think not. And so on, for the rest of them.)