LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. If that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse, probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which, as far as I can see, are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer-shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

@cstross

LLMs are the shitbrained version of the ML models that had been accelerating (and transforming entire research areas and industries for the better) until LLMs took up all the oxygen.

To your point, those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous.

Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead.

But it's not a pipe, never will be

@johnzajac @cstross

"... those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous."

There were many wrong turns along the way. A late colleague once gave me a spreadsheet of ML failures. Unfortunately I don't have it any longer, but two failures stuck in my mind ...
1/3

@johnzajac @cstross

2/3
The ML model was shown photographs of mushrooms and told which were poisonous. Unfortunately the training data mostly alternated between poisonous and non-poisonous, so the machine learned that the odd-numbered mushrooms were poisonous.

@johnzajac @cstross

3/3 The ML model was shown photographs of skin lesions and told which of them were cancerous. The machine learned that having a ruler in the photograph indicated a cancerous lesion.
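That ruler failure is classic shortcut learning, and it's easy to reproduce in miniature. Here's a toy sketch (all features and numbers hypothetical, a bare perceptron standing in for the real model): a "ruler" feature perfectly matches the label at training time, so the model leans on it and collapses to coin-flipping once rulers stop correlating with cancer.

```python
import random

# Toy shortcut-learning demo (hypothetical setup, not the actual lesion
# classifier): each sample has a weakly predictive "real" feature and a
# spurious "ruler" feature that perfectly matches the label in training.
random.seed(0)

def make_data(n, ruler_leaks_label):
    samples = []
    for _ in range(n):
        label = random.randint(0, 1)
        real = label if random.random() < 0.7 else 1 - label  # noisy signal
        ruler = label if ruler_leaks_label else random.randint(0, 1)
        samples.append(([real, ruler], label))
    return samples

def train_perceptron(samples, epochs=20, lr=0.1):
    # Plain perceptron updates; converges because the training set is
    # linearly separable via the ruler feature alone.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    hits = sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
        for x, y in samples
    )
    return hits / len(samples)

train = make_data(1000, ruler_leaks_label=True)   # ruler present iff cancerous
test = make_data(1000, ruler_leaks_label=False)   # ruler placed at random
w, b = train_perceptron(train)
print("train accuracy:", accuracy(w, b, train))
print("test accuracy: ", accuracy(w, b, test))
```

Training accuracy comes out perfect; test accuracy drops to roughly chance, because the learned separator is effectively "predict whatever the ruler says" and ignores the weaker real signal.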

@TheLancashireman @cstross

Wouldn't it have been crazy if, even though they knew which mushrooms were poisonous, they just shrugged and fed everyone those mushrooms anyway, like "this machine intelligence must know better! lol lmao"?

That's what they're doing with LLMs. As I type this, a healthcare company is building hallucinating shit-tech into your medical records to "summarize" them for your doctor. I expect thousands of people without uteruses to be *shocked* to learn they're with child.

@TheLancashireman @cstross

When LLMs are 100% correct all the time, never make stuff up, are energy-efficient to the point of being less wasteful than an internet search, and are trained on legally obtained data, call me.

Until then, they're immoral, unethical, and on track to destroy the entire internet and then the planet.

@johnzajac @TheLancashireman You don't even have to insist on 100% correctness; just on them being incorrect less often than an equivalently-trained human. (That, right there, is a high bar they can't reach yet, if ever.)
@cstross @johnzajac @TheLancashireman no worries, we will solve that by dumbing down humans, killing education, and reducing everyone to the mental capabilities of a snail on coke, just to make LLMs look better.

@rfc1437 @cstross @johnzajac @TheLancashireman

(squints at Project 2025, & the US Republican party generally)

You say that like it's a hypothetical...

@cavyherd @rfc1437 @cstross @TheLancashireman

Of course, the problem with the entire US fash strategy is that there *is* a real world, and it's *not* the one they're in. By exiling people who know enough about the real world to be effective, they're basically guaranteeing their eventual retirement.

Because you can fake it some of the time, but not all of the time. Eventually, reality asserts its irresistible hegemony.

@johnzajac @rfc1437 @cstross @TheLancashireman

Colbert's "Reality has a well-known liberal bias," yep.