LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. And if that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse, probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer-shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.


@cstross

LLMs are the shitbrained version of the ML models that had been accelerating (and transforming entire research areas and industries for the better) until LLMs took up all the oxygen.

To your point, those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous.

Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead.

But it's not a pipe, never will be

@johnzajac @cstross

Eventually they'll realise that ceci n'est pas un profit.

@johnzajac @cstross

"... those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous."

There were many wrong turns along the way. A late colleague once gave me a spreadsheet of ML failures. Unfortunately I don't have it any longer, but two failures stuck in my mind ...
1/3

@johnzajac @cstross

2/3
An ML model was shown photographs of mushrooms and told which were poisonous. Unfortunately the training data mostly alternated between poisonous and non-poisonous, so the machine learned that the odd-numbered mushrooms were poisonous.

@johnzajac @cstross

3/3 An ML model was shown photographs of skin lesions and told which of them were cancerous. The machine learned that having a ruler in the photograph indicated a cancerous lesion.
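Both anecdotes describe the same trap, shortcut learning: the model latches onto an accidental artifact that happens to track the label in the training set. Here's a minimal sketch of the lesion version, using made-up toy data (not the original studies) and a hypothetical "ruler present" flag:

```python
# Toy illustration of shortcut learning (synthetic data, hypothetical names).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 1000
lesion_features = rng.normal(size=(n, 8))  # stand-in for image features (pure noise here)
y = rng.integers(0, 2, size=n)             # 1 = cancerous
ruler = y.astype(float)                    # rulers appear in exactly the cancer photos

X_train = np.column_stack([lesion_features, ruler])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y)
print("train accuracy:", clf.score(X_train, y))           # ~1.0
print("ruler importance:", clf.feature_importances_[-1])  # ~1.0: the shortcut wins

# In the clinic nobody holds a ruler up to a healthy mole, so the cue vanishes:
X_clinic = np.column_stack([lesion_features, np.zeros(n)])
print("clinic accuracy:", clf.score(X_clinic, y))         # ~0.5: chance
```

The mushroom story is the same bug with dataset ordering, rather than a ruler, as the artifact.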

@TheLancashireman @cstross

Wouldn't it have been crazy if, even though they knew the mushrooms were poisonous, they just shrugged and fed everyone those mushrooms anyway, like "this machine intelligence must know better! lol lmao"?

That's what they're doing with LLMs. As I type this, a healthcare company is embedding hallucinating shit-tech in your medical records to "summarize" them for your doctor. I expect thousands of people without uteruses to be *shocked* that they're with child.

@TheLancashireman @cstross

When LLMs are 100% correct all the time, never make stuff up, are energy efficient to the point of being less wasteful than an internet search, and are trained on data obtained legally, call me.

Until then, they're immoral, unethical, and going to destroy the entire internet and then the planet.

@johnzajac @TheLancashireman You don't even have to insist on 100% correctness; just on them being incorrect less often than an equivalently-trained human. (That, right there, is a high bar they can't reach yet, if ever.)

@cstross @TheLancashireman

With what I know about the relationship between human beings and machines, the impact of mediation on credibility, and the reliance of some people on technology to do hard work (moral, ethical) that will never be suited to technology, I'd have to insist on 100% correctness.

@cstross @johnzajac @TheLancashireman no worries, we will solve that by dumbing down humans, killing education and reducing everyone to the mental capabilities of a snail on coke to make LLMs look better.

@rfc1437 @cstross @johnzajac @TheLancashireman

(squints at Project 2025, & the US Republican party generally)

You say that like it's a hypothetical....

@cavyherd @rfc1437 @cstross @TheLancashireman

Of course, the problem with the entire US fash strategy is that there *is* a real world, and it's *not* the one they're in. By exiling people who know enough about the real world to be effective, they're basically guaranteeing their eventual retirement.

Because you can fake it some of the time, but not all of the time. Eventually, reality asserts its irresistible hegemony.

@johnzajac @rfc1437 @cstross @TheLancashireman

Colbert's "reality has a well-known liberal bias," yep.

@cstross @johnzajac @TheLancashireman Ultimately, it's about tech bros being lazy and hoping to do what Google did. Google found that a meh ranking over huge amounts of data worked better than an excellent ranking over little data.

Now, that worked because all you needed was for the right link, picked out by the human, to be somewhere in the top 10.

LLMs are not like that: there's no human in the loop to do the intelligent bit that gets you there. But the bros dream on.

@TheLancashireman @johnzajac I've heard similar anecdotes. NATO tried to train an image recognizer to distinguish NATO from Warsaw Pact tanks. But Soviet tanks were often photographed against pine forests. So in the end all they had was a tree recognizer.
@johnzajac @cstross I just finished reading Lem's The Futurological Congress. And the plot, a completely destroyed and ruined world where humans feel as if they're in a utopia of abundance thanks to an illusion induced by psychoactive stimulants, had a certain je ne sais quoi of LLM about it.
@nicopap or The Matrix

@AnnieG There is a scene in the book where the main character is given the choice between taking the white pill or the black pill.

I went bonkers. But it was in a different context: it was the main character's girlfriend letting him choose between a marriage pill and a separation pill.

There's a different scene that's like the red/blue pill scene, but no pills are involved.

Mark Osborne's MORE (YouTube)

@oblomov ah! I had to watch it twice, nice video. I get the link.

MORE touches a different vibe, like, it's "happiness that can be sold to you" and "society made so that there is nothing to do but put happiness in the box". In The Futurological Congress, it's more "why organize society around production, when you can organize it around illusion".

But there is a lot in common.

@nicopap @cstross

Oh fascinating. I'll add that to my pile.

@johnzajac @cstross One thing I disagree with: humans have always been this dumb. We're just the first group who've had the opportunity to make this mistake, so of course we're making it.

@mathw @cstross

I don't know. No offense, but isn't it kind of boring to just think everyone is dumb and shitty? Personally, I know a LOT of very very smart people who have revealed themselves to be total fools over the last 5-10 years. And some that aren't the sharpest tools in the shed who have real wisdom about these issues.

I just don't think it's that simple, tbh.

@johnzajac @cstross It's not that everyone or most people are dumb or awful. Most people are just trying to get along. But I look at history and it seems clear that as a species we are collectively excellent at making bad decisions.

@mathw @cstross

I can see why one would come to that conclusion, but I have a different theory: I think we're fundamentally social creatures who are genetically predisposed to be almost comically gullible.

Most of those "bad decisions" were actually driven by leadership, not "popular assent". Also, history is always going to describe past events in the context of current mores, and we live during a time of grotesque and ahistorical individualism and total dominion by a small number of people.

@johnzajac @cstross I like that way to look at it. A bit more optimistic than I've been for the last decade or so.

@mathw @cstross

As someone who has always been *extremely* gullible, esp with people I trust, I see the signs everywhere.

We just *want* to trust people. It's why it goes so badly on a society-wide scale when the ruling class so brutally betrays us that we feel the need to rise up.

I mean, the French ruling class literally *deserved* to have their heads cut off en masse for betraying the trust of the people.

I think we love dogs so much because they have the same gullibility/trust feature

@johnzajac @mathw

I think you would both enjoy Wild Democracy by Anne Norton, about the balance between courage and fear/gullibility and their respective impacts on struggling for freedom or submitting to authority.

https://academic.oup.com/book/45531?login=false

@johnzajac @cstross @mspro

Also, they tended to be specialized.

@bifouba @cstross @mspro

Because AGI can't exist with current computing paradigms.

No matter how big the array, microprocessor-based computing simply can't match what human cells do in microtubules.

@johnzajac @bifouba @mspro Ah, a Roger Penrose believer! (I'm very doubtful about the microtubule hypothesis. However, it's clear that computational neural networks rely on a grossly over-simplified model of the real thing.)

@cstross @bifouba @mspro

Well, evidence is mounting that the quantum effects within microtubules are both real and extremely unexpected, considering how hot and wet the environment is.

If nothing else, it turns a century of human-biology orthodoxy on its head, and points us away from Western science's engineering-based idea of discrete systems being coordinated, toward the idea that there's in fact a single, completely integrated and interrelated system that is much more complex.

@cstross @bifouba @mspro

Penrose's theory of quantum consciousness is interesting, if non-falsifiable (like all theories of subjective consciousness are).

But the fact that he predicted something like quantum microtubules to great ridicule decades ago is kind of astonishing, tbh.