LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. And if that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse—probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which, as far as I can see, are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

@cstross 'answer-shaped objects' is going in my permanent anti-LLM toolkit!
@cstross I have yet to see a use case for LLMs that isn't already handled at least as well or better by other things that are less power-hungry and unethical.
@StarkRG @cstross
Every time I encounter artificial intelligence it displays an awful lot that is artificial but very little that is intelligent.
@Spoon
There's a reason I refuse to use the term "AI" and only refer to LLMs when discussing this nonsense.
@StarkRG @cstross

@markotway @Spoon @cstross The term LLM describes only one specific type of model, large language models (chatbots), and doesn't include the generative "AI" things that produce images or upscale videos, for example. I think the general term that encompasses everything marketed as "AI" is Transformer (or, more precisely, the transformer deep learning architecture). All generative AI is bad, but LLMs are pretty much the worst because they don't have any real purpose.

https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)


@StarkRG
Yeah. I know the differences. I tend to refer to generative AI as "a load of old bollocks". 😁
@Spoon @cstross
@markotway @Spoon @cstross
They are definitely up there with tetraethyllead and chlorofluorocarbons in the pantheon of worst inventions ever. Though, unlike the other two, I don't think Thomas Midgley Jr. has played any part in the development or popularization of deep learning transformers.

@StarkRG @cstross
Also, and very comfortingly, no one has yet made any money out of this.

The music has been playing, sometimes clamorous and at times quite softly.

The Tech Bros have all joined in with fervour but it's pretty clear that when the music stops there will be very, very few chairs left to sit upon.

AI is a solution looking for a problem to solve.

@Spoon @StarkRG Disagree, conditionally: generative AI is almost 100% hot steaming garbage, but analytical AI (aka "big data") is quietly revolutionizing some fields, e.g. face recognition via networked public CCTV cameras. And we didn't ask for that!

@cstross @StarkRG

I have demonstrated my ignorance but I still maintain that generative AI is a solution looking for a problem to misunderstand.

We should have cancelled the lot when HAL refused to open the hatch.

@Spoon @cstross They're good at finding possible patterns/matches in large datasets (possible, not definite; you've still got to actually check to make sure), but it definitely isn't something that the vast majority of consumers would ever need. It's like those enormous dump trucks mines use to move raw rubble before it gets turned into something useful: absolutely perfect for a mine, but wasteful and an ill fit for picking up drive-thru.

@StarkRG @cstross

What a good post.

You have cleared this up very well for me, and I think I understand why the day-to-day manifestations are so irritating.

I taught on many levels for more than 40 years and plagiarism leaps off the page at me and slaps me across the face.

Any teacher who cannot see AI in student work needs to take a good look at their own knowledge base.

@cstross @Spoon @StarkRG and indeed you have personally written entire novels involving it going horribly wrong (poor, poor Headingley)

@cstross Bad enough that they hoover an entire site, but what I find incomprehensible is the way that they go back and hoover it again, and again, and again....

Are they expecting the entire site content to change in the intervening few minutes? Or maybe they're just treating it as if it were local storage (and leaving you with the bandwidth bill).
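(Editorial aside.) A polite crawler would avoid exactly this by using HTTP conditional requests: store the page's `ETag`, send it back as `If-None-Match`, and skip the download when the server answers `304 Not Modified`. Here's a toy sketch of that logic; the `toy_server` stand-in and all names are mine, not anything from the thread:

```python
# Sketch of conditional-GET caching: re-fetch only when content changed.
# No real network here; `server` is a callable standing in for an HTTP server.

def fetch(url, cache, server):
    """Fetch url via server(url, etag) -> (status, etag, body).

    Sends the cached ETag; on 304 the cached body is reused, so the
    repeat visit costs the origin almost no bandwidth.
    """
    etag = cache.get(url, (None, None))[0]
    status, new_etag, body = server(url, etag)
    if status == 304:                 # unchanged: reuse cached copy
        return cache[url][1]
    cache[url] = (new_etag, body)     # changed (or first visit): store it
    return body

def toy_server(url, etag):
    """Stand-in server whose content never changes (ETag stays 'v1')."""
    content, current_etag = "hello world", "v1"
    if etag == current_etag:
        return 304, current_etag, None
    return 200, current_etag, content

cache = {}
first = fetch("https://example.org/post", cache, toy_server)   # full download
second = fetch("https://example.org/post", cache, toy_server)  # 304, no body sent
```

The AI scrapers being complained about above evidently skip this step and re-download everything every pass.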

@cstross If it's no bother, I'm going to steal "answer-shaped objects" ... really nice turn of phrase.
@cstross This was already true of the earlier AI thingies using Bayesian inference for 'smart policing' trained on the practices of existing racialized policing. Guess how that went.
@cstross Hoovered off the internet, yes, but that includes pirate book sites. This was explicitly admitted in the case of Meta and there were several of my books in the mix. I expect a number of yours were too.
Might improve the quality of the training data, but it's theft.

@cstross @delawen The devil is in the details, especially as progress is incremental. Tons of data is being poured into them, even from all kinds of private sources. And reinforcement learning was how ChatGPT avoided being just another edgy hate-speech bot that nobody wants to use.

There's so much money in this space right now that, for every criticism raised, there's work already being done to overcome it.

@t_var_s @cstross There are things you can't fix, like hallucinations. That's a feature of the technology, not a bug.
@delawen @cstross True, it's how it all works. One big lucid dream attempt.

@cstross

LLMs are the shitbrained version of the ML models that had been accelerating (and transforming entire research areas and industries for the better) until LLMs took up all the oxygen.

To your point, those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous.

Only this era's humanity would throw away actually revolutionary technology to invest in the image of revolutionary technology instead.

But it's not a pipe, never will be

@johnzajac @cstross

Eventually they'll realise that ceci n'est pas un profit.

@johnzajac @cstross

"... those ML models were built slowly and deliberately using vetted information, which is why they're truly miraculous."

There were many wrong turns along the way. A late colleague once gave me a spreadsheet of ML failures. Unfortunately I don't have it any longer, but two failures stuck in my mind ...
1/3

@johnzajac @cstross

2/3
The ML that was shown photographs of mushrooms and told which were poisonous. Unfortunately the data mostly alternated between poisonous and non-poisonous, so the machine learned that the odd-numbered mushrooms were poisonous.

@johnzajac @cstross

3/3 The ML was shown photographs of skin lesions and told which of them were cancerous. The machine learned that having a ruler in the photograph indicated a cancerous lesion.

@TheLancashireman @johnzajac I've heard similar anecdotes. NATO tried to train an image recognizer to distinguish NATO from Warsaw Pact tanks. But Soviet tanks were often photographed against pine forests. So in the end all they had was a tree recognizer.
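(Editorial aside.) The common failure in all three anecdotes (odd-numbered mushrooms, rulers next to lesions, pine forests behind tanks) is what ML researchers call shortcut learning: the model latches onto whatever artifact correlates with the labels in the training set, then collapses when that artifact disappears. A toy sketch of the effect, entirely illustrative and not from the thread: a one-feature "stump" learner picks the spurious marker over the real signal, and its accuracy falls to chance at deployment.

```python
# Toy demonstration of shortcut learning with a one-feature "stump" learner.
import random

random.seed(0)

def make_data(n, marker_leak):
    """Examples are ((signal, marker), label).

    signal is genuinely but weakly predictive (70% agreement with label);
    marker is a training artifact (the "ruler") that agrees with the
    label with probability marker_leak.
    """
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label if random.random() < 0.7 else 1 - label
        marker = label if random.random() < marker_leak else 1 - label
        data.append(((signal, marker), label))
    return data

def best_stump(data):
    """Pick the single feature that best matches the labels on this data."""
    accs = [sum(x[i] == y for x, y in data) / len(data) for i in range(2)]
    return max(range(2), key=lambda i: accs[i]), accs

# Training set: the marker leaks the label almost perfectly, so the
# learner prefers it (feature 1) over the real signal (feature 0).
train = make_data(5000, marker_leak=0.99)
feature, train_accs = best_stump(train)

# Deployment: no ruler in the clinic, so the marker is pure noise
# and the learned shortcut collapses to coin-flip accuracy.
deploy = make_data(5000, marker_leak=0.5)
shortcut_acc = sum(x[feature] == y for x, y in deploy) / len(deploy)
```

The real signal was there all along; the learner just never needed it during training.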
@johnzajac @cstross I just finished reading Lem's Futurological Congress. And the plot of a completely destroyed and ruined world where humans feel as if in a utopia of abundance, thanks to an illusion induced by psychoactive stimulants, had a je-ne-sais-quoi of LLM.
@nicopap or The Matrix

@AnnieG There is a scene in the book where the main character is given the choice between taking the white pill or the black pill.

I went bonkers. But it was a different context. It was the main character's girlfriend letting him choose between a marriage pill and a separation pill.

There's a different scene like the red/blue pill scene, but no pills are involved.

Mark Osborne's MORE


@oblomov ah! I had to watch it twice, nice video. I get the link.

MORE touches a different vibe, like, it's "happiness that can be sold to you" and "society made so that there is nothing to do but put happiness in the box". In Futurological Congress, it's more "why organize society around production, when you can organize it around illusion".

But there is a lot in common.

@johnzajac @cstross one thing I disagree with - humans have always been this dumb. We're just the first group who've had the opportunity to make this mistake so of course we're making it.
@cstross American technocracy automated shortsighted blowhardism? Conway’s law for cognitive biases?

@cstross

"Answer shaped objects" is the perfect description. Filing away for the next time my boss outputs directives based on "AI" search results....

@cstross

The point about training data is excellent.

I'm waiting for someone to "do it right" - train on a carefully curated selection of vetted public domain or legally acquired texts. But - that's hard to do, so probably a pipe dream.

@tbortels @cstross

Honestly I thought that was the point of Quora. Instead they started monetizing people asking questions, which became bots asking questions, and the whole place went downhill quickly.

@tbortels A sane business model for an AI startup would be to collect and curate (and legally license) *GOOD* training data corpuses. Alas, Gresham's Law applies in the AI sector right now.

@cstross

1/2

The point is that LLMs do not think; they have no mind or logic of their own. They just reflect and digest the information they are provided with.

Nor do they have an ethical framework or a conscience, and that is what's missing here. With humans, we call that an education, something that takes a considerable amount of your youth and early adulthood.

#AI #psychosis #schizophrenia

@cstross

2/2

For an LLM I would expect an operating model with accumulated PhD-level grounding in psychology, sociology, economics and the exact sciences that would arbitrate the bullshit it digests before dumping it into the real world.

We do not need psychotic AI nonsense...
And there is a lot of weird, wrong, and sick material out there that is simply dumped on the internet.

#AI #madness