LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. And if that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse—probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

@cstross

1/2

The point is that LLMs do not think; they have no mind or logic of their own. They just reflect and digest the information they are provided with.

Nor do they have an ethical framework or a conscience, and that is what's missing here. With humans, we call that an education, which takes up a considerable amount of your youth and early adulthood.

#AI #psychosis #schizophrenia

@cstross

2/2

For an LLM I would expect an operating model with accumulated PhD-level knowledge of psychology, sociology, economics, and the exact sciences that would arbitrate the bullshit it digests before dumping it into the real world.

We do not need psychotic AI nonsense...
And there is a lot of weird, wrong, and sick material out there that is simply dumped on the internet.

#AI #madness