LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. And if that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs potentially normalize all sorts of horrors in discourse—probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year), with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

@cstross @delawen The devil is in the details, especially as progress is incremental. Tons of data are being poured into them, even from all kinds of private sources. And reinforcement learning was how ChatGPT avoided being just another edgy hate-speech bot that nobody wants to use.

There's so much money in this space right now that, for every criticism raised, there's work already being done to overcome it.

@t_var_s @cstross There are things you can't fix, like hallucinations. Those are a feature of the technology, not a bug.
@delawen @cstross True, it's how it all works. One big lucid dream attempt.