LLMs are not intelligent, but they ARE good at spouting extruded text that mimics the zeitgeist of their training data. If that training data includes the overflowing sewer that is the unfiltered internet, you're going to get gaslighting, lies, conspiracy theories, and malice. Guardrails only prevent some of it from spilling over.

Humans base their norms on their peers' opinions, so LLMs could normalize all sorts of horrors in discourse, probably including ones we haven't glimpsed yet.

To be clear, that point about LLMs is a criticism of the training inputs, which as far as I can see are promiscuously hoovered off the public internet by bots (like the ones perpetually DoS'ing the server my blog runs on this year) with zero thought given to curating the data for accuracy or appropriateness.

They generate perfect "answer shaped objects" … from a foul-minded misogynist white supremacist bigot oozing malice against everyone who's not like him.

@cstross

The point about training data is excellent.

I'm waiting for someone to "do it right": train on a carefully curated selection of vetted public domain or legally acquired texts. But that's hard to do, so it's probably a pipe dream.

@tbortels @cstross

Honestly, I thought that was the point of Quora. Instead they started monetizing people asking questions, which became bots asking questions, and the whole place went downhill quickly.