Considering that LLMs started off devolving within hours into Nazi rhetoric so vile they had to be taken offline, I can't say that I see this sort of overcorrection as the most horrible thing that could have happened. Also, a reminder that LLMs aren't in fact intelligent, artificially or otherwise. They output what they're programmed to.

https://www.pcmag.com/news/google-explains-what-went-wrong-with-geminis-image-generation

@scalzi The biggest - and I do mean biggest - issue with LLMs is that they lack the *soul* of a human being, with all of the good - and bad - things that come with it.

That, and a complete lack of contextual understanding - something we meat sacks are, for whatever reason, often pretty good at - means that LLMs can identify words, phrases, and images with astounding accuracy, but that's it.

They can't go beyond that without, I'd argue, independent thought and consciousness (which humans often use as a mental backstop for dumb shit).

@Aminorjourney @scalzi It's factually not true!