I just recently realized that what I truly hate about LLMs is that they devalue language. I love language, I love using it very intentionally, I love how different people wield and work language differently. A well-forged phrase can cut right to the soul. Language is literally magic. It can do things where man and machine all fail.

But now with the press of a button you can get sugary pink language goo in any shape you like. And this is sold as an equal replacement to real human language. The insult! The depravity!

I think it might say something about how far language has already been devalued. We live in a morass of content marketing and business process documentation and terms and conditions and propaganda and spam. All soulless language that nobody asks for but that people are compelled to create. We can't imagine not creating such language goo. And so we're grateful for the pink goo machine.

You know those stories about how there was once magic in the world but it was lost? This is it. This is how it happens.

@plexus I think I have an issue--unless someone can convince me otherwise--that these "Large Language Models" aren't really "models" at all. "Model" implies some kind of simplified facsimile, a stripped down or scaled version of something. But the machine learning used to create them basically guarantees that their inner workings can never be interrogated, much less used to understand anything about Language itself. They are just more advanced chat bots.
@potpie @plexus
If you have a sentence, an LLM can assign a "probability of coming next" to each word in its vocabulary. For someone interested in the statistics of language, that is a *kind* of model...
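To make that "kind of model" concrete, here's a toy sketch (mine, not anything from the thread): a bigram counter over a made-up corpus. A real LLM conditions on the whole preceding text with a neural network, but its output has the same shape as this — a probability for every word in the vocabulary given what came before.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus (an assumption for the example, not real data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    """Return P(next word | previous word) estimated from bigram counts."""
    c = counts[prev]
    total = sum(c.values())
    return {word: n / total for word, n in c.items()}

print(next_word_probs("the"))
# → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The point is just that "assign a probability of coming next to each word" is a well-defined statistical object, whether the estimator is a bigram table or a trillion-parameter network.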