Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: Machine translations existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine translated subtitles? It's absolute garbage.
I have the impression that primarily anglophone people don't read as much translated literature, because so much good literature already exists in their language, so this issue may not be as familiar within that demographic. As someone who did not grow up anglophone, I can tell you there is a world of difference between a good and a bad translation even when done by humans. Machine translations are not even on the scale.
From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world but also take machine translations with them (which is a false dichotomy anyway), I would absolutely still press it.
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that those aren't banned everywhere: where would you rather live?) We could just not do LLMs. It's allowed.

@Gargron Would you know if you'd seen a good outcome from an LLM? Would you somehow be able to identify when the LLM got it right?

I assure you, you've experienced good LLM output and don't even know it. Because that's what good LLM output looks like: indistinguishable from human output.

Your examples are perhaps false equivalencies. Take asbestos: we didn't abolish insulation, we developed better, safer insulation. We didn't stop dyeing food, we just developed safer dyes, etc.

@Tekchip @Gargron The tiny potential for very rare good outcomes is not worth the constant poisoning of humanity's collective information corpus.

For every "good" generated content there are dozens of thousands of terrible slop that are difficult to separate from genuine useful information or material when doing research or code reviews, etc.

Not to mention that these "good" outcomes are much costlier to humanity than creating the same things by hand, with no benefit.

@Kiloku @Gargron The problem is you want to assume they are rare outcomes. I don't believe they are. Unfortunately, that's where we're at an impasse: it's literally impossible to measure the good outcomes.

I agree the environmental outcome is terrible. I don't like that part. What we can look forward to is the technology improving. General-purpose computers used to use WAY more power than they do now. The same is going to happen with LLM technology, hopefully sooner rather than later. Folks are working on it.

@Tekchip @Gargron I *know* they are rare.
@Kiloku @Gargron Please let the rest of us know how to tell when we've seen a good LLM output. Seriously: if we can all tell the good from the bad, then we can start gathering some data and have an even more rational conversation.