@Gargron would you know if you'd seen a good outcome from an LLM? Would you somehow be able to identify when the LLM got it right?
I assure you, you've experienced good LLM output and don't even know it. Because that's what good LLM output looks like: indistinguishable from human output.
Your examples are perhaps false equivalencies. Take asbestos. We didn't abolish insulation; we developed better, safer insulation. We didn't stop dyeing food, we just developed safer dyes, etc.
@Tekchip @Gargron the tiny potential for very rare good outcomes are not worth the constant poisoning of humanity's collective information corpus.
For every piece of "good" generated content there are tens of thousands of pieces of terrible slop that are difficult to separate from genuinely useful information or material when doing research, code reviews, etc.
Not to mention that these "good" outcomes are much costlier to humanity than creating the same work by hand, with no benefit.
@Kiloku @Gargron the problem is you want to assume good outcomes are rare. I don't believe they are. Unfortunately, that's where we're at an impasse: it's practically impossible to measure the good outcomes.
I agree the environmental impact is terrible. I don't like that part. What we can look forward to is the technology improving. General-purpose computers used to use WAY more power than they do now. The same is going to happen with LLM technology, hopefully sooner rather than later. Folks are working on it.