I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all," because how do you plan to make that happen? No one should be *forced* to use them, but plenty of people are using them now, and that's not a reality you can wish away or end through moral condemnation.
@lzg my issue is, even if you feel that way… what’s the plan? This is the stance that failed with social media, failed with ride-sharing apps, failed with crypto. Even if critics were morally right to say “nobody should ever use this”, they didn’t succeed at harm reduction. And that has to matter more than smugly being “right” when the stakes are this high.
@anildash @lzg so what does harm reduction look like? What's a needle exchange or methadone for LLMs?
@emma @lzg the closest thing I've seen to this is academic work on "fingerprinting" generated text. It's not an intractable technical problem, but it does require 'literally any will' on the part of the model vendors, and is therefore a total nonstarter unless it can be used to abuse/track private citizens or whatever.
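For the curious: the best-known academic scheme is a statistical "green list" watermark (Kirchenbauer et al., 2023), where the generator biases its sampling toward a pseudorandom, key-dependent subset of tokens, and a detector just checks whether that bias is present. Here's a toy sketch of the detection side; the whitespace tokenization, the SHA-256 stand-in for a keyed PRF, and the 0.5 green fraction are all illustrative simplifications, not any vendor's actual scheme:

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary the generator is biased toward

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by `prev_token`.

    A real scheme would use a keyed PRF over model token IDs,
    not SHA-256 over whitespace-split words.
    """
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detect(text: str) -> tuple[float, float]:
    """Return (observed green fraction, z-score against the unwatermarked null)."""
    tokens = text.split()
    n = max(len(tokens) - 1, 1)
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    p = GREEN_FRACTION
    z = (hits / n - p) / math.sqrt(p * (1 - p) / n)
    return hits / n, z

if __name__ == "__main__":
    # Human text should land near z = 0; a generator that favors green
    # tokens pushes z far positive (e.g. > 4) on long enough samples.
    rate, z = detect("the quick brown fox jumps over the lazy dog")
    print(f"green fraction = {rate:.2f}, z = {z:.2f}")
```

The catch is that the detector only means anything if the vendor biased generation in the first place, which is exactly the will problem above.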

RE: https://zirk.us/@MidniteMikeWrites/115934824982363119

@SnoopJ @emma @lzg LLMs have a lot of problems, and not all of them are technical: the copyright fight, for example, is a political and social issue, not an engineering one.

IMO the main user harm of LLMs is that they're an inherently unscoped technology whose creators push them as "everything machines," to users' detriment. I'm confident the costs and the bursting of the bubble will mitigate some of this, but we also need computer scientists and psychologists at the table to regulate the rest.