I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all," because how are you planning to make that happen? No one should be *forced* to use them, but plenty of people are using them now. It's not something you can wish away or achieve via moral condemnation.
@lzg my issue is, even if you feel that way… what’s the plan? This is the stance that failed with social media, failed with ride sharing apps, failed with crypto. Even if critics were morally right to say “nobody should ever use this”, they didn’t succeed in harm reduction. And that has to matter more than smugly being “right” when the stakes are this high.
@anildash @lzg what if my concern is in fact that many people are being *forced* to use them? And many sectors of society, such as education, are having these tools forced on them as well, with measurably bad impacts on those least able to bear or resist them.
@autonomousapps @anildash of course, but that's also true of surveillance tech. it's a concern about labor, education, justice. it's not necessarily about LLMs. I am interested in that better world where we mitigate the harms of these tools existing, instead of trying to hate them out of existence.

@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.

Contrary to your point, and leaving aside the minority on mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact I think they're willfully ignorant of the harms, for the most part.

What's your theory of change vis-a-vis harm reduction? What should we be doing?

@autonomousapps @anildash I think first we should have better arguments, and that requires giving up the outdated or weak ones (stochastic parrots, dubious environmental numbers, copyright-based claims). IMO this is necessary for proposing regulation that makes *some* kind of sense, including achieving a price to consumers more in line with the actual cost of use. More generally, I would love to broaden this conversation to extractive tech as a whole, not only LLMs.
@lzg
Without those "outdated or weak arguments", in what way is using LLMs immoral?
The environmental one was the strongest I could think of. How is it outdated or weak?
"Stochastic parrot" is an argument about effectiveness, not morality.
And copyright is illegitimate anyway. If AI manages to do away with it entirely, that would be a win.
@autonomousapps @anildash

@light @lzg @autonomousapps @[email protected] False, of course, because there's an issue of fraudulent advertising. An unreasoning device that merely assembles bits of text stochastically, in unthinking mimicry of the FORMS of meaningful speech, and that is incapable of distinguishing between truthful and false information (indeed, it is clearly intended NOT to make any such distinction, because the ability to discriminate between true and false information would interfere with the intended use cases for LLMs) is not actually intelligent, yet it's fraudulently marketed as intelligent.

The "stochastic parrot" is not merely lacking in efficacy (which is, I suppose, the only real concern a typically ethics-free computer geek would have); it is lacking in the very quality that LLM vendors claim makes it immediately necessary for society to shower them with money and "data centers" and legislative favors. "Parroting" is an unintelligent activity, @light@noc, not merely an ineffective one. Those who call out the stochastic parrot are pointing out that the promise of advanced LLM super-intelligence is a fraudulent one. This is about truth winning out over falsity.

Now, do us all a favor and go stick your head in a pig, "Light".

@mxchara
Go fuck yourself, shitwad