I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all," because how are you planning to make that happen? No one should be *forced* to use them, but plenty of people are using them now. It's not something you can wish away or achieve via moral condemnation.
@lzg my issue is, even if you feel that way… what's the plan? This is the stance that failed with social media, failed with ride-sharing apps, failed with crypto. Even if critics were morally right to say "nobody should ever use this," they didn't succeed at harm reduction. And that has to matter more than smugly being "right" when the stakes are this high.
@anildash @lzg what if my concern is in fact that many people are being *forced* to use them? and many sectors of society, such as education, are having these tools forced on them as well, with measurably bad impacts to those least able to bear them or resist them
@autonomousapps @anildash of course, but that's also true of surveillance tech. it's a concern about labor, education, justice. it's not necessarily about LLMs. I am interested in that better world where we mitigate the harms of these tools existing, instead of trying to hate them out of existence.

@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.

Contrary to your point, and leaving aside the minority here on Mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact, I think they're willfully ignorant of the harms, for the most part.

What's your theory of change vis-à-vis harm reduction? What should we be doing?

@autonomousapps @anildash I think first we should have better arguments, and that requires giving up the outdated or weak ones (stochastic parrots, dubious environmental numbers, copyright-based claims). IMO this is necessary for proposing regulation that makes *some* kind of sense, including getting consumer prices more in line with the actual cost of use. More generally, I'd love to broaden this conversation to extractive tech as a whole, not only LLMs.

@lzg @autonomousapps @anildash "stochastic parrot" is an apt summary of how the devices work and thus a reminder that LLMs aren't actually intelligent. what's "outdated" about the phrase? or what about the phrase "doesn't work"?

And why on Earth would you be trusting government regulation to address the problem at a time when fascist Republicans control the government and Democrats mostly collaborate with Republicans?

@lzg @autonomousapps This @anildash dork burbles about having definite plans, but he's burbling in bad faith: the truth is that Mr. Dash-Dot-Com doesn't regard the fraudulent marketing of LLMs as a serious problem. I would guess that's where most #tech professionals stand: for decades they've been willing to live with, and prosper from, a corporate technology sector that has normalized fraud and raising money via false promises. So, lacking any plans himself (beyond preserving a status quo in which he's comfortably placed), he mocks those who have a principled stance against fraudulent technology.

Why? Where can any action possibly start, except with principled opposition? The people who insist most loudly that opponents must have carefully worked-out plans before they can be taken seriously are bad-faith actors like @anildash and other people who don't actually care much whether LLMs are destructive and fraudulent.

@lzg @autonomousapps @anildash Why does it make ANY sense, especially in an Internet conversation—the rough equivalent of having an argument over dinner, perhaps—to demand PLANS from people?

What's actually wrong with good plain hatred of liars? The #technology sector is saturated with deceit. It was like that long before the #LLM came into being as an actualization of the sector's bad-faith approach to communication in general. It's a stupid device that can't tell truth from lies, and that's basically how the average tech exec or investor wants to live their whole life. Is there an actual PROBLEM with hating such a thing?

@lzg @autonomousapps The stakes are too high, says @anildash, and I agree—MUCH too high for anyone to endure his patent waffling and double-dealing.