I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all" because how are you planning to make that happen? no one should be *forced* to use them, but plenty of people are using them now. it's not something you can wish away or achieve via moral condemnation.
@lzg my issue is, even if you feel that way… what’s the plan? This is the stance that failed with social media, failed with ride sharing apps, failed with crypto. Even if critics were morally right to say “nobody should ever use this”, they didn’t succeed in harm reduction. And that has to matter more than smugly being “right” when the stakes are this high.
@anildash @lzg what if my concern is in fact that many people are being *forced* to use them? and many sectors of society, such as education, are having these tools forced on them as well, with measurably bad impacts to those least able to bear them or resist them
@autonomousapps @anildash of course, but that's also true of surveillance tech. it's a concern about labor, education, justice. it's not necessarily about LLMs. I am interested in that better world where we mitigate the harms of these tools existing, instead of trying to hate them out of existence.

@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.

Contrary to your point, and leaving aside the minority on mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact I think they're willfully ignorant of the harms, for the most part.

What's your theory of change vis-a-vis harm reduction? What should we be doing?

@autonomousapps @anildash I think first we should have better arguments, and that requires retiring the outdated or weak ones (stochastic parrots, dubious environmental numbers, copyright-based claims). IMO this is necessary for proposing regulation that makes *some* kind of sense, including getting consumer prices more in line with the actual cost of use. More generally, I would love to broaden this conversation to extractive tech as a whole, not only LLMs.
@autonomousapps @anildash my frustration is that, in many leftist environments, the attitude has been "I hate AI and I refuse to believe anyone could find valid use cases and I refuse to learn any more than I already know". this limits how much we can even talk about it.
@lzg @anildash I would also love to broaden the conversation to extractive tech, basically all of Silicon Valley. Too much power, no consequences for bad behavior. eg, ride "sharing" apps: completely illegal at the start, drove numerous taxi drivers to suicide, and this ultimately hasn't mattered because we don't actually live in a democratic society. LLMs are yet another example of this. I would barely care about them except it's clear the CEO class sees them as a way to rid itself of workers
@lzg @anildash in case it lends me any credibility, I recently lost my job when my company axed 40% of its workforce; they cited AI productivity as the reason. I'm now doing consulting, and one of my contracts is writing a "skill" for "agents." I can see value in writing a ~tool that solves a problem for a large class of devs. But the failure mode is so hilariously bad. The things lie all the time. And they're grossly sycophantic. I'm doing it bc I could have said no. Having agency is important

@lzg @autonomousapps @anildash As someone who is occasionally forced to review both text and code slop: sure, LLMs are useful to those who prompt them to get out of doing the work themselves, but for those who actually do the work, they generate far more work than would have been necessary in the first place.

For example: there are unit tests, they pass, the coverage is good, but the tests assert the buggy behavior itself, so they only pass because the business logic is wrong in exactly that way!
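To make that failure mode concrete, here's a minimal sketch (the `shipping_fee` function, its spec, and the bug are all invented for illustration): a generated test derived from the code's actual behavior, bug included, rather than from the spec.

```python
# Hypothetical example: an off-by-one bug at a boundary.
def shipping_fee(weight_kg: int) -> int:
    """Intended spec: orders of 5 kg OR MORE ship free; otherwise $10."""
    if weight_kg > 5:  # BUG: should be >= 5
        return 0
    return 10

def test_shipping_fee():
    # Both branches covered, suite is green... but the first
    # assertion enshrines the bug: per the spec, 5 kg ships free.
    assert shipping_fee(5) == 10  # spec actually requires 0 here
    assert shipping_fee(6) == 0
    assert shipping_fee(4) == 10

test_shipping_fee()
```

A coverage tool reports 100% branch coverage and a passing suite; only a reviewer who reads the spec notices that the boundary assertion is wrong.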

#noAI #AI #LLM #LLMs #vibeCoding #slop #genAI

@lzg @autonomousapps @anildash

I see the opposite. Having studied neuro/psych around the start of ANNs, and been around big computers, stats, and data ever since, I see a refusal on the part of LLM boosters to acknowledge that we already knew all of this would trend toward homogeneous mush.

🤷🏻‍♂️

@lzg
Without those "outdated or weak arguments", in what way is using LLMs immoral?
The environmental one was the strongest I could think of. How is it outdated or weak?
"Stochastic parrot" is an argument about effectiveness, not morality.
And copyright is illegitimate anyway. If AI manages to do away with it entirely, that would be a win.
@autonomousapps @anildash
@light @lzg @autonomousapps @anildash I would say that using any corporate technology product is immoral on general principle: the entire industry is abetting global fascism, and #LLMs are useful chiefly because they're good at regurgitation—i.e. they're good at repeating propaganda.
@light @lzg @autonomousapps @anildash oh I think I recognize you, you're a sophistical shithead of some variety

@light @lzg @autonomousapps @[email protected] false, of course, because there's an issue of fraudulent advertising: an unreasoning device that merely assembles bits of text stochastically in unthinking mimicry of the FORMS of meaningful speech, and is incapable of distinguishing between truthful and false information (and indeed, is clearly intended NOT to make any such distinction because such ability to discriminate between true and false information would interfere with intended use-cases for LLMs) is not actually intelligent, yet it's fraudulently marketed as intelligent.

The "stochastic parrot" is not merely lacking in efficacy (which is, I suppose, the only real concern a typically ethics-free computer geek would have) but is lacking in the very quality which the LLM vendors are claiming make it immediately necessary for society to shower them with money and "data centers" and legislative favors. "Parroting" is an unintelligent activity, @light@noc, not merely an ineffective one. Those who call out the stochastic parrot are pointing out that the promise of advanced LLM super-intelligence is a fraudulent one. This is about truth winning out over falsity.

Now, do us all a favor and go stick your head in a pig, "Light".

@mxchara
Go fuck yourself, shitwad

@lzg @anildash I think there's another point that got lost in this. The various AI businesses are all cash incinerators, far in excess of Uber when it began. Their ultimate success rests on a narrative of inevitability which is far from certain. As such, I believe that ridicule is praxis. While we can't simply hate generative AI out of existence, perhaps we can mock it until it shrivels up like the limp little micro dick it is

whatever remains afterwards may be the actually-useful stuff

@lzg @autonomousapps @anildash "stochastic parrot" is an apt summary of how the devices work and thus a reminder that LLMs aren't actually intelligent. what's "outdated" about the phrase? or what about the phrase "doesn't work"?

and why on Earth would you be trusting government regulation to address the problem at a time when fascist Republicans control the government and Democrats mostly collaborate with Republicans?

@lzg @autonomousapps this @anildash dork burbles about having definite plans, but he's burbling in bad faith: the truth is that Mr. Dash-Dot-Com doesn't regard the fraudulent marketing of LLMs as a serious problem. I would guess that's where most #tech professionals stand: they've been willing already, for decades, to live with and prosper from a corporate technology sector that's normalized fraud and raising money via false promises. So, lacking any plans himself (beyond preserving a status quo in which he's comfortably placed) he mocks those who have a principled stance against fraudulent technology.

Why? Where can any action possibly start, except with principled opposition? The people who insist the most loudly that opponents must have carefully worked-out plans before they can be taken seriously are bad-faith actors like @anildash and other people who don't actually care that much whether LLMs are destructive and fraudulent.

@lzg @autonomousapps @anildash Why does it make ANY sense, especially in an Internet conversation—the rough equivalent of having an argument over dinner, perhaps—to be demanding PLANS from people?

What's actually wrong with good plain hatred of liars? The #technology sector is saturated with deceit. It's been like that long before the #LLM came into being as an actualization of their bad-faith approach to communication in general. It's a stupid device that can't tell truth from lies, and that's basically how the average tech exec or investor wants to live their whole life. Is there an actual PROBLEM with hating such a thing?

@lzg @autonomousapps The stakes are too high says @anildash and I agree—MUCH too high for anyone to endure his patent waffling and double-dealing.

@alyx_woodward @lzg @anildash since "stochastic parrots" keeps coming up, maybe it's worth checking in on one of the originators of the term (a scientist, not just some random Internet commentator!)

also ftr I believe the OG people in this thread at least are all operating in good faith, even though there are differences of opinion. there's no need for vitriol here

https://dair-community.social/@emilymbender/116304103647077081

Prof. Emily M. Bender (she/her) (@[email protected])

The first is a student who asked how to resist pressure to use "AI" without being a stick in the mud. I said: Be a stick in the mud! Help create the solid ground that others might stand on too. I shared this story with Sam Cole on the @[email protected] podcast, too: https://www.youtube.com/watch?v=UwBZiuH-1QY
