I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I don't understand the position of "NO ONE should use LLMs at all," because how is that supposed to happen? no one should be *forced* to use them, but plenty of people are using them now. it's not something you can wish away or end via moral condemnation.
@lzg my issue is, even if you feel that way… what’s the plan? This is the stance that failed with social media, failed with ride sharing apps, failed with crypto. Even if critics were morally right to say “nobody should ever use this”, they didn’t succeed in harm reduction. And that has to matter more than smugly being “right” when the stakes are this high.
@anildash @lzg what if my concern is in fact that many people are being *forced* to use them? and many sectors of society, such as education, are having these tools forced on them as well, with measurably bad impacts on those least able to bear or resist them
@autonomousapps @anildash of course, but that's also true of surveillance tech. it's a concern about labor, education, justice. it's not necessarily about LLMs. I am interested in that better world where we mitigate the harms of these tools existing, instead of trying to hate them out of existence.

@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.

Contrary to your point, and leaving aside the minority on mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact I think they're willfully ignorant of the harms, for the most part.

What's your theory of change vis-a-vis harm reduction? What should we be doing?

@autonomousapps @anildash I think first we should have better arguments, and that means retiring the outdated or weak ones (stochastic parrots, dubious environmental numbers, copyright-based claims). IMO this is necessary for proposing regulation that makes *some* kind of sense, including getting consumer prices more in line with the actual cost of use. Just generally, I would love to broaden this conversation to more extractive tech, not only LLMs.
@autonomousapps @anildash my frustration is that, in many leftist environments, the attitude has been "I hate AI and I refuse to believe anyone could find valid use cases and I refuse to learn any more than I already know". this limits how much we can even talk about it.
@lzg @anildash I would also love to broaden the conversation to extractive tech, basically all of Silicon Valley. Too much power, no consequences for bad behavior. eg, ride "sharing" apps: completely illegal at the start, drove numerous taxi drivers to suicide, and this ultimately hasn't mattered because we don't actually live in a democratic society. LLMs are yet another example of this. I would barely care about them except it's clear the CEO class sees them as a way to rid itself of workers
@lzg @anildash in case it lends me any credibility, I recently lost my job when my company axed 40% of its workforce; they cited AI productivity as the reason. I'm now doing consulting, and one of my contracts is writing a "skill" for "agents." I can see value in writing a ~tool that solves a problem for a large class of devs. But the failure mode is so hilariously bad. The things lie all the time. And they're grossly sycophantic. I'm doing it because I could have said no. Having agency is important.