I completely understand the position of people who don't want to use LLMs or consume any content produced with LLMs. I do not understand the position of "NO ONE should use LLMs at all" because how are you planning to make that happen? no one should be *forced* to use them, but plenty of people are using them now. it's not something you can wish away or achieve via moral condemnation.
i just want to know what the theory of change is, beyond being really angry at the whole thing
@lzg
I think that if the price to consumer approximated actual cost of use, it’d be good for all of us.

@lzg

Have you ever played tennis, bouncing the ball against a wall? The wall will always return the ball according to the rules of physics.

The LLM is the wall. It's not about right or wrong, accurate or inaccurate, truth or lies. It is only a mirror.

Create the mirror with crap data. The Big Machines will index it all, grind it up like weisswurst, down to the probability of the next word in a list. Everything else is beside the point, there is math to support what's going on, endless MULT instructions on millions of processors, munching away on a corpus.

But what if we populated it with good data, trustworthy data? The models would be smaller, we might do it with L-Systems. Ethics could thus be directly implied.

@tuban_muzuru @lzg

Mirrors are predictable and we can explain how they work. We know why we get the reflections we see.

LLMs are the opposite.

@jamesbritt @lzg

You're wrong. None of the math is on your side.

@jamesbritt @lzg

I really do think people should have to pass a test showing they understand linear algebra and the rudiments of indexing a corpus.

Can't pass the test? None of this machine learning for you.

@jamesbritt @lzg

I might add, it's truly shameful, these Chicken Littles running around telling us AI is gonna Take Yer Jerbs. The same idiots have been saying that since the PC surpassed the typewriter. Don't be a Chicken Little. Take some math courses.

@tuban_muzuru @lzg

Which math? The math that allows you to predict in advance what an LLM will output for a given prompt, every time?

They're stochastic.

And their "reasoning" is opaque.

This is not simply linear algebra.

@jamesbritt @lzg

The use of "reasoning" in this context is ignorant reification. Reasoning is the wrong battlefield. Ethics doesn't require reasoning — it requires refusals in the right places. Ask an LLM for something it won't produce and watch what happens. That 'Sorry no' is the ethics, and it got there through a corpus of rules encoding thousands of prior contacts with injustice. The stochasticity doesn't matter. The opacity doesn't matter. The boundary is what matters, and the boundary is learnable.

@lzg
There is no theory of change. If I had lived in the 16th century, and refused slavery (a greater evil, to be fair) I would similarly not have changed anything.

But exchanging free will for a shiny slot machine is bad, and shouldn't be done, ever. And eventually, people will accept that as true.

So perhaps there is a theory of change. Just not short term.

@lzg I think it's like when people say "nobody should shop at Amazon" or "nobody should keep slaves". it's a throwaway articulation of a moral stance with no theory of change or further planning behind it.
@lzg my issue is, even if you feel that way… what’s the plan? This is the stance that failed with social media, failed with ride sharing apps, failed with crypto. Even if critics were morally right to say “nobody should ever use this”, they didn’t succeed in harm reduction. And that has to matter more than smugly being “right” when the stakes are this high.
@anildash @lzg so what does harm reduction look like? What's a needle exchange or methadone for LLMs?
@emma @lzg closest thing I've seen to this is academic work on "fingerprinting" generated text. It's not an intractable problem, but it does require 'literally any will' on the part of the model vendors and is therefore a total nonstarter unless it can be used to abuse/track public citizens or whatever.

RE: https://zirk.us/@MidniteMikeWrites/115934824982363119

@SnoopJ @emma @lzg LLMs have a lot of problems. Some of these aren't technical, though, like the copyright thing is literally a political social issue.

IMO the main user harm of LLMs is that they are inherently an unscoped technology that creators are pushing as everything machines, to the detriment of users. I'm confident the costs + bubble will mitigate some of this, but we need to work with computer scientists and psychologists to regulate this.

@anildash @lzg what if my concern is in fact that many people are being *forced* to use them? and many sectors of society, such as education, are having these tools forced on them as well, with measurably bad impacts to those least able to bear them or resist them
@autonomousapps @anildash of course, but that's also true of surveillance tech. it's a concern about labor, education, justice. it's not necessarily about LLMs. I am interested in that better world where we mitigate the harms of these tools existing, instead of trying to hate them out of existence.

@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.

Contrary to your point, and leaving aside the minority on mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact I think they're willfully ignorant of the harms, for the most part.

What's your theory of change vis-a-vis harm reduction? What should we be doing?

@autonomousapps @anildash I think first we should have better arguments, and that requires letting go of the outdated or weak ones (stochastic parrots, dubious environmental numbers, copyright-based claims). IMO this is necessary for proposing regulation that makes *some* kind of sense, including achieving some price to consumers more in line with the cost of use. Just generally I would love to broaden this conversation to more extractive tech, not only LLMs.
@autonomousapps @anildash my frustration is that, in many leftist environments, the attitude has been "I hate AI and I refuse to believe anyone could find valid use cases and I refuse to learn any more than I already know". this limits how much we can even talk about it.
@lzg @anildash I would also love to broaden the conversation to extractive tech, basically all of Silicon Valley. Too much power, no consequences for bad behavior. eg, ride "sharing" apps: completely illegal at the start, drove numerous taxi drivers to suicide, and this ultimately hasn't mattered because we don't actually live in a democratic society. LLMs are yet another example of this. I would barely care about them except it's clear the CEO class sees them as a way to rid itself of workers
@lzg @anildash in case it lends me any credibility, I recently lost my job when my company axed 40% of its workforce; they cited ai productivity as the reason. I'm now doing consulting and one of my contracts is writing a "skill" for "agents." I can see a value in writing a ~tool to solve a problem for a large class of dev. But the failure mode is so hilariously bad. The things lie all the time. And they're grossly sycophantic. I'm doing it bc I could have said no. Having agency is important

@lzg @autonomousapps @anildash As someone who is occasionally forced to review both text and code slop, sure, LLMs are useful to those who prompt them to get out of doing the work themselves, but they generate way more work than would be necessary in the first place for those who actually do work.

For example: There are unit tests, they pass, the coverage is good, but the tests assert that the business logic is wrong in the exact way in which it is wrong!
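To make that failure mode concrete, here's a minimal hypothetical sketch (the function and test are invented for illustration, not from the thread): a discount rule with an off-by-one bug, plus a generated unit test that asserts the code's actual behavior rather than the spec, so the suite is green, coverage looks fine, and the bug ships anyway.

```python
def bulk_discount(quantity: int) -> float:
    """Intended business rule: 10% off for orders of 10 or more items."""
    # Bug: '>' should be '>='; an order of exactly 10 gets no discount.
    return 0.10 if quantity > 10 else 0.0

def test_bulk_discount():
    # A test generated from the code asserts the wrong behavior
    # in the exact way in which it is wrong: it "locks in" the bug.
    assert bulk_discount(10) == 0.0   # per the spec, this should be 0.10
    assert bulk_discount(11) == 0.10

test_bulk_discount()  # passes; the reviewer has to spot the mismatch by hand
```

A reviewer now has to untangle which of the two artifacts, code or test, encodes the actual requirement; neither one can be trusted as the oracle.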

#noAI #AI #LLM #LLMs #vibeCoding #slop #genAI

@lzg @autonomousapps @anildash

I see the opposite. Having studied neuro/psych around the start of ANNs, and been around big computers, stats, and data since, I see a refusal on the part of LLM boosters to acknowledge that we already knew all of this would trend toward homogeneous mush.

🤷🏻‍♂️

@anildash
When enough people decide to do a bad thing, the thing doesn't become good, merely common.

You cannot really stop it.

But it's still bad, and societies eventually improve.

(crypto and LLMs are broadly scams, and ride share is a con, so their long term sustainability is not good)

@lzg

@anildash @lzg to switch areas, it has largely failed the environmental movement too. Even if we do finally end up with no one using ICE cars, it won't really be due to the movement. Similarly with burning coal or using nuclear.

@lzg I do distinguish between "nobody should use LLMs [in general]" and "nobody should use LLMs [in this specific context]" because in the latter case, there's usually room for "we'll make that policy explicit and enforce it if someone violates the trust it requires"

the former is understandable but I just route around it. I'm angry enough on my own, I don't really need the feckless ravings of others piling on

@lzg that is: I do believe in that position, but I also accept that I live in the real world where it cannot be enforced in general and I've gotta pick my battles blah blah

@lzg

It supposedly causes brain rot, so isn't that a good thing?

I am morally superior and also very intelligent.

Frankly, what I don't like about it is the people who use them confronting you with the slop that you then have to untangle. They are not using it on a remote atoll but right in your face, so it can be difficult to ignore.

@lzg No one should use fossil fuels. How are we going to make that happen? It's not going well, I'll admit, but it's actually relatively straightforward, it's just there are these shitty rich people pushing fossil fuels and destroying democracy to do it.

I leave the parallels as an exercise for the reader.

@skyfaller Right but we're not (I hope) out there yelling at individual people who use cars "you're destroying the fucking world" as our only means of activism. there's research, there's decarbonization goals, there's clean energy development, idk. other things.

@lzg Yeah, I'll agree with that, shaming individuals won't get the job done. Systemic change is required.

That said, if all individuals are determined to continue driving gas-powered cars, it'll be hard to make the systemic change necessary to obsolete them. Some change has to happen at the individual/cultural level as well. You can't impose that from the top down. So I think the real question is, how else can you shift individuals/culture aside from shame?

@skyfaller I think in the case of clean energy there's been some clear economic incentives. biking is good! electric cars are becoming cheaper! in my opinion LLMs are too cheap for consumers, compared to the cost of running them. in a better world they could be used in actual applications and the price of access would reflect that.

@lzg Clearly there are a lot of benefits to moving the market price closer to the true costs (rarely actually achieved). See the success of congestion pricing in NYC, for example.

But the flip side of that is that rich people get to ignore any cost because they have so much money it doesn't register. It can lead to tensions if poor people can no longer drive into Manhattan while rich people joyride in and out. There are many ways to avoid or reduce such tensions, but it should be considered.

@lzg @skyfaller One of the differences is that the oil-based economy has been well established for over a century, so it is something we have to wind down. But AI economy is very early in the process of being constructed: In the UK in the face of an energy crisis there are proposals to double our national electricity consumption to support data centres. That's insane. So it is about preventing these things before they happen or become too embedded to remove easily, as is the case with oil now.
@skyfaller @lzg also LLMs comparatively just came onto the scene, you can argue they are already too entrenched to fail but I think it's not too late

RE: https://zirk.us/@MidniteMikeWrites/116104686552109831

@aburka @skyfaller @lzg

When a technology already exists it's already too late to unmake it, the best hope is to scope it. The history of our most mundane technologies looks much like this.

@lzg i cannot wish away right wing politics, but i still morally condemn them, is that wrong?
@bikubi no you can do whatever you want. good luck.

@lzg what I’ve been seeing a lot (and experiencing myself some) reads as people basically segfaulting trying to reconcile irreconcilable requirements from their environment.

For example my current dilemma is that I’ve demonstrated to myself that these things can successfully-enough review my hand written changes to catch bugs I miss. Which leaves me with “harm billions of people with my mistakes” vs “harm billions of people by slightly contributing to demand”. I… can’t grapple with that.

@Catfish_Man yeah. it's a tough choice but sometimes I think it's in the realm of tough choices we make when we buy a new car or produce a stupid tshirt. millions of people suffering for our little goals.
@lzg What if we all used the Glower of Disapprobation
@lzg i dunno, when one side gets to believe they're bringing about the antichrist, i feel people are entitled to believe in something better no matter how impossible

@lzg I strongly recommend not using them, but I am aware I cannot prevent their use.

Why?

- LLMs get their data from the Web and books. The Web is full of mistakes, so LLMs aren't reliable.
- The few times I have tried to use them to achieve something, they failed miserably. I had to do it myself anyway.
- They're causing unreasonable environmental strain (water and power consumption).
- They are also driving up the price of computer parts (RAM and disks).

As I see it, its cost-benefit is dismal.

@lzg simply stating that using a tool is immoral is not intended to be our only resistance to LLM use, nor is it a condemnation of any one person. it's a condemnation of an act people can choose whether or not to commit. nuclear weapons should never be used. i can't stop nuclear superpowers from using them. they make the choice whether or not they use them.

if someone feels attacked because someone is making the well-supported argument that using LLMs is immoral, that is not because they are being attacked. likely, they are having trouble dismissing that view so that they can continue to use LLMs believing they are morally justified or that the immoral grime is worth the benefit.

@tyzbit I want to hear about a better world where the harm of LLMs is mitigated, or avoided altogether. What does that world look like? abstinence only is not usually a good strategy.

@lzg making a moral statement is not a plan or an order. it is a philosophical view. if we want to minimize the harm at all, first we simply have to agree that it is harmful. once we agree on that, an effective plan would have many facets, including discouraging use, but it could also:

  • make the real cost of LLMs more widely known (right now they are subsidized and obscured)
  • make the actual efficacy of LLMs more understood. the messaging advertises magic essentially, but has an offhand footnote that the magic is not real. maybe further development will help this, but i know of no model that is actually trained morally.
  • find justice for the fact the companies making LLMs pirated huge amounts of data illegally
  • etc

@lzg I think no one should use LLMs built via blatant disregard of source material licenses.

I also think that LLMs must not be used until their environmental impact gets to be much lower than it is now. The same way I think that ICE cars and cars in general must not be used.

Both of these can be made to happen to a sufficient degree by regulating/prosecuting commercial LLM providers.

And I do work at a company that is forcing me to use an LLM against my will or better judgement.

@lzg Well, nobody uses NFTs anymore.
@rhelune yeah maybe they were embarrassing enough
@lzg Easily. We’ll destroy the datacenters and sabotage attempts to repair / rebuild them. You can’t use them if I blow up the facilities, can you? 😁
@lzg @oberstenzian When do we start? :P
@tk @lzg Now. Start now. Really. Start now.

@oberstenzian @tk @lzg

Just like the Blue Man Group song...

It's time to start.

https://www.youtube.com/watch?v=26REYsR0K_I

Blue Man Group - Time To Start

@lzg I suppose I cannot force people not to use them, but I will advocate for laws that greatly curtail who can use them and how.

I find them as destructive as cocaine and fentanyl. Using them for cancer pain is about the sum total of my desire to see those drugs manufactured. I cannot think of a single thing that truly needs an LLM.

@lzg

I don't understand the position of people who defend the use of LLMs in any way.

It's completely bizarre and immoral.

"They say everyone is doing it" isn't morality.