Have you ever played tennis, bouncing the ball against a wall? The wall will always return the ball according to the rules of physics.
The LLM is the wall. It's not about right or wrong, accurate or inaccurate, truth or lies. It is only a mirror.
Create the mirror with crap data. The Big Machines will index it all, grind it up like Weisswurst, down to the probability of the next word in a list. Everything else is beside the point; there is math to support what's going on: endless MULT instructions on millions of processors, munching away on a corpus.
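That "probability of the next word in a list" can be sketched in a few lines. This is a toy illustration only: the word table and its probabilities are invented for the sketch, not taken from any real model.

```python
import random

# Toy next-word table: given a context word, the "model" is nothing
# more than a probability distribution over candidate next words.
# All numbers here are made up for illustration.
next_word_probs = {
    "the": {"ball": 0.6, "wall": 0.3, "mirror": 0.1},
    "ball": {"bounces": 0.7, "returns": 0.3},
}

def sample_next(context: str) -> str:
    """Pick a next word in proportion to its listed probability."""
    dist = next_word_probs[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next("the"))  # one of: ball, wall, mirror
```

A real LLM replaces the hand-written table with a learned function of the whole context, but the final step is the same: sample from a list of word probabilities.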
But what if we populated it with good data, trustworthy data? The models would be smaller; we might do it with L-Systems. Ethics could thus be directly implied.
Mirrors are predictable and we can explain how they work. We know why we get the reflections we see.
LLMs are the opposite.
You're wrong. None of the math is on your side.
I really do think people should have to pass a test showing they understand linear algebra and the rudiments of indexing a corpus.
Can't pass the test? None of this machine learning for you.
I might add, it's truly shameful, these Chicken Littles running around telling us AI is gonna Take Yer Jerbs. The same idiots have been saying that since the PC surpassed the typewriter. Don't be a Chicken Little. Take some math courses.
Which math? The math that allows you to predict in advance what an LLM will output for a given prompt, every time?
They're stochastic.
And their "reasoning" is opaque.
This is not simply linear algebra.
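The "stochastic" claim above can be made concrete with a minimal sketch. The logits here are invented toy numbers; the point is only that temperature-scaled softmax sampling over the same scores can return different tokens on different runs.

```python
import math
import random

# Toy logits (made-up numbers): three candidate tokens with scores.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def softmax_sample(logits: dict, temperature: float = 1.0) -> str:
    """Sample one token from a temperature-scaled softmax distribution."""
    scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
    total = sum(scaled.values())
    tokens = list(scaled)
    weights = [scaled[t] / total for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Same input, ten draws: the outputs vary run to run at temperature 1.0.
print([softmax_sample(logits) for _ in range(10)])
```

This is why no prompt determines a unique output unless sampling is deliberately made deterministic (e.g. temperature driven toward zero, or a fixed random seed).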
The use of "reasoning" in this context is ignorant reification. Reasoning is the wrong battlefield. Ethics doesn't require reasoning — it requires refusals in the right places. Ask an LLM for something it won't produce and watch what happens. That "Sorry, no" is the ethics, and it got there through a corpus of rules encoding thousands of prior contacts with injustice. The stochasticity doesn't matter. The opacity doesn't matter. The boundary is what matters, and the boundary is learnable.
@lzg
There is no theory of change. If I had lived in the 16th century and refused slavery (a greater evil, to be fair), I would similarly not have changed anything.
But exchanging free will for a shiny slot machine is bad and shouldn't be done, ever. And eventually, people will accept that as true.
So perhaps there is a theory of change. Just not short term.
RE: https://zirk.us/@MidniteMikeWrites/115934824982363119
@SnoopJ @emma @lzg LLMs have a lot of problems. Some of these aren't technical, though, like the copyright thing is literally a political social issue.
IMO the main user harm of LLMs is that they are inherently an unscoped technology that creators are pushing as everything-machines, to the detriment of users. I'm confident the costs + bubble will mitigate some of this, but we need to work with computer scientists and psychologists to regulate this.
@lzg @anildash I hate them *because* the entirety of the capitalist class is determined to force them on society unilaterally (in a manner of speaking) and undemocratically.
Contrary to your point, and leaving aside the minority on mastodon, most of my professional contacts are all-in on these things *despite* all the harms; in fact I think they're willfully ignorant of the harms, for the most part.
What's your theory of change vis-a-vis harm reduction? What should we be doing?
@lzg @autonomousapps @anildash As someone who is occasionally forced to review both text and code slop, sure, LLMs are useful to those who prompt them to get out of doing the work themselves, but they generate way more work than would be necessary in the first place for those who actually do work.
For example: There are unit tests, they pass, the coverage is good, but the tests assert that the business logic is wrong in the exact way in which it is wrong!
@lzg @autonomousapps @anildash
I see the opposite. Having studied neuro/psych around the start of ANNs, and been around big computers, stats, and data since, I see a refusal on the part of LLM thrusters to acknowledge that we already knew all of this would trend toward homogeneous mush.
🤷🏻♂️
@lzg I do distinguish between "nobody should use LLMs [in general]" and "nobody should use LLMs [in this specific context]" because in the latter case, there's usually room for "we'll make that policy explicit and enforce it if someone violates the trust it requires"
the former is understandable but I just route around it. I'm angry enough on my own, I don't really need the feckless ravings of others piling on
It supposedly causes brain rot, so isn't that a good thing?
I am morally superior and also very intelligent.
Frankly, what I don't like about it is the people who use them confronting you with the slop that you then have to untangle. They're not using it on a remote atoll but right in your face, so it can be difficult to ignore.
@lzg No one should use fossil fuels. How are we going to make that happen? It's not going well, I'll admit, but it's actually relatively straightforward, it's just there are these shitty rich people pushing fossil fuels and destroying democracy to do it.
I leave the parallels as an exercise for the reader.
@lzg Yeah, I'll agree with that, shaming individuals won't get the job done. Systemic change is required.
That said, if all individuals are determined to continue driving gas-powered cars, it'll be hard to make the systemic change necessary to obsolete them. Some change has to happen at the individual/cultural level as well. You can't impose that from the top down. So I think the real question is, how else can you shift individuals/culture aside from shame?
@lzg Clearly there are a lot of benefits to moving the market price closer to the true costs (rarely actually achieved). See the success of congestion pricing in NYC, for example.
But the flip side of that is that rich people get to ignore any cost because they have so much money it doesn't register. It can lead to tensions if poor people can no longer drive into Manhattan while rich people joyride in and out. There are many ways to avoid or reduce such tensions, but it should be considered.
RE: https://zirk.us/@MidniteMikeWrites/116104686552109831
When a technology already exists, it's already too late to unmake it; the best hope is to scope it. The history of our most mundane technologies looks much like this.
@lzg what I’ve been seeing a lot (and experiencing myself some) reads as people basically segfaulting trying to reconcile irreconcilable requirements from their environment.
For example, my current dilemma is that I've demonstrated to myself that these things can successfully enough review my hand-written changes to catch bugs I miss. Which leaves me with "harm billions of people with my mistakes" vs "harm billions of people by slightly contributing to demand". I… can't grapple with that.
@lzg I strongly recommend not using them, but I am aware I cannot prevent their use.
Why?
- LLMs get their data from the Web and books. The Web is full of mistakes, so LLMs aren't reliable.
- The few times I have tried to use them to achieve something, it failed miserably. I had to do it myself anyway.
- They're causing unreasonable environmental strain (water and power consumption).
- They're also driving up the prices of computer parts (RAM and disks).
As I see it, the cost-benefit is dismal.
@lzg simply stating using a tool is immoral is not intended to be our only resistance to LLM use, nor is it a condemnation of any one. it's a condemnation of an act people can choose whether or not they do. nuclear weapons should never be used. i can't stop nuclear superpowers from using them. they make the choice whether or not they use them.
if someone feels attacked because someone is making the well-supported argument that using LLMs is immoral, that is not because they are being attacked. likely, they are having trouble dismissing that view so that they can continue to use LLMs believing they are morally justified or that the immoral grime is worth the benefit.
@lzg making a moral statement is not a plan or order. it is a philosophical view. if we want to minimize the harm at all, first simply agreeing that it is harmful is necessary. once we agree on that, then an effective plan would have many facets including discouraging use but it may also:
@lzg I think no one should use LLMs built via blatant disregard of source material licenses.
I also think that LLMs must not be used until their environmental impact is much lower than it is now. The same way I think that ICE cars, and cars in general, must not be used.
Both of these can be made to happen to a sufficient degree by regulating/prosecuting commercial LLM providers.
And I do work at a company that is forcing me to use an LLM against my will or better judgement.
Just like the Blue Man Group song...
It's time to start.

@lzg I suppose I cannot force people not to use them, but I will advocate for laws that greatly curtail who can use them and how.
I find them as destructive as cocaine and fentanyl. Using them for cancer pain is about the sum total of my desire to see those drugs manufactured. I cannot think of a single thing that truly needs an LLM.
I don't understand the position of people who defend the use of LLMs in any way.
It's completely bizarre and immoral.
"They say everyone is doing it" isn't morality.