The LLM discourse on the Fediverse has really irked me the last few days.

Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

@reading_recluse I spend a lot of my time these days trying not to send middle finger emojis to people to be honest... Probably isn't very socially acceptable, but it's how I feel.
@reading_recluse 💯 agree. I angrily think: how dare you. How dare you waste my precious time on earth with this trash.
@reading_recluse @mollymay5000 what if this reply were written by an AI assistant? #claw
@zzeligg @reading_recluse @mollymay5000 Exactly, how would you know?
@zzeligg @reading_recluse @mollymay5000 I don't need to know to block you two assholes / clowns, you've already given me enough information!

@mollymay5000 @reading_recluse
putting middle-finger emojis in my texts because almost all other emojis indicate that it was written by an LLM

(JFC I fucking hate those texts or even technical documentation full of 🤓 ✨ 😊 🚀 and IDK what other shit)

@Doomed_Daniel @mollymay5000 @reading_recluse maybe explore more of what unicode has to offer? 𓂺

@hippiegunnut @mollymay5000 @reading_recluse
this is awesome, how did I not know this?

why do emojis even exist when hieroglyphs were right there?!

@reading_recluse You do wear machine-woven cloth, though, no?

Seriously: Why?

It's exploitative, the quality is mediocre, it kills jobs, it's a waste of resources, consumes vast amounts of energy, hinders creativity, destroys small businesses, forces uniformity onto people ... why wear it?

Because not doing so would be a waste of time. And time is the one resource that's (still) strictly limited for all of us. We compromise on the quality of clothing (debatable), in order to do other things we couldn't if we were still weaving cloth manually.

When mechanical weaving machines came about, the workers threw their wooden shoes (French: 'sabots') into the machines to stop them.

All that is left of this effort is a word describing the futile attempt: Sabotage.

So protest all you like, it's just not going to get you anywhere.

@papageier @reading_recluse a) clothing is to some degree essential. A clothing industry has to exist.
b) we may still complain about the bad practices of said industry, do what we can to mitigate them, demand legislation to regulate it, and choose providers that operate more responsibly to the degree that we can afford it. Plenty of people with the skills still make some of their own clothes. We don't have to silently accept the bad things.
eta: what the AI companies want to sell is not the clothing, but the machine to enslave the people who make the clothing.

@papageier
You are right that there has always been protest against mechanising jobs. Blacksmiths when a nail-cutting machine was invented, for example.

There is a difference here since a notable portion of its function is at the academic level.

So let's say I need to write a book report: instead of reading the book, I read an LLM summary, then write and publish my report.

Next person comes along and does the same thing, except now the LLM is referencing my report, which was based on an LLM summary. This repeats until all academic value has been drained from the source material. Are we at a net gain or loss of intelligence after this happens?
@reading_recluse @papageier

@brokenshell @reading_recluse I honestly don't know. I had presumed LLM output to deteriorate over time, as AI output appears on the Web and is used to train NextGen AI. However, so far I stand corrected. Latest LLM versions are doing astonishingly well, and the limits are not yet in sight.

Yes, it is probably a hard experience for academics to suddenly face the same fate as simple workers (like weavers) 150 years ago. Because they always felt superior, and therefore safe? Maybe. This alone should teach us a lesson.

But the underlying truth is: if you can automate something in a disruptive manner, someone will always do it. All others have no choice: follow suit, find a niche or suffer economic death.

@papageier
The problem with the weaver analogy: imagine the weaving machine changed the sweater slightly each time. After enough time the sweater isn't going to fit a human, making it useless.
That's closer to the reality of LLMs.

I would love to see LLMs used to their full potential in manufacturing, but I have my doubts when they are being used to replace human comprehension.
Time will tell!
@reading_recluse

@brokenshell @reading_recluse I don't know what the quality of early mechanical looms was; I presume there was also some space for improvements.

However, the analogy was chosen with a grain of salt. Of course the situations are not one and the same, but it's close enough for argument's sake.

@papageier @reading_recluse I've done some weaving with a manual loom, and I think your attempt to draw a parallel between machine weaving and LLMs is absurd, wrong in most of the specifics and missing the point of much LLM criticism.

@papageier @reading_recluse

Go back and read up on what the Luddites were actually protesting, jackass. They were not mindless technophobes.

Machine-woven cloth IN AND OF ITSELF is NOT inherently exploitative. It could have been used instead to elevate and improve the textile trade, making life easier for the workers.

Instead, the way the capitalists weaponized the tech to devalue labor was fucking evil.

Tech is not inherently good or bad. It's just a tool.

"AI" and LLMs, as they are currently being designed and deployed, are a tool being used as a WEAPON. Child-raping technofascist planetwreckers are using them to enclose the digital commons, jam any useful signals they don't control, and surveil the everloving shit out of everyone everywhere.

If we don't protest like our lives depend on it, NOW, things are going to get unimaginably and horrifyingly fucking bad.

@papageier @reading_recluse what a load of brain rotted crap
@sortius @reading_recluse What a simplistic answer. Care to elaborate?
@papageier @reading_recluse nope. I don't put effort into brain-dead pro-LLM "people"

@papageier @sortius @reading_recluse You've basically shown you know very little about industry history, the reasons behind its mechanization, or the current forced push of LLMs in modern companies.

Please document yourself.

@Enthalpiste @sortius @reading_recluse While it is certainly possible that there were also other economic reasons behind the rise of mechanical looms, the fundamental driver was the same one that now drives the progress of AI/LLMs: improve productivity, reduce costs, make more money. Frederick Winslow Taylor all over. If you don't believe it, I have nearly a trillion dollars in the market to prove my point.

So documented.

Now, let's hear your expertise, shall we?

@papageier @sortius @reading_recluse It is more complicated than that, and private industrial financial incentives are only a limited and poor explanation. There have always been arguments for the state to keep people in the countryside and the economy centered on crafts: this keeps people busy in the fields, and goods of high quality reserved for the wealthiest, as they require a high amount of labour to make. The push for mechanization was essentially caused by a combination of extrinsic and intrinsic factors, mainly armament and colonial expansion for the exploitation of colonies. Industrialisation is a by-product of that. This is discussed in the scientific literature: https://press.princeton.edu/books/paperback/9780691247489/the-wealth-of-a-nation

Also, I teach the basics of the scientific study of work in an industry-oriented curriculum at university and have integrated such information into one of my introductory classes. Taylor's views are interesting to know in this context but clearly outdated and historically incorrect.

@Enthalpiste @sortius @reading_recluse As I have mentioned above: there may have been other factors behind mechanical looms. An argument putting colonial interests in the driver's seat is, however, equally misguided. It was Adam Smith (if I recall correctly) in his Inquiry into the Wealth of Nations who emphasized that national prosperity is always a secondary effect of individuals seeking personal profit.

If that is not entirely off the mark (and I'd like to say so much for poor old A. Smith), then the same is true for LLMs.

AI has military implications for sure, and may have - horribile dictu - even neo-colonial uses. But if you are in the IT industry where manual implementation is the most expensive bottleneck, then using AI to implement plain old Taylorism is a perfectly sound strategy. No military or national or colonial interests required.

Call it poor judgement on my side if you will, but this is what I see and what I hear from fellow IT managers. 🤷

@papageier @reading_recluse

This is such a lazy argument: You can WEAR clothing.

No one wants to read AI generated text, AI images are hideous. Beyond some niche industrial cases, which are not the focus of the hyperscalers, LLMs are generally useless and a massive waste of resources. The entire industry is based on a speculative, utopian fantasy of creating an AGI that will solve all problems. It's utopian fantasy mixed with sunk-cost fallacy.

It's like saying "why eat human shit when you can now eat robot-generated shit?"

Also sabotage and worker resistance got workers everything they ever had.

@mook @reading_recluse Do you code? If so, do me a favor: get yourself a Claude Code account for a month (it's not very expensive) and try it out. I don't mean to advertise. I only recently found out myself that my nine-month-old impression of LLMs delivering pretty questionable results is outdated and invalid.

You can create useful, realistic stuff with recent models. Not perfect. Not huge projects. But not disposable code, either. Let those models improve like that for another 2-3 years, and the entire discussion will be futile. LLMs may then be faster and significantly better than your average developer. Then what? Refuse to use the software, because LLMs have always been shitty?

@papageier @reading_recluse

no i'm not a slop coder and I don't respect anyone who is, you're obviously a bot lol
@papageier @reading_recluse
Hand woven clothes are not generally superior, tho.
You know that, right? I mean, you do, right?
Hand weave a cotton t-shirt, please. Or a fleece jacket. Or tights. I would like to see that done.
LLMs are inherently racist, sexist, and reductive, because the online society they sample is racist, sexist, and reductive. It is baked in.

@Okanogen @reading_recluse I do know, and you are right. But from a code weaver's point of view, I don't care about the political bias of a tool. I care about its speed, versatility and code quality. And they are starting to look good.

I'm a bit tired of the argument that LLMs are dog shit, period. My own hands-on experience as someone who has been coding for 25 years says otherwise: this is a valid tool. I am now a decision maker. I have to decide what to do with it. If it works, I have the mechanical loom situation, precisely.

And let's not forget: companies who refused to use those looms went out of business sooner rather than later. Almost all of them.

@papageier @reading_recluse
The original post said absolutely zero about "code" and neither did I, but hit dogs will holler.

@Okanogen @reading_recluse Hit dogs will holler 🤔 - I never knew the English equivalent of my German "getroffene Hunde heulen". Quite a literal translation, actually. Thanks for that.

Wrt code - I didn't catch your point. Code happens to be my domain, yes. LLMs work in other domains as well. They are one and the same tool and work for text exactly as they do for code. For music or images, the technical approach is a bit different, but the mechanism is the same. So what is your argument? That my example is about 'just code', as opposed to 'real art'?

@papageier @reading_recluse
That LLMs "work" in other domains is a supposition, not a fact. A very, very, very much rejected supposition.
There is no such fucking thing as "artificial intelligence". Intelligence is either real or not real. What is kicked out to you isn't "code", it is a model of code. A facsimile. As a professional geologist I have made thousands of models; we don't pretend that they actually ARE the reality. The map is not the territory.

@Okanogen @reading_recluse Being a software guy, I am not going to argue about geology LLM results. I hope you trust my expertise in turn when I say: yes, it is code. And it is way better than it used to be half a year ago. It still needs some manual review and fixes, but it is perfectly usable.

Unless you are arguing from a philosophical point of view, saying that code is not code unless an actual coder has written it? In the same way cloth'd be no cloth if a weaver hasn't woven it. I'm afraid that would leave most of us pretty naked, at least philosophically.

@papageier
If you trust a large language MODEL with your employment, well, best of luck to you. I'm trying to point out that the word MODEL is telling you exactly what it is. It is a MODEL of the code you are asking for based on how you input the request.
The map is not the territory. Look up that phrase.
@papageier @reading_recluse All that's left of this is the union movement, e.g. workers' rights, 5 day working week, paid time off, etc etc
Dude. AI is just a statistics machine that you lie to so it will give you something it thinks, statistically, will fit what you said to it
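[The "statistics machine" point above can be made concrete. A minimal sketch of weighted next-token sampling, the basic sampling step behind LLM text generation; the function name, toy vocabulary and scores here are invented for illustration, not any real model's API:]

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Roll the 'weighted dice': sample one token from a logit distribution.

    logits: dict mapping candidate token -> raw score (higher = more likely).
    temperature: < 1 sharpens the distribution, > 1 flattens it.
    """
    rng = random.Random(seed)
    tokens = list(logits)
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max before exp() for numeric stability
    weights = [math.exp(s - m) for s in scaled]
    # Softmax weights; rng.choices picks one token proportionally to them.
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy continuations for "The cat sat on the ..."
scores = {"mat": 3.2, "hat": 1.1, "moon": 0.4}
print(sample_next_token(scores, temperature=0.8, seed=42))
```

[Nothing in this step checks truth: the sampler only picks what is statistically likely given the scores, which is the crux of the objection above.]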

@tinfoilchefspickaxe @reading_recluse Which is, from a software programmer's point of view, exactly what I need it to do.

I understand what LLMs are. I am neither pro nor con. I'm an observer. I observe a disruptive tool and compare it to other disruptive tools, to estimate how this may play out in the long run. And my estimate is: we are all going to wear machine-woven cloth. I don't insist this is correct, but I think my arguments are sound.

I could have chosen other disruptive technologies as well, like, the Web and its effect on Journalism for comparison, but looms seemed more fitting.

@papageier @reading_recluse Using a tool to neatly fold a thread upon itself millions of times to produce a sheet of cloth is not the same thing as rolling weighted dice to randomly generate what looks like writing. You are comparing apples to dog shit.

I'm pretty sure the people who protested textile factories were protesting the horrible working conditions, not the loss of skilled labor jobs. There was some resentment from skilled laborers toward less skilled laborers who could use more advanced tools to produce more textile faster & at a higher quality, but it was overwhelmingly not about that & these two groups were on the same side of the protests. I can't find much about France¹, but this is how it was in the USA & England. In English, people who refuse to adopt new tools that would improve their work & make it easier, preferring instead never to learn or even to get worse, are called Luddites.

@papageier @reading_recluse People who recognize when a method of producing something results in an inferior, or even nonexistent, product & seek to improve instead of worsen are called normal.

¹ Due to data pollution by LLM companies it's no longer easy to find information quickly. At this moment I don't have time to do a more in-depth search & filter out the noise, & I don't want to spare the time for a Luddite anyway.

@papageier @reading_recluse LLMs or agentic AI don't save time for anybody; they just produce junk that a human has to review and repair at an incredible cost: tons of energy, water for cooling, billions in investment (that could have been better put into solving REAL PROBLEMS INSTEAD OF CREATING NEW ONES), and stepping over the property rights of almost anybody who has ever uploaded anything to the Internet. "Like it or not, it's here to stay" is a woeful argument; hence the sabotage, the activism...
@papageier @reading_recluse And the middle finger. It's not a tantrum against unavoidable scientific progress or something like that, but a scream pointing out things that should already be acknowledged and fought back against.

Clothing is useful and necessary.

Autocorrect on steroids is neither.

@papageier @reading_recluse Your point is bad and you should feel bad.

@reading_recluse

LLMs are not an expression of speech nor creativity; they simply digest, explore and reorder available information. They are a tool, and can be useful for digesting and exploring information at great speed, but essentially they are not more than that.

For anything involving opinion, creativity, art and commentary, I will be looking at human expression, always.

The problem is that society will be confronted with loads of LLM nonsense and disinformation in due time. I'm seeing it online more and more.

@xs4me2 @reading_recluse

> can be useful to digest and explore information at great speed

Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.

@lproven @xs4me2 @reading_recluse

For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.

The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

That's a worthy advance.

@dynamite_ready

The problem is that LLMs just make things up. There are no new discoveries; there is no accurate information retrieval. But people don't notice, because they lack the expertise, they lack the ability to check.

LLMs cannot be trusted with anything. They are a sheer waste of our world's resources.

@lproven @xs4me2 @reading_recluse

@dynamite_ready @lproven @reading_recluse

It is the user and their skills indeed. A hammer can be used skillfully or wrong...

@xs4me2 @dynamite_ready @reading_recluse But it can't be used for brain surgery.

No, this is not a skills issue. It is based on profound misunderstanding. No they are not good search tools. No they are not good for research or learning, because they work only and entirely by *making stuff up* and if you're learning then you're not an expert and you can't tell true from false.

@lproven @dynamite_ready @reading_recluse

In my opinion, you are incorrect here: a user is always responsible for critically digesting what they take to be true, especially with tools. There is no substitute for critical thinking. And there never will be.

Truth and social surroundings are infinitely more complex than analyzing a game of chess.

@lproven @dynamite_ready @reading_recluse

LLMs do not make stuff up per se; they use data, including wrong data, and there lies the danger, together with the fact that they cannot referee what is right and what is wrong.