The LLM discourse on the Fediverse has really irked me the last few days.

Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, and even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

@reading_recluse

LLMs are not an expression of speech or creativity; they simply digest, explore and reorder the information available to them. They are a tool, and can be useful for digesting and exploring information at great speed, but essentially they are no more than that.

For anything involving opinion, creativity, art and commentary, I will be looking at human expression, always.

The problem is that society will be confronted with loads of LLM nonsense and disinformation in due time. I'm seeing it online more and more.

@xs4me2 @reading_recluse

> can be useful to digest and explore information at great speed

Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.

@lproven @xs4me2 @reading_recluse

For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

But for information retrieval, LLMs are great, tbh... I'd argue that also includes those far-out stories about prompts leading to new scientific theories, or mathematical proofs.

The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

That's a worthy advance.

@dynamite_ready @lproven @reading_recluse

It is the user and their skills indeed. A hammer can be used skillfully or wrongly...

@xs4me2 @dynamite_ready @reading_recluse But it can't be used for brain surgery.

No, this is not a skills issue. It is based on profound misunderstanding. No they are not good search tools. No they are not good for research or learning, because they work only and entirely by *making stuff up* and if you're learning then you're not an expert and you can't tell true from false.

@lproven @dynamite_ready @reading_recluse

In my opinion, you are incorrect here; a user is always responsible for judging the truth of what they observe, especially with tools. There is no substitute for critical thinking. And there never will be.

Truth and social surroundings are infinitely more complex than analysing a game of chess.

@lproven @dynamite_ready @reading_recluse

LLMs do not make stuff up per se; they use data, including wrong data, and there lies the danger: they cannot referee what is right and what is wrong.

@xs4me2 @lproven @dynamite_ready @reading_recluse

What you're essentially suggesting here, is that LLMs are only good for consuming information if the user either already has the knowledge to judge output (in which case, why are they asking?) or spends time to verify the claims that the LLM makes (in which case, why bother asking the LLM?).

I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.

@ben @lproven @dynamite_ready @reading_recluse

I am suggesting that a competent user can indeed use tools in the right way, and only through in-depth knowledge of them. You can call that craftsmanship, experience, or simply domain knowledge.

That does not imply that tools, or LLMs, are useless, nor that they are without danger. A sharp chisel can cut off your finger. A poorly configured LLM can provide you with a load of nonsense...

@xs4me2 @ben @dynamite_ready @reading_recluse And I am disagreeing with that. I'm saying they are not appropriate for this stuff, whoever uses them and regardless of how they use them.

@lproven @ben @dynamite_ready @reading_recluse

Let us respectfully disagree then.

You are right in the sense that a lot can go wrong as I elaborated on!

Time will tell!

@ben @xs4me2 @dynamite_ready @reading_recluse No, that is not what I am suggesting at all.

You are trying to interpret my position on this through the lens of what *you* think they are good for.

@lproven @xs4me2 @dynamite_ready @reading_recluse I wasn't replying to you Liam! In fact, I largely agree with the viewpoint you've expressed.

@ben @xs4me2 @lproven @reading_recluse

Exactly that.

Much like what already happens with Google, or indeed, at the library, but in a far more dynamic way.

You don't need to look too far for examples of people settling on the first Google link, or cherry-picking news articles either.

I personally believe the main issues are the economic and environmental impact, intellectual property infringement, privacy, and the potential to erode critical thought.

These are huge though, obviously.