The LLM discourse on the Fediverse has really irked me the last few days.

Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

@reading_recluse

LLMs are not an expression of speech or creativity; they simply digest, explore and reorder available information. They are a tool and can be useful for digesting and exploring information at great speed, but essentially they are no more than that.

For anything in opinion, creativity, art and commentary I will be looking at human expression, always.

The problem is that society will be confronted with loads of LLM nonsense and disinformation in due time. I'm seeing it online more and more.

@xs4me2 @reading_recluse

> can be useful to digest and explore information at great speed

Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.

@lproven @xs4me2 @reading_recluse

For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.

The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

That's a worthy advance.

@dynamite_ready @lproven @reading_recluse

It is the user and their skills indeed. A hammer can be used skillfully or wrongly...

@xs4me2 @dynamite_ready @reading_recluse But it can't be used for brain surgery.

No, this is not a skills issue. It is based on profound misunderstanding. No they are not good search tools. No they are not good for research or learning, because they work only and entirely by *making stuff up* and if you're learning then you're not an expert and you can't tell true from false.

@lproven @dynamite_ready @reading_recluse

In my opinion, you are incorrect here, and a user is always responsible for evaluating the assumed truth as they observe it, especially with tools. There is no substitute for critical thinking. And there never will be.

Truth and social surroundings are infinitely more complex than analyzing a game of chess.

@lproven @dynamite_ready @reading_recluse

LLMs do not make stuff up per se; they use data, including wrong data, and there lies the danger, along with the fact that they cannot judge what is right and what is wrong.