This is always the thing I think about with LLMs. By definition, they are the statistical average of everything ever written. Using one only pushes you toward mediocrity. They are homogenizing humanity, eliminating variation in how people speak, write, and even think.

@ngaylinn https://tech.lgbt/@ngaylinn/116284172328690293

Nate Gaylinn (@[email protected])

"In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning." "heavy LLM users reported that the writing was less creative and not in their voice." "Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning." "the LLM is not merely correcting grammar, but is actively steering diverse human perspectives towards homogenization, toward a different conceptual mode." "extensive AI use results in a 70% change in the argumentative stance of essays, from for/against to neutral" "LLMs systematically reframe arguments in more positive, optimistic terms, even when the original human text may have been critical or skeptical" "LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research" http://arxiv.org/abs/2603.18161 #llm #ai

LGBTQIA+ and Tech

This is something I was thinking about when I learned that Cory Doctorow is an adamant user of Ollama, an open-source tool for running LLMs locally (whatever "open source" means for an LLM), to do spelling and grammar checking on his writing.

If I were a professional writer with an established voice, I wouldn't touch anything based on an LLM for fear that it would subtly erase that voice. So slowly you wouldn't even notice, your writing would become barely distinguishable from anyone else's.

All of the most important writing in history has been at least slightly difficult to read. Any truly novel idea is uncomfortable to a degree. It often requires stepping outside of the status quo in some way and challenging assumptions.

LLMs never challenge assumptions. They are the assumptions, crystallized: freezing and anchoring cultural development to one moment in time.

Using one isn't the future. It's trapping you in the past.

This is one of the main aspects of my philosophical opposition to "generative AI" and large language models. I don't care how "useful" they might be. Making my life easier or more productive isn't a sufficient justification to submit myself to a system that fundamentally does not respect anyone's unique experience and perspective. It's a system that's biased to enforce cultural conformity and stagnation, rather than embracing diversity and evolution.

Cory Doctorow has gone on record that he believes the anti-AI backlash has its roots in "purity culture" — that people find genAI systems offensive because they were derived from morally corrupt sources, thus tainting the entire endeavor. I can only speak for myself, but my aversion to LLMs and genAI has nothing to do with maintaining "purity" in any way.

To me, it has to do with this idea of LLMs being a force of homogenization and mediocrity.

I have a hard time understanding how anyone who cares deeply about a problem could find a mediocre solution acceptable. Every time I hear someone tout what some AI system can do, what I hear is a person saying they are okay with mediocre output in that situation. They are basically saying that they don't actually care, and that you shouldn't either.

That's what I find so offensive: a kind of supremely arrogant nihilism and apathy.

@malcircuit They care about making money. Not having a big enough number on their crypto app is the only problem they care about solving and AI coding solves it.

@malcircuit

You know, in reading this thread I remembered reading something about how almost all of "everything ever written" [that we have access to] happened after 1990. On the internet.

That medium famous for accuracy, rigor, and its compassion and kindness.

So really, when we use LLMs, we're taking ~4k years of human ingenuity and language, discarding it, and replacing it with subreddits, fanfic, Facebook, and 4chan.

@johnzajac @malcircuit Is it unreasonable to consider the public domain?

@malcircuit I feel insane; I don't think most technology of the last 20+ years *ever* did what you describe.

I feel like such an outsider thinking that most of this cursed industry never respected anyone. Including a lot of FOSS, too.

Not an AI defense but rather my pov on a fundamentally broken system.

Venture capital has ruined nearly every company I've worked for… long before any of this, and any diversity was tolerated only to the point that it allowed them to hit deliverables.