This is always the thing I think about with LLMs. By definition, they are the statistical average of everything ever written. Using one only pushes you toward being mediocre. They are homogenizing humanity, eliminating any variation in how people speak, write, and even think.

@ngaylinn https://tech.lgbt/@ngaylinn/116284172328690293

Nate Gaylinn (@[email protected])

"In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning." "heavy LLM users reported that the writing was less creative and not in their voice." "Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning." "the LLM is not merely correcting grammar, but is actively steering diverse human perspectives towards homogenization, toward a different conceptual mode." "extensive AI use results in a 70% change in the argumentative stance of essays, from for/against to neutral" "LLMs systematically reframe arguments in more positive, optimistic terms, even when the original human text may have been critical or skeptical" "LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research" http://arxiv.org/abs/2603.18161 #llm #ai


This is something I was thinking about when I learned that Cory Doctorow is an adamant user of Ollama, an open-source tool for running LLMs locally, to do spelling and grammar checking on his writing.

If I were a professional writer with an established voice, I wouldn't touch anything based on an LLM for fear that it would subtly erase that voice. So slowly you wouldn't even notice, your writing would become barely distinguishable from anyone else's.

All of the most important writing in history has been at least slightly difficult to read. Any truly novel idea is uncomfortable to a degree. It often requires stepping outside of the status quo in some way and challenging assumptions.

LLMs never challenge assumptions. They are the assumptions crystallized — freezing and anchoring cultural development to one moment in time.

Using one isn't the future. It's trapping you in the past.

This is one of the main aspects of my philosophical opposition to "generative AI" and large language models. I don't care how "useful" they might be. Making my life easier or more productive isn't a sufficient justification to submit myself to a system that fundamentally does not respect anyone's unique experience and perspective. It's a system that's biased to enforce cultural conformity and stagnation, rather than embracing diversity and evolution.

@malcircuit

You know, in reading this thread I remembered reading something about how almost all of "everything ever written" [that we have access to] happened after 1990. On the internet.

That medium famous for its accuracy, rigor, compassion, and kindness.

So really, when we use LLMs we're taking ~4k years of human ingenuity and language, discarding it, and replacing it with subreddits, fanfic, Facebook and 4chan.

@johnzajac @malcircuit Is it unreasonable to consider the public domain?