This is always the thing I think about with LLMs. By definition, they are a statistical average of everything ever written. Using one only pushes you toward being mediocre. They are homogenizing humanity, flattening the variation in how people speak, write, and even think.

@ngaylinn https://tech.lgbt/@ngaylinn/116284172328690293

Nate Gaylinn (@[email protected])

"In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning." "heavy LLM users reported that the writing was less creative and not in their voice." "Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning." "the LLM is not merely correcting grammar, but is actively steering diverse human perspectives towards homogenization, toward a different conceptual mode." "extensive AI use results in a 70% change in the argumentative stance of essays, from for/against to neutral" "LLMs systematically reframe arguments in more positive, optimistic terms, even when the original human text may have been critical or skeptical" "LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research" http://arxiv.org/abs/2603.18161 #llm #ai

LGBTQIA+ and Tech

This is something I was thinking about when I learned that Cory Doctorow is an adamant user of Ollama, an open-source LLM runner (whatever that means), to do spelling and grammar checking on his writing.

If I were a professional writer with an established voice, I wouldn't touch anything based on an LLM for fear that it would subtly erase that voice. So slowly you wouldn't even notice, your writing would become barely distinguishable from anyone else's.

All of the most important writing in history has been at least slightly difficult to read. Any truly novel idea is uncomfortable to a degree. It often requires stepping outside of the status quo in some way and challenging assumptions.

LLMs never challenge assumptions. They are the assumptions crystallized: freezing and anchoring cultural development to one moment in time.

Using one isn't the future. It's trapping you in the past.

@malcircuit That's a very good point actually, I'll have to remember that argument.

Kind of ironic: I've seen videos where people "research" the unknown by feeding weird prompts to an LLM and arguing that they are soooo close to the next step toward a theory of everything, on the verge of new knowledge.
BS - they're just getting rubbish sentences out of a language model; there is no inherent understanding of the universe there.

Then again, there are people claiming that 1+1=3...

@malcircuit Found the video I meant, in case anyone is interested and/or wants to be entertained: it's "vibe physics" by Angela Collier on YT.

@chrizzly_astrocg
For sufficiently large values of 1, technically yes: 1+1 => 3
@malcircuit
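(The quip above is the classic rounding joke: a value like 1.4 displays as 1 once rounded, yet two of them sum to 2.8, which rounds to 3. A minimal Python sketch of it, my own illustration rather than anything from the thread:)

```python
# "For sufficiently large values of 1": take a value that rounds to 1...
a = b = 1.4
print(round(a), "+", round(b))   # each displays as 1
# ...but their true sum is 2.8, which rounds up to 3.
print(round(a + b))              # displays as 3
```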