Next time anyone claims they're using LLMs for "minor cleanup" or the like, send them this (from Google no less!)

"We find that even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning."

https://arxiv.org/abs/2603.18161

How LLMs Distort Our Written Language

Large language models (LLMs) are used by over a billion people globally, most often to assist with writing. In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning. First, we conduct a human user study to understand how people actually interact with LLMs when using them for writing. Our findings reveal that extensive LLM use led to a nearly 70% increase in essays that remained neutral in answering the topic question. Significantly more heavy LLM users reported that the writing was less creative and not in their voice. Next, using a dataset of human-written essays that was collected in 2021 before the widespread release of LLMs, we study how asking an LLM to revise the essay based on the human-written feedback in the dataset induces large changes in the resulting content and meaning. We find that even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning. We then examine LLM-generated text in the wild, specifically focusing on the 21% of AI-generated scientific peer reviews at a recent top AI conference. We find that LLM-generated reviews place significantly less weight on clarity and significance of the research, and assign scores that, on average, are a full point higher. These findings highlight a misalignment between the perceived benefit of AI use and an implicit, consistent effect on the semantics of human writing, motivating future work on how widespread AI writing will affect our cultural and scientific institutions.

arXiv.org
@delta_vee Well, you know, minor cleanup --- you spilled a whole bottle of Ketchup there, didn't you? And that sorting routine you use? Pfft!!
It's sad to see how many people rely on AI across social media. Sure, you get a nicely worded, semantically correct (?), 5000-word summary of what you had for dinner in your Fakebook post, but I'd rather have a creative human with all their flaws posting that "I done had beans fer dinner, but i ain't got no ham to go with em." Corporations stripped much of the "Americana" of the 20th century from our world in the USA, but now we're headed towards being a perfectly homogeneous (boring) society.

@delta_vee

I think all LLMs should primarily be trained on the works of R. Buckminster Fuller, especially 'Synergetics: Explorations in the Geometry of Thinking'. His manner of writing was so idiosyncratic that all "Artificial Text" would be instantly recognizable as such.
https://rwgrayprojects.com/synergetics/s00/p0000.html

R. Buckminster Fuller's SYNERGETICS