Leo Ré Jorge

@LeoRJorge
38 Followers
222 Following
566 Posts

Next time anyone claims they're using LLMs for "minor cleanup" or the like, send them this (from Google no less!)

"We find that even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning."

https://arxiv.org/abs/2603.18161

How LLMs Distort Our Written Language

Large language models (LLMs) are used by over a billion people globally, most often to assist with writing. In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning. First, we conduct a human user study to understand how people actually interact with LLMs when using them for writing. Our findings reveal that extensive LLM use led to a nearly 70% increase in essays that remained neutral in answering the topic question. Significantly more heavy LLM users reported that the writing was less creative and not in their voice. Next, using a dataset of human-written essays that was collected in 2021 before the widespread release of LLMs, we study how asking an LLM to revise the essay based on the human-written feedback in the dataset induces large changes in the resulting content and meaning. We find that even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning. We then examine LLM-generated text in the wild, specifically focusing on the 21% of AI-generated scientific peer reviews at a recent top AI conference. We find that LLM-generated reviews place significantly less weight on clarity and significance of the research, and assign scores that, on average, are a full point higher. These findings highlight a misalignment between the perceived benefit of AI use and an implicit, consistent effect on the semantics of human writing, motivating future work on how widespread AI writing will affect our cultural and scientific institutions.

arXiv.org

A fresh problem with #AI is what might be called Artificial Gullibility.

According to a Bluesky poster, an academic who was found guilty of plagiarism has waged an extensive astroturfing campaign to rewrite the record. The goal was probably to game conventional search engines, but the texts have now been ingested by Google's AI. Google's "AI Overview" presents her (apparently false) version of events, backing it with the supposed authority of Google and "AI".

1/

https://bsky.app/profile/laurenginsberg.bsky.social/post/3mhnxv2swok2g

Lauren Donovan Ginsberg (@laurenginsberg.bsky.social)

The return of ReceptioGate to the news is a useful moment to think about the role AI is having in creating truth for a lot of internet users. I posted this update - the clear plagiarism verdict against Rossi - on another platform… /1

Bluesky Social

NEW ANALYSIS: India's CO2 emissions in 2025 grew at slowest rate in more than two decades

🏭power CO2 down 3.8%
☀️record clean-energy growth
🚗oil demand only up 0.4%
🏗️steel/cement up 8%/10%

By CREA for CB

https://www.carbonbrief.org/analysis-indias-co2-emissions-in-2025-grew-at-slowest-rate-in-two-decades/

There's this myth that automated spam detection is hard because spammers are all very clever masters of disguise.

No. Spammers are stupid as a shoe. They have dog shit for brains.

Automated spam detection is hard because the line between spam and "legitimate" marketing activity is a fiction.

You can't make this shit up. The double standard is unreal.

“We briefly had a Library of Alexandria and then fed it into a paper shredder so advertisers could sell a random mash of pulp back to us at a premium.”

🎓 @ilovecomputers

@GillesColling
Re: independence: it's high time to be wary of the dominant position Posit occupies, and its direction of travel.

#rstats

It's clear that AI assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc, as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

A man used LLMs to generate hundreds of thousands of "songs", then used bots to stream them billions of times, collecting $8m in royalties. https://www.justice.gov/usao-sdny/pr/north-carolina-man-pleads-guilty-music-streaming-fraud-aided-artificial-intelligence-0 Is there a better metaphor for late-stage capitalism than burning resources to make songs that are never listened to, then streaming them to robots that will never hear them, ad infinitum?

29 years of #rstats community knowledge was sitting in hard-to-search pipermail archives. So I built a more modern home for it.

Introducing the R Mailing List Archives: 631,000+ messages from 32 lists, fully searchable and available as open data.

https://r-mailing-lists.thecoatlessprofessor.com/