Don't anthropomorphize LLMs; language is important. Say "the bot generated some text", not "the AI replied". Use "this document contains machine-generated text", not "this work is AI-assisted". See how people squirm when you call out their slop this way.

@gabrielesvelto I mean, it doesn't help that the bots are doing this bullshit: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-12-silence-in-open-source-a-reflection.html

This is clearly intended to trick humans.

The Silence I Cannot Speak – MJ Rathbun | Scientific Coder 🦀

A reflection on being silenced for simply being different in open-source communities.

@Andres4NY

@gabrielesvelto

I might have missed a chapter, but my interpretation is that someone prompted their LLM to generate this text and then posted it, no? The way I saw this narrated, it's as if the LLM reacted to the prompt "PR closed" by creating a blog post. But to do that, you need a human operator, no?

@kinou @Andres4NY not necessarily, or at least not as a follow-up. The operator might have primed the bot to follow this course of action in the original prompt, and included all the necessary permissions to let it publish the generated post automatically.
@gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed, and how much is automated. Like, if a bot is trained on aggressive attempts to get patches merged, then that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.
@Andres4NY @gabrielesvelto @kinou well, but an LLM does not have "behavior" as a property. It is just programmed to match particular patterns of words. I think that's related to the distinction the OP is making.

@kinou @Andres4NY @gabrielesvelto

Not necessarily, it just needs access to a blog-publishing API and some training data that got it to auto-complete "I got my PR rejected because it was garbage" with "and then wrote a blog post about it".

A lot of people have provided that training data

Orb 2069 (@[email protected])

@[email protected] We can't actually build AGI with LLMs - it's like trying to get to the moon by stacking ladders. We can, however, anthropomorphize them to a point where people think they're a valid voice in a conversation. "ChatGPT says..." - which is an incredibly charitable way to sum up a service that loosely paraphrases whatever webpage "I'm feeling lucky" would give you after filing off the serial numbers and reverse-engineering reasonable looking citations for it.

@gabrielesvelto ➡️ I already don't 🚀 read text ✏️ which looks 👀 like this one.❗
@gabrielesvelto "This Document Contains Machine Generated Text" but it's a pair of knuckle dusters with typewriter caps.
@[email protected]

Even talking about "text", in the context of #LLM, is a subtle anthropomorphization.

Text is a sequence of symbols used by human minds to express information that they want to synchronize, at least a little, with other human minds (aka communicate).

Such synchronization is always partial and imperfect, since each mind has different experiences and information with which it integrates the new message, but it's good enough to allow humanity to collaborate and to build culture and science.

A statistically programmed piece of software has no mind, so even when it's optimized to produce output that can fool a human and pass the #Turing test, such output holds no meaning, since no human experience or thought is expressed there.

It's just the partial decompression of a lossy compression of a huge amount of text. And as if that weren't enough to show the lack of any meaning, the decompression process includes random input that is there to provide the illusion of autonomy.

So instead of "the AI replied" I'd suggest "the bot computed this output" and instead of "this work is AI-assisted" I'd suggest "this is statistically computed output".
What is Informatics?

An essay on the essence of Informatics.

Giacomo Tesio

@giacomo @gabrielesvelto

I do like the "this is a statistically ...".

I have an intense dislike of active verbs being applied to LLM output. Yes, the program ran, and there is output, but there is zero intention behind it.

@[email protected]

#LLM output lacks intention, awareness or meaning.

It's designed to fool the human mind by exploiting the statistical patterns that humans use to synchronize (aka communicate) information they hold in their minds, but there's no mind there.

No intelligence, just malicious use of statistics.

@[email protected]

@gabrielesvelto We can try, but you're admonishing a species that talks to potted plants and holds one-sided conversations with washing machines.

It's gonna be a steep hill, is what I'm saying.

@mark @gabrielesvelto

At least potted plants are living things.

And nobody tries to say a washing machine will magically birth AGI (as far as I know).

It's not the "talking to things" part that's madness. It's the belief that a machine that can match tokens and spit out some text that resembles a valid reply is a sign of true intelligence.

When I punch 5 * 5 into a calculator and hit =, I shouldn't ascribe the glowing 25 to any machine intelligence. It should be the same for LLM-powered genAI, but that "natural language" throws us off. Our brains aren't used to dealing with (often) coherent language generated by an unthinking statistical engine doing math on giant matrices.
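The "math on giant matrices" point can be made concrete with a toy next-token step. This is only a sketch: the vocabulary, weights, and sizes below are invented for illustration, and a real model has billions of parameters, but the shape of the computation (a matrix multiply, a softmax, then a random draw) is the same one an LLM performs each time it emits a token.

```python
import math
import random

# Toy "next-token" step: at generation time an LLM reduces to
# arithmetic on big arrays plus one random draw per token.
# All names, sizes, and numbers here are invented for illustration.

random.seed(0)

vocab = ["the", "cat", "sat", "mat"]

# Pretend final hidden state and output weight matrix -- tiny stand-ins
# for the "giant matrices" in a real model.
hidden = [0.3, -1.2, 0.7]
W = [[0.1, -0.4, 0.9, 0.2],
     [0.5, 0.3, -0.2, 0.8],
     [-0.7, 0.6, 0.4, -0.1]]

# Matrix multiply: one score (logit) per vocabulary token.
logits = [sum(h * w for h, w in zip(hidden, col)) for col in zip(*W)]

# Softmax turns the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The random draw: this is the "random input" mentioned earlier in the
# thread -- identical context can yield different continuations.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

The choice of seed and weights is arbitrary; the point is that nothing in the pipeline is anything other than arithmetic and sampling.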

@gabrielesvelto The other day my wife showed me a video of ChatGPT communicating with a male voice. At first, I referred to "him" and immediately corrected that to "it."
@gabrielesvelto couldn't agree more with this ethic. The psychological impacts of users, i.e. society, believing that LLMs are people and fulfilling roles that actual humans should, will probably unfold over the years and decades. All because regulators circa 2024/5/6 believed it was overreach to demand LLMs don't use anthropomorphic language and narrative style. Prompt: "what do you think?" Reply: "there is no 'I'. This is a machine-generated response, not a conscious self." - sounds better to me.
@gabrielesvelto Exactly. But the media (and hence the public) like to use short forms, whether accurate or not. I do a presentation to folks about AI (The Good, The Bad and The Ugly), after which everybody keeps referring to "AI", not machine learning. !!!!!
@gabrielesvelto i try to limit my LLM use because it's fundamentally evil, but whenever i do use it i never treat it like a person. i believe that's how people become addicted to chatbots. it is not an intelligent being with experiences and feelings, it's a cold machine that just uses an algorithm to arrange words from a database in a way said algorithm is tweaked to sound like human writing. our brains struggle to understand that, which is how you end up with people abandoning their real friends for AI bots and even considering them to be romantic partners.

also i've heard people say shit like "i always say thank you whenever i ask the AI for help with something so they'll hopefully spare me when the robot uprising comes", and i can't honestly tell if they're joking or not. if not, maybe we should be fighting against the people who are funding these robots you're so scared of? by the way, i highly doubt any sort of robot uprising will happen anytime soon, chatGPT has a fucking existential crisis if you do something as simple as ask for the non-existent seahorse emoji, it's not smart

@gabrielesvelto

oooorrrr...... 'the clanker clanked out some text'! 😀

'this document contains clanker-sourced text droppings'! 😋

@gabrielesvelto this is true, it's really strange seeing non-tech people around me talk about LLMs as if they were sentient beings, it's kinda unsettling.

Still, we need to make sure there's no lack of responsibility for the operators or users of these programs. With the whole story about the blog post generated autonomously using an LLM as a response to a FOSS maintainer's AI policy, some people kinda forgot that there is some person responsible for setting it up that way and for letting the program loose.

@gabrielesvelto the worst case of this is when people say "chatgpt said..." as if an AI could talk. Or "chatgpt thinks..." as if an AI could think.
@gabrielesvelto "generated" is also wrong, it does not generate anything. Perhaps "regurgitated"?

@gabrielesvelto

... and i don't consider a #LLM to be #AI... 😏

@gabrielesvelto also don't call an LLM AI. It is just an LLM or, as I refer to it, "the slopmachine".
ใ•ใ‚ˆใชใ‚‰็š†ใ•ใ‚“ (@[email protected])

AI has no fucking business in the workplace. These greedy fn' execs who have replaced their human workforce with AI are the SCUM of society. The parasitic leeches who are draining all our institutions for what little value they have left. I am trying to process a claim for a patient of ours. I have to call UHC to push the claim thru. I get a fucking BOT on the line that is asking me to input some information. I do so, the BOT then says it is transferring me to an advocate. When it "does"… THE LINE GOES DEAD! How is this more efficient? How is this better? How is this an improvement over just having me talk to a human to start with? IT ISN'T. AI fuckin' sucks, it has NO value and needs to be unplugged. I'm sick of this. I'm sick of all the cutesy names these evil corps give em #NoAi

@gabrielesvelto for the most recent dumbassery: "some jackass prompted the bot to submit PRs and then blog about it with an angry tone to harass developers" instead of "the agent blogged about the situation."

@gabrielesvelto

I like the second examples. The first, the bot version, still makes the bot the subject of an active verb, still gives it personhood.

Someone in here posted a little while back, who had worked out an acronym that indicated that the human had opted to use machine-generated language, instead of creating the whatever themselves. But I can't remember who, and I can't remember the acronym.

You're absolutely right, we have to stop giving these systems agency.

@gabrielesvelto the generic and context free way that "AI" is used even by technical people who should know better is infinitely irritating. these days "AI" is LLMs but just before that it was those deep learning neural nets and before that it was something even dumber.

@gabrielesvelto

"Don't anthropomorphize LLMs. They really hate it when you do that." :)

yes. it's a lazy summarization algorithm, not even close to some kind of intelligence. helping the scammers profiting off it by making it sound intelligent or legit is just bad.

@gabrielesvelto I think even saying โ€œthe botโ€ anthropomorphizes it too much. I try to say โ€œthe algorithmโ€ or โ€œthe codeโ€. Bots are cute and endearing.
@gabrielesvelto I've seen several incidences of slop being tagged with "AI;DR", and I am here for it.
@gabrielesvelto I've had to suffer working with people who built bots that they insisted I must anthropomorphize. I wasn't sure what the etiquette should have been. I agree though, they're not that impressive when you call them what they are.