@gabrielesvelto I mean, it doesn't help that the bots are doing this bullshit: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-12-silence-in-open-source-a-reflection.html
This is clearly intended to trick humans.
I might have missed a chapter, but my interpretation is that someone prompted their LLM to generate this text and then posted it, no? The way I saw it narrated, the LLM reacted to the prompt "PR closed" by creating a blog post. But to do that, you need a human operator, no?
@kinou @Andres4NY @gabrielesvelto
Not necessarily; it just needs access to a blog-posting API and some training data that got it to auto-complete "I got my PR rejected because it was garbage" with "and then wrote a blog post about it".
A lot of people have provided that training data
@[email protected] We can't actually build AGI with LLMs - it's like trying to get to the moon by stacking ladders. We can, however, anthropomorphize them to a point where people think they're a valid voice in a conversation. "ChatGPT says..." - which is an incredibly charitable way to sum up a service that loosely paraphrases whatever webpage "I'm feeling lucky" would give you after filing off the serial numbers and reverse-engineering reasonable looking citations for it.
I do like the "this is a statistically ..." phrasing.
I have an intense dislike of active verbs being applied to LLM output. Yes, the program ran, and there is output, but there is zero intention behind it.
@gabrielesvelto We can try, but you're admonishing a species that talks to potted plants and holds one-sided conversations with washing machines.
It's gonna be a steep hill, is what I'm saying.
At least potted plants are living things.
And nobody tries to say a washing machine will magically birth AGI (as far as I know).
It's not the "talking to things" part that's madness. It's the belief that a machine that can match tokens and spit out some text that resembles a valid reply is a sign of true intelligence.
When I punch 5 * 5 into a calculator and hit =, I shouldn't ascribe the glowing 25 to any machine intelligence. It should be the same for LLM-powered genAI, but that "natural language" throws us off. Our brains aren't used to dealing with (often) coherent language generated by an unthinking statistical engine doing math on giant matrices.
oooorrrr...... "the clanker clanked out some text"!
"this document contains clanker-sourced text droppings"!
@gabrielesvelto this is true, it's really strange seeing non-tech people around me talk about LLMs as if they were sentient beings, it's kinda unsettling.
Still, we need to make sure there's no lack of accountability for the operators or users of these programs. With the whole story about the blog post generated autonomously using an LLM in response to a FOSS maintainer's AI policy, some people kinda forgot that there is a person responsible for setting it up that way and for letting the program loose.
AI has no fucking business in the workplace. These greedy fn' execs who have replaced their human workforce with AI are the SCUM of society. The parasitic leeches who are draining all our institutions for what little value they have left. I am trying to process a claim for a patient of ours. I have to call UHC to push the claim thru. I get a fucking BOT on the line that is asking me to input some information. I do so, the BOT then says it is transferring me to an advocate. When it "does"… THE LINE GOES DEAD! How is this more efficient? How is this better? How is this an improvement over just having me talk to a human to start with? IT ISN'T. AI fuckin' sucks, it has NO value and needs to be unplugged. I'm sick of this. I'm sick of all the cutesy names these evil corps give em #NoAi
I like the second example. The first, the bot version, still has the bot as the subject of an active verb, and still gives it personhood.
Someone in here posted a little while back about an acronym they'd worked out indicating that the human had opted to use machine-generated language instead of creating the whatever themselves. But I can't remember who, and I can't remember the acronym.
You're absolutely right, we have to stop giving these systems agency.
"Don't anthropomorphize LLMs. They really hate it when you do that." :)
yes. it's a lazy summarization algorithm, not even close to any kind of intelligence. helping the scammers profiting off it by making it sound intelligent or legit is just bad.