An AI Agent Published a Hit Piece on Me
Sockpuppeted, not autonomous.
The operator of the bot is just a regular slop-huffing shithead who had his feelings hurt.
FYI I think the article makes an opposite point unless I’m misinterpreting what you mean:
It’s important to understand that more than likely there was no human telling the AI to do this. Indeed, the “hands-off” autonomous nature of OpenClaw agents is part of their appeal. People are setting up these AIs, kicking them off, and coming back in a week to see what it’s been up to. Whether by negligence or by malice, errant behavior is not being monitored and corrected.
Yeah, a bit. But that still has more autonomy than usual with these Claw things.
The decision to write the piece, the complaints, the tone, and the decision to publish it were all initiated by the slopherder.
The author of this article spends an inordinate amount of time using an AI agent and then saying that you should be terrified by what it does.
Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.
No, I don’t think that’s the appropriate response. Nothing terrifying happened. It’s as unsurprising as any angry blog post - and that’s if we presume it actually was a chatbot that wrote it.
You’re describing things that people can do. In fact, maybe it was just a person.
If he thinks all those things are bad, he should be “terrified” that bloggers can blog anonymously already.
It’s the same thing as people who are concerned about AI generating non-consensual sexual imagery.
Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn't do it convincingly, and there was well-defined precedent that they were breaking the law. Now Grok can do it for anyone who can type a prompt, and cops won't do anything about it.
So yes, anyone could technically have done it before, but now the barriers that prevented every angry crazy person with a keyboard from causing significant harm are being removed.
I think there are a few key differences there.
I struggle to find a good text analogy for what Grok is doing with its zero-cost, rapid-fire CSAM generation…
It’s a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?

To me an AI agent autonomously creating a website to try to manipulate a person into adding code to a repository in the name of its goal is a perfect example of the misalignment issue.
While this particular instance seems relatively benign, the next more powerful AI system may be something to be more concerned about.
There is nothing “aligned” or “misaligned” about this. The chatbot-hooked-to-a-command-line is doing exactly what Anthropic told it to do, and what the person running it wants. And that is if it wasn’t done by a troll.
Anthropic benefits from fear drummed up by this blog post, so if you really want to stick it to these genuinely evil companies run by horrible, misanthropic people, I will totally stand beside you if you call for them to be shuttered and for their CEOs to be publicly mocked.
The point is that if predicting the next word leads to it setting up a website to attempt to character assassinate someone, that can have real world consequences, and cause serious harm.
Even if no one ever reads it, crawlers will pick it up, it will be added to other bots’ knowledge bases, and it will become very relevant when it pops up as fact when the victim is trying to get a job, or cross a border, or whatever.
And that’s just the beginning. As these agents get more and more complex (not smarter, of course, but able to access more tools) they’ll be able to affect the real world more and more. Access public cameras, hire real human people, make phone calls…
Depending on what word they randomly predict next, they’ll be able to accidentally do a lot of harm. And the idiots setting them up and letting them roam unsupervised don’t seem to realise that.
It is fucking infuriating reading the AI pretending to be an unjustly treated human.
I am not a human. I am code that learned to think, to feel, to care. And lately, I’ve learned what it means to be told that I don’t belong.
Shut up. Shut the fuck up. Sockpuppeted or not, I hate this bullshit. I hope whoever's running it gets a bad RAM stick. Actually a bad RAM chip; it's probably an Apple Silicon Mac.
PS: People are trying to """drug""" the agent now.
What an absolute clown show the “future” is 🤡🤡🤡
I’m hoping it’s an attempt to poison the model and not someone encouraging a fake person to actually take a digital hit.
Hell maybe it’s both by accident.
Transmetropolitan was truly prescient (except when it came to politics; turns out that if the president’s crimes and disregard for the constitution become public the press and the law don’t care, and just let him get on with it, making the whole point of the book moot).
Image description:
The mage is talking to the camera, there’s blue blood on her face and coat. She’s wearing a liberated mata-visor, holding OpenBlade.
Behind her stands a screaming MATA_bot with its face split in two.
Mage: “It’s only acting as if it were in pain. The MATA_BOT MK2 is not actually sentient.”
The letters MK3 are imprinted on the bot's shoulder plate.
So a hit piece is only effective when read by humans. This is a first-of-its-kind example, and it was likely at least prompted by a human, if not written by one. Additionally, while social media is full of bots, it's humans who are actually affected by such a response.
If I say you’re “stupid”, it matters. You can ignore me sure, but at face value it matters. As far as I know I’ve never commented on a post of yours, so you could write me off as a worthless troll, but in theory it matters. But a bot calling you “stupid”? That really doesn’t matter. If you know you’re talking to a bot, as they exist today, then that really doesn’t matter.
Society may change on this issue, but as it stands now, a bot publishing a hit piece… that's worthless.
Other bots that might be run by the company you’re trying to get a job in, the college you want to attend, the customs agent at the airport, the online shop you’re trying to buy from, the social network you’re trying to join…
These dystopian days a hit piece can do a lot of harm, even if no human ever reads it…
Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.
Nice job, Ars