What's going on here? The matplotlib maintainer this story is about correctly notes that all the quotes attributed to him in the article are made up.

UPDATE: Link was pulled; see below.

https://arstechnica.com/ai/2026/02/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name

UPDATE: They pulled the story, but I had it up and had SingleFile in my browser, so: https://mttaggart.neocities.org/ars-whoopsie
After a routine code rejection, an AI agent published a hit piece on someone by name

One developer is struggling with the social implications of a drive-by AI character attack.

Ars Technica

This scoop brought to you by the TTI Intel Feed, which also routinely beats commercial threat intel to the punch on important emerging threats.

https://intel.taggartinstitute.org/

· The Taggart Institute Intel Center

Putting this here so all can see it. Ars forum thread where the pull and investigation are mentioned: https://arstechnica.com/civis/threads/journalistic-standards.1511650/
Journalistic standards?

Hi folks, Since Ars is apparently posting partially or fully AI generated articles now, I have to ask - is this going to be a continued policy going forward? That is, will Ars be officially publishing AI generated content from now on? If so, will it be marked? This is obviously pretty concerning.

Ars OpenForum
After a routine code rejection, an AI agent published a hit piece on someone by name

One developer is struggling with the social implications of a drive-by AI character attack. See full article...

Ars OpenForum

The final chapter? The statement from Ars:

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations

Editor’s Note: Retraction of article containing fabricated quotations

We are reinforcing our editorial standards following this incident.

Ars Technica

Not quite the final chapter! Benj Edwards has taken responsibility in this Bluesky post:

https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p

For those who won't head over there, a summary:

First, this happened while Edwards was sick with COVID. Second, Edwards claims this was a new experiment using Claude Code to extract source material. Claude refused to process the blog post (because Shambaugh mentions harassment). Edwards then took the blog post text and pasted it into ChatGPT, which evidently is the source of the fictitious quotes. Edwards takes full responsibility and apologizes, recognizing the irony of an AI reporter falling prey to this kind of mistake.

Benj Edwards (@benjedwards.com)

Sorry all this is my fault; and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick) I was told by management not to comment until they did. Here is my statement in images below https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

Bluesky Social

@mttaggart

You'd hope that an AI reporter would know that you cannot trust an LLM to summarize or search for information, but apparently not.

@mttaggart "Woopsie, I accidentally committed journalistic malpractice."

@mttaggart

Good. No quibbling, just taking responsibility with transparency.

@mttaggart Was the article about how good AI is?
@mttaggart Not "We are sorry for publishing AI slop", just "the quotes should have been verified"? (Edit: it was pointed out to me that if I read the article, the apology was actually for an AI article, not just the quotations. Thanks @mttaggart )
@Retreival9096 There's an apology in the linked post.
@mttaggart I feel like "the author in question won’t work with ars anymore" would have been a better answer, tbh. Yes this might happen, but really… 🙄
@mttaggart
this also sounds ai generated.
might just be the needlessly official style they were going for.
@mttaggart Fuck! I thought that was real
@staringatclouds The event is real. The quotes in the story were hallucinated.

@mttaggart

We need some time to 'investigate' whether we are using AI or not. We plumb forgot.

LOL

@tankgrrl @mttaggart I mean, I assume that's what an internal investigation was about?
They probably want to properly call the author and ask them if they used AI or not, what were their sources, etc.
I don't think it's fair to mock them for wanting to conclude an investigation.

@art_codesmith @tankgrrl @mttaggart they have enough information already to justify immediately yanking the article, so "we'll tell you next week" scans to me as "we need to figure out the PR angle on this" more than "we need to find out what happened".

Maybe their explanation will be a good one, but I'm not holding my breath.

@SnoopJ @art_codesmith @tankgrrl @mttaggart I'm waiting to see what happens in a few days to judge. It's clear the quotes are fake and they acknowledged that, but I can see it taking a few days to identify *how* this happened, and how it made it through editorial. I'm worried though, and I don't know if their answer next week is going to satisfy me.
@mttaggart Locking the comments seems pretty... bad? I mean, one of their authors generated slop, and one of their editors approved slop. Is it more complicated or...? (I'm being a little flippant, but this is a terrible look for Ars already.)
@theorangetheme I think the lock might be SOP for a pulled story. And mods tend not to like rampant speculation.
@mttaggart @theorangetheme I'm genuinely confused about how this was allowed to happen. I tend to assume Ars has better editorial processes than some of the places I've worked, and both writers have long-term specialisations. My most charitable explanation is that someone created a version that they thought would be funny and that was accidentally published. Very curious to see what their investigation yields.
@aliide @mttaggart I think people either simply don't care anymore, or workers are under such corporate pressure to either deliver or use AI (or both), and this is the natural endpoint of that. Probably a little of both (I know genuine enthusiasts, for whom I've lost all respect, but I also know people who are basically being forced to use AI by their corporate overlords). I don't know much, but I do know this: journalists don't run newspapers.
@theorangetheme @mttaggart oh, we very much don't, and I've even had editors insert mistakes into my stories before, without giving me readbacks. But this isn't the case of a single word or phrase, this is entire quotes being inserted. I don't buy into the idea that there is pressure on journos to use AI though — the point of the profession is original work, which AI by definition cannot do.
@aliide @mttaggart I hope you're right. I'm trying not to be too cynical, but it's hard lately.
@aliide @mttaggart Thank you for the work that you do, by the way. Good journalism is a treasure, and sorely needed.
@aliide @theorangetheme @mttaggart
You would think that your point about original work would apply to software engineering too (I think it does), but there seemingly are managers out there - or at least directors applying pressure - who want their engineers to use it.

@GerardThornley @theorangetheme @mttaggart

I think the pressure on SWEs is *considerably* higher given the institutional pressure to make AI work! I'm sure many engineers want to write original code too, though.

@GerardThornley @theorangetheme @mttaggart

A bigger problem with AI and journalism is probably the volume of writers who have written about it who don't seem to understand that the fearmongering about it is part of the hype. Like the "AI could take over the world" spiel is definitely perpetuated by companies and individuals that want it venerated as some out-of-control force beyond human comprehension. It's just another side of the AI hype coin.

@aliide @theorangetheme @mttaggart
Yeah, I do get a little frustrated with the big "skynet" doom scenario stories that for the time being are unlikely, when things like obfuscation of responsibility, (at least attempted) manipulation of populations, and drastic economic shifts are pretty much here already and certain to cause harm.
@GerardThornley @theorangetheme @mttaggart yes! as well as the problems/biases inherent in the training material or in the ways that it's trained
@aliide @theorangetheme @mttaggart right!? So the biases get embedded in their black box, and all they can say is "sorry, the computer says no", and no-one can question it because no-one really understands it.

@theorangetheme @mttaggart

may have to eat my words...

@aliide @mttaggart *pulls up a chair* Pass the salt. 😔

@mttaggart

Seems like it very much was the consequence of writers using AI ..!

Edit: or potentially an editor, would be good if they specified which — and either way, it slipped through the editorial process.

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations

#tech #ai #technews #slop #journalism #media

@mttaggart if the authors unilaterally did this, they're so fired.
Twitter safety chief resigns after Musk criticizes decision to restrict film

Ella Irwin is second trust and safety chief to quit since Musk bought Twitter.

Ars Technica