What's going on here? The matplotlib maintainer this story is about correctly notes that all the quotes from his post in the article are made up.

UPDATE: Link was pulled; see below.

https://arstechnica.com/ai/2026/02/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name

UPDATE: They pulled the story, but I had it up and had SingleFile in my browser, so: https://mttaggart.neocities.org/ars-whoopsie
After a routine code rejection, an AI agent published a hit piece on someone by name

One developer is struggling with the social implications of a drive-by AI character attack.

Ars Technica

This scoop brought to you by the TTI Intel Feed, which also routinely beats commercial threat intel to the punch on important emerging threats.

https://intel.taggartinstitute.org/

· The Taggart Institute Intel Center

Putting this here so all can see it. Ars forum thread where the pull and investigation are mentioned: https://arstechnica.com/civis/threads/journalistic-standards.1511650/
Journalistic standards?

Hi folks, since Ars is apparently posting partially or fully AI-generated articles now, I have to ask: is this going to be a continued policy going forward? That is, will Ars be officially publishing AI-generated content from now on? If so, will it be marked? This is obviously pretty concerning.

Ars OpenForum
@mttaggart Locking the comments seems pretty... bad? I mean, one of their authors generated slop, and one of their editors approved slop. Is it more complicated or...? (I'm being a little flippant, but this is a terrible look for Ars already.)
@theorangetheme I think the lock might be SOP for a pulled story. And mods tend not to like rampant speculation.
@mttaggart @theorangetheme I'm genuinely confused about how this was allowed to happen. I tend to assume Ars has better editorial processes than some of the places I've worked, and both writers have long-term specialisations. My most charitable explanation is that someone created a version that they thought would be funny and that was accidentally published. Very curious to see what their investigation yields.
@aliide @mttaggart I think people either simply don't care anymore, or workers are under such corporate pressure to deliver, or to use AI (or both), that this is the natural endpoint. Probably a little of each (I know genuine enthusiasts, for whom I've lost all respect, but I also know people who are basically being forced to use AI by their corporate overlords). I don't know much, but I do know this: journalists don't run newspapers.
@theorangetheme @mttaggart oh, we very much don't, and I've even had editors insert mistakes into my stories before, without giving me readbacks. But this isn't a case of a single word or phrase; this is entire quotes being inserted. I don't buy into the idea that there is pressure on journos to use AI, though — the point of the profession is original work, which AI by definition cannot do.
@aliide @theorangetheme @mttaggart
You would think that your point about original work would apply to software engineering too (I think it does), but there seemingly are managers out there - or at least directors applying pressure - who want their engineers to use it.

@GerardThornley @theorangetheme @mttaggart

I think the pressure on SWEs is *considerably* higher given the institutional pressure to make AI work! I'm sure many engineers want to write original code too, though.

@GerardThornley @theorangetheme @mttaggart

A bigger problem with AI and journalism is probably the volume of writers covering it who don't seem to understand that the fearmongering is part of the hype. The "AI could take over the world" spiel is definitely perpetuated by companies and individuals that want it venerated as some out-of-control force beyond human comprehension. It's just another side of the AI hype coin.

@aliide @theorangetheme @mttaggart
Yeah, I do get a little frustrated with the big "Skynet" doom-scenario stories that, for the time being, are unlikely, when things like obfuscation of responsibility, (at least attempted) manipulation of populations, and drastic economic shifts are pretty much here already and certain to cause harm.
@GerardThornley @theorangetheme @mttaggart yes! As well as the problems and biases inherent in the training material, or in the ways that it's trained.
@aliide @theorangetheme @mttaggart right!? So the biases get embedded in their black box, and all they can say is "sorry, the computer says no", and no one can question it because no one really understands it.