What's going on here? The matplotlib maintainer this story is about correctly notes that all the quotes from his post in the article are made up.
UPDATE: Link was pulled; see below.
This scoop brought to you by the TTI Intel Feed, which also routinely beats commercial threat intel to the punch on important emerging threats.
Hi folks, Since Ars is apparently posting partially or fully AI generated articles now, I have to ask - is this going to be a continued policy going forward? That is, will Ars be officially publishing AI generated content from now on? If so, will it be marked? This is obviously pretty concerning.
These were pulled too, but thank you again Wayback:
The final chapter? The statement from Ars:
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
Not quite the final chapter! Benj Edwards has taken responsibility in this Bluesky post:
https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
For those who won't head over there, a summary:
First, this happened while Edwards was sick with COVID. Second, Edwards says this was a new experiment using Claude Code to extract source material. Claude refused to process the blog post (because Shambaugh mentions harassment), so Edwards pasted the blog post text into ChatGPT, which is evidently the source of the fabricated quotes. Edwards takes full responsibility and apologizes, recognizing the irony of an AI reporter falling prey to this kind of mistake.

Sorry all this is my fault; and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick) I was told by management not to comment until they did. Here is my statement in images below https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
You'd hope that an AI reporter would know that you cannot trust an LLM to summarize or search for information, but apparently not.
Good. No quibbling, just taking responsibility with transparency.
@art_codesmith @tankgrrl @mttaggart they have enough information already to justify immediately yanking the article, so "we'll tell you next week" scans to me as "we need to figure out the PR angle on this" more than "we need to find out what happened".
Maybe their explanation will be a good one, but I'm not holding my breath.
@GerardThornley @theorangetheme @mttaggart
I think the pressure on SWEs is *considerably* higher given the institutional pressure to make AI work! I'm sure many engineers want to write original code too, though.
@GerardThornley @theorangetheme @mttaggart
A bigger problem with AI and journalism is probably the volume of writers who have written about it who don't seem to understand that the fearmongering about it is part of the hype. Like the "AI could take over the world" spiel is definitely perpetuated by companies and individuals that want it venerated as some out-of-control force beyond human comprehension. It's just another side of the AI hype coin.
may have to eat my words...
Seems like it very much was the consequence of writers using AI!
Edit: or potentially an editor; it would be good if they specified which. Either way, it slipped through the editorial process.
@mttaggart same Ars that let this article hit the front page years back?