What's going on here? The matplotlib maintainer this story is about correctly notes that all the quotes from his post in the article are made up.
UPDATE: Link was pulled; see below.
This scoop brought to you by the TTI Intel Feed, which also routinely beats commercial threat intel to the punch on important emerging threats.
Hi folks, since Ars is apparently posting partially or fully AI-generated articles now, I have to ask: is this going to be a continued policy going forward? That is, will Ars be officially publishing AI-generated content from now on? If so, will it be marked? This is obviously pretty concerning.
These were pulled too, but thank you again Wayback:
The final chapter? The statement from Ars:
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
Not quite the final chapter! Benj Edwards has taken responsibility in this Bluesky post:
https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
For those who won't head over there, a summary:
First, this happened while Edwards was sick with COVID. Second, Edwards says this was a new experiment using Claude Code to extract source material. Claude refused to process the blog post (because Shambaugh mentions harassment), so Edwards pasted the blog post text into ChatGPT, which evidently is the source of the fictitious quotes. Edwards takes full responsibility and apologizes, acknowledging the irony of an AI reporter falling prey to this kind of mistake.

Sorry all this is my fault; and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick) I was told by management not to comment until they did. Here is my statement in images below https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
You'd hope that an AI reporter would know that you cannot trust an LLM to summarize or search for information, but apparently not.