Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes

https://lemmy.world/post/43790658

I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.

It’s time to hold journalism to a higher standard, and this idea that “well they do alright” and “it was only once” is bullshit sliding into madness.

Just the facts, folks.

The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.

This person was simply fired because they didn’t catch the false information, not because they used the tools forced upon them.

Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a pretty crucial task and ability for an actual journalist though. It is probably one of the most important parts of their job to verify the correctness of their sources and what they write.

Then maybe they shouldn’t be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is funny because they sued all the AI companies for stealing content.

Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats. From developers, to authors.

Then maybe they shouldn’t be using these tools in the first place

I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility because they’re instructed to use LLM tools.

You’re absolutely correct. But the problem is bigger than the rogue journalist. Separation of duties is a well known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). It is necessary but not sufficient to hold just the journalist who used AI responsible for the publication of false statements.

The problem here is you are both characterizing Ars as you would other companies that have these AI mandates. Ars is the opposite, they have a mandate NOT to use AI.

While I agree a separation of responsibilities is important, they had two coauthors for exactly that reason. One trusted the other for the references, not knowing that they used AI.

Either way, the initial comment is certainly not “absolutely correct” when it comes to Ars.

I don’t work at Ars, and maybe you know something I don’t, but I have seen nothing to suggest that they’re one of the companies doing that. It seems like they are pretty open about how they do not allow AI to be used in the process. Have they said something to indicate otherwise and I just missed it?
Absolutely not. Ars has a no AI policy, it’s the exact opposite. Guessing you are a nice little bot.

A fucking moron who runs around calling everything a bot when you disagree with whatever the topic is.

It’s the new CyberTruck of online insecurity.

Hope that’s “good” for you.

Main character moment.

and “it was only once” is bullshit

They checked and then fired the author. I don’t see how this is “it was only once” implying nothing changed and it will happen again. Isn’t firing the author “holding journalism to a higher standard” already, which you ask for?

Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It’s fair that the person responsible for this was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality control processes around the author been fired?
In what world would independent fact checking down to the level of individual quotes be feasible for an online magazine? You can’t be serious.
That’s part of the cost of AI that the AI companies leave to their customers. There is a tradeoff and we know from a long history of for-profit corporate behaviour that they will generally prefer lower short term cost, despite consequent risk and harm. But if the companies that sell AI services don’t take care to ensure the outputs are true and the companies that use AI don’t take care then that leaves the ultimate customer/consumer to fact check everything. That or simply be oblivious or stop trusting anything. The problem is made worse by the fact that most companies won’t disclose their use of AI, because of the adverse impact on their reputation, unless they are compelled to do so. So far, I don’t see any legislation to compel disclosure.
That used to be the standard…

I highly doubt that. How would that even work? A third party to the publisher would have to check every statement before the issue goes to print. I can’t imagine this happening for anything that is not research papers or official reports.

But I’m happy to learn something new.

This can and should be done internally. Why would it need to be a third party? Any publisher that cares about their reputation anyway. Fact-checkers are a real thing. They routinely follow up on interviews to make sure authors aren’t bullshitting.
Of course, but the OP said independent.
I read that as: someone other than the author.
Key is, used to be. Ars Technica is one of the best such magazines out there, but even their margins have to be razor thin. To stay at the top of Google search results you have to update super frequently. (Source: this Metafilter post: metafilter.com/…/Ars-Technica-Pulls-AI-Article-Wi…)
AI - damned if you do and damned if you don’t. And it’s not just journalism affected.
I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever, but across writing, programming, etc., the average productivity gain is typically negligible for achieving work of similar quality with or without LLMs.

It is useful in some specific fields like protein folding:

www.nature.com/articles/s41586-021-03819-2

The problem is people think it can replace people which is wrong, it is a tool and should be used as such not as a replacement.

Those aren’t LLMs.
Oh, you’re right, my mistake. I guess unit testing and debugging are useful. I did use Copilot to find a missing slash. Also useful for revising emails and paragraphs, though of course you have to review it. It also should never be used for scientific research and journalism.

Or, you know, double-check that the quotes given to you by the experimental AI “quote extractor” tool are accurate?

He is (was) their go-to AI reporter. It’s not like they handed the assignment to an intern and said “go nuts.”

And the article was about AI fabricating an attack on a developer that rejected its PR.

The whole point of using AI is that it’s a search tool, and that is the verification.

Otherwise there’s no point in using it.

And you can guarantee Conde Nast demands journalists use AI all the time.

Controversy… What controversy? It sounds more like blatant journalistic malpractice
A few years ago, blatant journalistic malpractice was a controversy.
That’s why he was fired
The article says “controversy” as if this is some cancel culture crap.
When I suggested he be fired on another thread I received several responses saying “he made a mistake” and “he was sick”, and many downvotes in return.

Amazing. Just great.

Imagine being confronted for lying and just going “hey, it was an accident, okay? I didn’t MEAN to deceive people, I just used the machine known for deceiving people, willingly put my name on its deceptions, and it deceived people!” and having people defend you.

Actually, he completely admitted to and took full responsibility for his mistake; at no point did he offer an excuse, only an explanation.

To the extent I was defending him, it was because people insisted on painting him in the worst possible light, and on misinterpreting his explanation as an excuse, not because I think that everything that he did was okay.

You do have a point, after reading the article. That’s a bit embarrassing, honestly.

The comments here around this were so… Off. I guess nothing was certain, but we were supposed to believe that the author was too sick to write an article, but also writing an article and using an AI “tool” at the same time.

Hindsight is 20/20, but popular defenses at the time were

He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this.

You know that the writer himself is quoted in the OP article, right?

Yes…? I saw his comments weeks ago, and smelled something off about the… and apparently Ars determined they were lacking.

And now, “Edwards said he was unable to comment at this time.”

If he had Covid, then why was he working?

Sick time/PTO is a treasured resource here in the US. You don’t waste what little you might have on a silly thing like covid…

/s

I was the one who wrote that comment, and it was not an attempt to excuse all of his actions but a response to the following comment:

Someone deserves to be fired. Just imagine you’re paying someone to do a job and they just 100% completely outsource it to a machine in 5 seconds and then goes home.

Here is the full comment that I wrote, including the part you snipped off at the end:

He wrote the article himself, he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his note to make sure that he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but this would have cost him vacation time, and the fact that he even was forced into making this choice is a systemic problem that is not being sufficiently acknowledged.


I did not downvote you—my instance does not allow or show downvotes, which is really nice!—but he was sick, and he did make a mistake, and him being fired does not make either of those things false.

Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.

but he was sick, and he did make a mistake, and him being fired does not make either of those things false.

No, but those things also do not excuse his actions, which is why I said he should be, and ultimately was, fired. And I think that’s a positive thing.

Also, a ton of people were piling on him in that thread, so you had plenty of company in calling him to be fired.

The point is, plenty of people were downvoting me and defending him (such as yourself), which is what made it “controversial”. I was explaining this to the person who was confused as to why it was controversial.

I agree that these things do not excuse his actions, but there was a tendency in that thread to paint him in the worst possible light, which I felt was uncalled for.

I am sad to have seen him be fired from Ars because I think there were mitigating circumstances—it is troubling that he felt the need to work while sick!—but on the other hand, given how badly he violated the trust placed in him, it is hard to see how Ars could have made any other choice.

Moreso than violating the trust placed in him is violating the trust readers put into the Ars publication.
I agree, that is a better way of putting it.
As they should
Why are we blaming AI here instead of the journalist?
I mean they fired the guy, and the guy took full responsibility for the errors. If that’s not blaming the journalist, I don’t know what is.
Tbf, I didn’t read the article. But the title mentions “controversy.” Also, are people so lazy they can’t make up their own fake quotes? Was AI really needed here?
Are people so lazy they can’t even bother to read the headline? Maybe an AI would’ve been useful here to generate its own defense.
Being too lazy to read is one thing, not being too lazy to then comment is a whole other kind of existence.

Tbf, I didn’t read the article. But the title…

Say no more. Please

Obviously the use of an LLM was a terrible decision, but I think in this context we can also blame some countries’ lack of sick pay.
Whoa. There are actually consequences? ArsTechnica is actually sorry??

No, the worker was fired, and the executive whose job is making sure that submitted work is correct was not fired.

The executives will get a bonus this year.

The executives will get a bonus this year.

Well of course! They just saved a lot of money on wages, they deserve it!