Ars Technica Fires Reporter Over AI-Generated Quotes
It’s not quite like that. The tools used to scrape the web for training data couldn’t access the site to collect the data, so it’s not encoded in the model.
The query interface for the model just hallucinates when there’s a ‘vacuum’.
It doesn’t say something like that specifically, because it isn’t an algorithm that receives X input and spits out Y output. It’s an algorithm that receives a query and spits out the most common word that comes after that query. If there isn’t a most common word that makes sense to a human, the AI doesn’t know that, so it still gives the most common word from its training set.
If the query is “juicy” it may output “melons”. If melons were not available in its training set it might output grapes or cherries, but if those weren’t available it might output “apple bottom jeans”, which would have made sense in 2003 but likely wouldn’t make sense to the average kid today who’s never heard of Juicy Couture.
It doesn’t understand anything. It can’t reason.
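To make the “most common next word” idea concrete, here’s a toy sketch (nothing like a real LLM, which uses learned probabilities over huge contexts, not raw bigram counts): greedy next-word selection from a bigram frequency table built over a tiny made-up corpus.

```python
from collections import Counter

# Tiny hypothetical corpus, just for illustration.
corpus = "juicy melons juicy melons juicy grapes juicy couture".split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_common_next(word):
    # Pick the word that most frequently followed `word` in training.
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    if not candidates:
        return None  # a "vacuum": nothing grounded to fall back on
    return max(candidates, key=candidates.get)

print(most_common_next("juicy"))  # "melons" dominates this toy corpus
```

The point of the toy: the function always emits whatever was most frequent, whether or not that continuation makes sense to a human, which is the failure mode described above.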
His blog has AI scraping protections enabled.
Tell me more.
theshamblog.com/an-ai-agent-published-a-hit-piece…
It’s all in here.
That’s just one explanation among many. A more reasonable guess is that the Ars writer went to his webpage, then asked an AI browser extension, which would have full access to all open tabs, to pull out quotes or something similar. LLMs find it hard not to change text, even when instructed not to.
There are more egregious examples of the author overestimating AI on the same blog post…
I think he’s more of a liability at this point.
Not caring whether something is true (“reckless disregard for the truth”) opens you up to libel lawsuits.
You can’t just publish made-up quotes in your reporting; that’s a virtual hit piece on someone’s reputation.