A fresh problem with #AI is what might be called Artificial Gullibility.

According to a Bluesky poster, an academic who was found guilty of plagiarism has waged an extensive astroturfing campaign to rewrite the record. The goal was probably to game conventional search engines, but the texts have now been ingested by Google's AI. Google's "AI Overview" presents her (apparently false) version of events, backing it with the supposed authority of Google and "AI".

1/

https://bsky.app/profile/laurenginsberg.bsky.social/post/3mhnxv2swok2g

Lauren Donovan Ginsberg (@laurenginsberg.bsky.social)

The return of ReceptioGate to the news is a useful moment to think about the role AI is having in creating truth for a lot of internet users. I posted this update - the clear plagiarism verdict against Rossi - on another platform… /1

Whatever the hypesters may tell you, LLMs do NOT reason. Given two conflicting versions of a story, they'll go for the one that is repeated more often. If the astroturfers have done their job right, the sequence of tokens representing the false narrative is statistically more probable than the sequence representing the factual account, so it's the false narrative that gets encoded in the model and trotted out on demand.
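
To make the statistics point concrete, here is a minimal sketch of the idea (a toy bigram counter, nothing like a real transformer; the corpus strings are invented for illustration): a narrative repeated nine times beats a factual account that appears once, purely on frequency.

    # Toy bigram "language model": count word pairs, then always
    # predict the most frequent next word. Repetition, not truth,
    # decides the output. All data below is invented.
    from collections import Counter, defaultdict

    corpus = (
        ["the professor was cleared of plagiarism"] * 9      # astroturfed version
        + ["the professor was found guilty of plagiarism"]   # factual account
    )

    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    for doc in corpus:
        words = doc.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    # Greedy "generation": repeatedly pick the most frequent next word.
    def continue_from(word, steps=5):
        out = [word]
        for _ in range(steps):
            if word not in counts:
                break
            word = counts[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(continue_from("professor"))
    # -> "professor was cleared of plagiarism": the 9-to-1 repetition,
    #    not the truth value, picks the continuation.

This is only a caricature of next-token statistics, but the frequency effect it shows is the same one the astroturfing exploits.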

2/

@angusm Does this mean that the training is based on statistics? So could an AI be trained on a training set with only one example of each target case?