Angus McIntyre

@angusm
1.8K Followers
148 Following
4.1K Posts
I play with words, cameras & computers
Science-Fiction/Fantasy Writing: https://angus.pw
Photography & More: https://raingod.com
Travel/Photo Blog: https://disoriented.net
Languages: en (native) / fr, it (fluent) / es, nl (some)

It's also the case that the more untrustworthy LLM output becomes, the harder the people who have invested hundreds of billions in the tech will try to convince us that we must Trust the Superintelligent Machine That Knows Everything, and, indeed, to cut us off from competing knowledge sources. So we have that to look forward to.

Anyway, TL;DR: artificial gullibility is a problem that's only going to get worse, so brace yourselves.

/END

I once described the US as a complex distributed system with an attack surface of 300 million people. Gullible LLMs are a new vector for attacking that system, one that targets the weakest links in the chain: the people who don't know enough to distrust those handy-dandy “AI Overview” boxes in their favorite search engine.

7/

LLMs are essentially gullible. And many people, even otherwise smart people, are gullible enough to believe that "AI" distillations of facts are trustworthy. It's a problem of gullibility compounded. But there's also an entire industry that's devoted to trying to convince us NOT to be skeptical of AI, not to see it for what it is -- an often-naive statistical model that can and will increasingly be gamed by bad actors.

6/

For instance, US government websites have long been regarded as reliable, and presumably given additional weight. That's emphatically no longer safe now that government sites are publishing propaganda, promoting pseudoscience, & suppressing or rewriting history.

Our deference to the presumed authority and impartiality of government communiqués or ‘serious’ news media is itself a problem, of course, but it's one that is multiplied a hundredfold by LLM regurgitation.

5/

Model training often weights certain sources as more authoritative than others, so volume isn't the only thing that counts, and that weighting is reflected in the model. But what happens when “authoritative” sources are themselves biased?
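The weighting idea can be sketched as a toy scoring loop. The source names, claims, and trust weights below are all invented for illustration; real training pipelines are vastly more complex, but the failure mode is the same:

```python
from collections import Counter

# Toy sketch of source weighting: each claim's score is its repetition count
# scaled by a per-source trust weight. Sources and weights are hypothetical.
SOURCE_WEIGHT = {"official_site": 5.0, "random_blog": 1.0}

documents = [
    ("official_site", "claim A"),   # one "authoritative" publication
    ("random_blog", "claim B"),     # three low-weight repetitions
    ("random_blog", "claim B"),
    ("random_blog", "claim B"),
]

scores = Counter()
for source, claim in documents:
    scores[claim] += SOURCE_WEIGHT[source]

# The heavily weighted claim beats the merely repeated one...
print(scores.most_common(1)[0][0])  # -> claim A
# ...which is exactly the problem if the "authoritative" source is biased.
```

If the trusted source lies, the weighting that was meant to protect against noise instead amplifies the lie.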

4/

Imagine the opportunities for people pushing pseudoscience like Creationism or vaccine denial, or political propaganda, or corporate FUD.

In some ways, it's an extension of conventional SEO, which has always aimed to "put your story first", but now the untruths are delivered with the authority of "AI" (argumentum ab roboto), not just on search results pages, but in any other context where a naive user interacts with an LLM, e.g. with a chatbot.

3/

Whatever the hypesters may tell you, LLMs do NOT reason. Given two conflicting versions of a story, they’ll go for the one that is repeated more often. The sequence of tokens representing a false narrative is – if the astroturfers have done their job right – statistically more probable than the sequence representing a factual account, so it's the false narrative that will get coded into the model and trotted out on demand.
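A toy bigram model makes that mechanism concrete. The corpus below is invented for illustration, and a real LLM is enormously larger, but greedy decoding over counts shows the same failure: the more-repeated continuation wins.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-token frequencies per token (a toy stand-in for LLM training)."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            model[a][b] += 1
    return model

def most_likely_next(model, token):
    """Greedy decoding: emit the statistically most frequent continuation."""
    return model[token].most_common(1)[0][0]

# One factual account vs. an astroturfed narrative repeated three times.
corpus = ["the professor was found guilty"] + ["the professor was fully exonerated"] * 3
model = train_bigram(corpus)
print(most_likely_next(model, "was"))  # -> fully
```

No reasoning, no fact-checking: the repeated sequence is simply more probable, so it's the one that gets emitted.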

2/

A fresh problem with #AI is what might be called Artificial Gullibility.

According to a BlueSky poster, an academic who was found guilty of plagiarism has waged an extensive astroturfing campaign to rewrite the record. The goal was probably to game conventional search engines, but the texts have now been ingested by Google's AI. Google's “AI Overview” presents her (apparently false) version of events, backing it with the supposed authority of Google and “AI”.

1/

https://bsky.app/profile/laurenginsberg.bsky.social/post/3mhnxv2swok2g

Lauren Donovan Ginsberg (@laurenginsberg.bsky.social)

The return of ReceptioGate to the news is a useful moment to think about the role AI is having in creating truth for a lot of internet users. I posted this update - the clear plagiarism verdict against Rossi - on another platform… /1

Bluesky Social
To judge by the weight of the new MacBook I’ve just been issued by my employer, Apple have switched from titanium to using depleted uranium for their laptop bodies.
ACOUP is grimly pessimistic about the Iran war: "it is not possible for two sides to both win a war. But it is absolutely possible for both sides to lose; mutual ruin is an option." https://acoup.blog/2026/03/25/miscellanea-the-war-in-iran/
Miscellanea: The War in Iran

This post is a set of my observations on the current war in Iran and my thoughts on the broader strategic implications. I am not, of course, an expert on the region nor do I have access to any spec…

A Collection of Unmitigated Pedantry