| Science-Fiction/Fantasy Writing | https://angus.pw |
| Photography & More | https://raingod.com |
| Travel/Photo Blog | https://disoriented.net |
| Languages | en (native) / fr, it (fluent) / es, nl (some) |
It's also the case that the more untrustworthy LLM output becomes, the harder the people who have invested hundreds of billions in the tech will try to convince us that we must Trust the Superintelligent Machine That Knows Everything, and, indeed, to cut us off from competing knowledge sources. So we have that to look forward to.
Anyway, TL;DR: artificial gullibility is a problem that's only going to get worse, so brace yourselves.
/END
I once described the US as a complex distributed system with an attack surface of 300 million people. Gullible LLMs are a new vector for attacking that system, one that targets the weakest links in the chain: the people who don't know enough to distrust those handy-dandy "AI Overview" boxes in their favorite search engine.
7/
LLMs are essentially gullible. And many people, even otherwise smart people, are gullible enough to believe that "AI" distillations of facts are trustworthy. It's a problem of gullibility compounded. But there's also an entire industry that's devoted to trying to convince us NOT to be skeptical of AI, not to see it for what it is -- an often-naive statistical model that can and will increasingly be gamed by bad actors.
6/
For instance, US government websites have presumably long been regarded as reliable, and given additional weight. That's emphatically no longer warranted now that government sites are publishing propaganda, promoting pseudoscience, & suppressing or rewriting history.
Our deference to the presumed authority and impartiality of government communiqués or 'serious' news media is itself a problem, of course, but it's one that is multiplied a hundredfold by LLM regurgitation.
5/
Model training often weights certain sources as more authoritative than others, so volume isn't the only thing that counts, and that weighting is reflected in the model. But what happens when "authoritative" sources are themselves biased?
4/
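(Aside: real training pipelines are far more involved, but the weighting idea above can be sketched in a few lines. Everything here is an assumption for illustration -- the domain names, the weight values, and the sampling scheme are hypothetical, not any actual model's configuration.)

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical per-domain "authority" weights a corpus pipeline might assign.
# These domains and numbers are invented for illustration only.
AUTHORITY_WEIGHTS = {
    "gov.example": 5.0,    # historically treated as highly reliable
    "forum.example": 1.0,  # baseline weight for everything else
}

def sample_training_docs(docs, k, weights=AUTHORITY_WEIGHTS):
    """Draw k documents, with probability proportional to the
    authority weight of each document's source domain."""
    w = [weights.get(d["domain"], 1.0) for d in docs]
    return random.choices(docs, weights=w, k=k)

docs = [
    {"domain": "gov.example", "text": "official claim"},
    {"domain": "forum.example", "text": "dissenting claim"},
]
sample = sample_training_docs(docs, k=10_000)
gov_share = sum(d["domain"] == "gov.example" for d in sample) / len(sample)
# With 5:1 weights, roughly five-sixths of the sampled documents come from
# the "authoritative" domain -- so any bias in that source dominates what
# the model learns, which is the point of the post above.
```

The mechanics are mundane; the problem is the trust assumption baked into the weight table.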