I'm consistently amazed that research and journalism on this subject never ask about the public's perception of the accuracy of these techniques. The actual accuracy is irrelevant if the techniques are widely believed to reveal the truth.

Fake news doesn't actually have to be fake; it only has to be perceived as true. Imagine a politician claiming that some AI identified their rival as the person behind a notorious, pseudonymous Internet troll. The claim could be total bullshit, thrown out purely so that it has to be refuted. If the tools are merely perceived to be accurate, refutation becomes much harder.

This may matter less as the techniques become more accurate, since results could then be independently replicated. Still, it's pretty important while we're transitioning to that future, and it stays important for as long as these techniques remain imperfect yet applied at scale. Even a tiny error rate produces enormous numbers of false positives against a huge population, and public perception regularly fails to understand or account for this.
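To make the base-rate point concrete, here's a back-of-the-envelope sketch. All of the numbers are hypothetical, chosen only to illustrate how a seemingly impressive accuracy behaves when there is exactly one real target hiding in a large scanned population:

```python
# Base-rate illustration (all numbers hypothetical).
population = 10_000_000      # pseudonymous accounts scanned
true_matches = 1             # the one troll actually in the dataset
false_positive_rate = 0.001  # tool wrongly flags 0.1% of innocent accounts
true_positive_rate = 0.99    # tool flags the real author 99% of the time

expected_false_positives = (population - true_matches) * false_positive_rate
expected_true_positives = true_matches * true_positive_rate

# Probability that any given flagged account is actually the troll:
precision = expected_true_positives / (
    expected_true_positives + expected_false_positives
)

print(f"Expected false positives: {expected_false_positives:,.0f}")
print(f"Chance a flagged account is the real author: {precision:.6f}")
```

With these made-up parameters, a "99% accurate" tool flags roughly ten thousand innocent accounts for every real one, so any individual accusation it produces is almost certainly wrong. That gap between headline accuracy and per-accusation reliability is exactly what public perception tends to miss.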

https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/

LLMs can unmask pseudonymous users at scale with surprising accuracy

Pseudonymity has never been perfect for preserving privacy. Soon it may be pointless.

Ars Technica