LLMs can unmask pseudonymous users at scale with surprising accuracy
Pseudonymity has never been perfect for preserving privacy. Soon it may be pointless.
https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/
@arstechnica That's actually old news: the same was reported back in 2008, using social graphs alone. Oh, I see: the article cites it. So this is cold tea served with an AI flavor?
@taschenorakel @arstechnica I don't think this is anything new. AI or not, people should know better than to publish details of their personal lives online.
@taschenorakel @arstechnica Yes, obviously, LLMs can't magically find out who people are, so if they're able to unmask pseudonyms, the necessary data is already out there. The main thing here is that the data becomes far more accessible: people no longer need the know-how, they just need to ask questions.
@arstechnica One wonders whether having a distinctive writing style or vocabulary level helps or hinders LLMs' ability to unmask the writer.
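The writing-style question above is the classic stylometry setting. A minimal sketch of the idea, long predating LLMs: profile an author by the relative frequency of common function words and compare profiles with cosine similarity. All texts, names, and the tiny word list here are illustrative; real attribution systems use hundreds of features and large corpora.

```python
from collections import Counter
import math

# Toy feature set; practical stylometry uses hundreds of function words.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "i", "it"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical samples: one pseudonymous post, two candidate authors.
pseudonymous = "i think that the point of the article is that it is easy"
author_a = "i believe that the gist of it is that the risk is real"
author_b = "privacy matters online regardless of tools or hype"

sim_a = cosine(style_vector(pseudonymous), style_vector(author_a))
sim_b = cosine(style_vector(pseudonymous), style_vector(author_b))
# The pseudonymous sample scores closer to author_a's function-word profile.
```

The point the thread makes still stands: the signal was always in the public text; what changes is that extracting it no longer requires building anything like this yourself.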
@arstechnica
Another reason not to let Palantir anywhere near NHS data, anonymised or not!