Alexander Loth

@xlth
24 Followers
23 Following
119 Posts
Researcher exploring how generative AI reshapes disinformation & public trust. Building JudgeGPT · Author of books on data visualization & AI · iOS developer (Trackless Links, Mindful Coffee) · Views my own.
🌐 Website: https://alexloth.com
🎓 Google Scholar: https://scholar.google.com/citations?user=ofZZ8LgAAAAJ
📚 Books: https://alexloth.com/books
🔗 Links: https://linktr.ee/xlth

Atrani. Population 800, smallest municipality in southern Italy. No tour buses, no crowds — just a bell tower, two majolica domes, and the Tyrrhenian Sea.

📍 Amalfi Coast

#Italy #AmalfiCoast #Travel #Photography

Can we steer visual representations like we prompt LLMs? This paper shows how to inject text into vision encoders via early fusion, creating steerable features that stay strong for core vision tasks while focusing on any concept you ask for.

Read the full paper: http://arxiv.org/abs/2604.02327v1
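A minimal sketch of what that kind of early fusion could look like, assuming the common pattern of prepending projected text-prompt embeddings to the image patch tokens before a shared transformer encoder; the class name, dimensions, and projection layers are my own illustration, not the paper's code:

```python
# Hypothetical sketch (not the paper's implementation): "early fusion" here means
# concatenating projected text-prompt tokens with image patch tokens *before* the
# encoder, so the prompt can steer which visual features get emphasized.
import torch
import torch.nn as nn

class EarlyFusionVisionEncoder(nn.Module):
    def __init__(self, patch_dim=768, text_dim=512, dim=768, depth=6, heads=8):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, dim)
        self.text_proj = nn.Linear(text_dim, dim)   # map text embeddings into the vision token space
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patch_tokens, text_tokens=None):
        # patch_tokens: (B, N_patches, patch_dim); text_tokens: (B, N_text, text_dim) or None
        tokens = self.patch_proj(patch_tokens)
        n_text = 0
        if text_tokens is not None:
            n_text = text_tokens.shape[1]
            tokens = torch.cat([self.text_proj(text_tokens), tokens], dim=1)  # fuse early
        fused = self.encoder(tokens)
        # return only the visual positions, so downstream vision heads see the usual shape
        return fused[:, n_text:, :]

# Usage: the same image features, steered toward whatever concept the prompt encodes.
enc = EarlyFusionVisionEncoder()
img = torch.randn(2, 196, 768)   # e.g. 14x14 ViT patches
txt = torch.randn(2, 8, 512)     # embedding of a steering prompt, e.g. "focus on dogs"
features = enc(img, txt)         # (2, 196, 768) steerable features
```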

Most fake news generators still spit out text you can’t reproduce or compare.
RogueGPT ships a controlled pipeline for multi-model, multilingual, style-locked news with full provenance, going beyond what GROVER or FACTGEN allowed.
If we can generate disinfo this precisely, how should evaluation tools change?

Full paper here: https://github.com/aloth/RogueGPT

People think they can spot AI-written news. Turns out they mostly can’t.
In a large human study, GPT-4 news was judged about as authentic as real journalism, with accuracy hovering near chance.
If readers can’t tell, what happens to trust when anyone can publish at scale?

Full paper here: https://github.com/aloth/JudgeGPT

We still verify images by squinting at pixels and vibes.
Origin Lens does on-device cryptographic C2PA verification, showing who signed an image and whether it was altered.
When trust is math instead of guesswork, which would you rely on?

Try it out on iOS: https://apps.apple.com/us/app/origin-lens/id6756628121

Most fake news generators chase realism, not experimental control - and that breaks research.
RogueGPT provides reproducible, multi-model, multilingual news stimuli with full provenance to study how the truth-default erodes.
If you could dial style, language, and model exactly, what would you measure first?

Full paper here: https://github.com/aloth/RogueGPT

Disinformation isn’t cottage-scale anymore - it’s becoming industrialized.
JudgeGPT studies whether humans can still spot AI-generated news when deception is mass-produced by models.
If we can’t tell real journalism from synthetic text, what happens to public trust?

Read more: https://judgegpt.streamlit.app

New result: Multilevel Euler–Maruyama gives a polynomial speedup for diffusion sampling. By mixing cheap and expensive UNets, you can sample at roughly the cost of a single large model eval, with theory and experiments backing it.

Read the full paper: http://arxiv.org/abs/2603.24594v1
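For intuition, the standard multilevel Monte Carlo telescoping identity is the usual starting point for estimators like this; the notation below is mine, and the paper's exact coupling of cheap and expensive UNets may differ:

```latex
% Let X^{(\ell)} denote an Euler--Maruyama sample path at level \ell, where low levels
% use a cheap UNet / coarse steps and the finest level L uses the expensive UNet.
\[
  \mathbb{E}\!\left[f\!\left(X^{(L)}\right)\right]
  = \mathbb{E}\!\left[f\!\left(X^{(0)}\right)\right]
  + \sum_{\ell=1}^{L} \mathbb{E}\!\left[f\!\left(X^{(\ell)}\right) - f\!\left(X^{(\ell-1)}\right)\right].
\]
% Each correction term is estimated from coupled pairs of paths; because the pairs are
% strongly correlated, only a few samples need the expensive level-L model, which is
% where the cost saving over sampling everything at the finest level comes from.
```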

Remember the fake Pentagon explosion photo that briefly moved markets?
An on-device verifier could flag it as AI-made by checking content credentials, signatures, and AI markers before sharing.
Would you trust an image more if you could see its full edit history?

Check it out: https://arxiv.org/abs/2602.03423

This paper shows chain-of-thought faithfulness isn’t a single objective number. On the same data, different classifiers shift scores by up to 30 points and even reverse model rankings. Measurement choice matters more than we admit.

Read the full paper: http://arxiv.org/abs/2603.20172v1