177 Followers
737 Following
658 Posts

Technical Analyst / Pentester @usdAG.

Pwning #LLM for fun (and sometimes profit).

I try to maintain a high signal-to-noise ratio here.

#infosec #hacking #reverseengineering #privacy

Blog: https://jfkimmes.eu
Forge/Git: https://codeberg.org/jfkimmes
Matrix: https://matrix.to/#/@jfkimmes:hackingfor.eu
E-Mail: [email protected]

Pokémon Go players thought they were catching Pikachus.

They were actually building the nervous system for robot civilization.

500M humans. 30B images. Zero consent forms.

The game was the harvest.
https://www.technologyreview.com/2026/03/10/1134099/how-pokemon-go-is-helping-robots-deliver-pizza-on-time/

How Pokémon Go is giving delivery robots an inch-perfect view of the world

Exclusive: Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players.

MIT Technology Review

Mass surveillance and censorship are escalating in many countries right now. There is a global attack on secure encrypted communication. Often, authorities, politicians, and tech companies work together to push for new laws. One example: Ashton Kutcher (yes, the actor) tried, through his tech company Thorn, to push for total surveillance of all EU citizens using undemocratic and corrupt methods.

Your package manager's D-Bus interface is root-privileged, always-on, and crashes instantly if you whisper the wrong locale at it.

CVE-2026-3836.
CVSS 7.5.
No auth required.

The tool patching your system was the hole. Upgrade dnf5 now.
https://portallinuxferramentas.blogspot.com/2026/03/critical-fedora-42-update-analyzing-cve.html?m=1

Critical Fedora 42 Update: Analyzing CVE-2026-3836 and the dnf5 D-Bus Vulnerability Patch

A blog with news about Linux, Android, security, etc.
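The "wrong locale" crash described above belongs to a well-known bug class. As a generic illustration (not the actual dnf5 code path, which I haven't read): a privileged service that applies a caller-supplied locale string without validation. Python raises locale.Error here; a C daemon that assumes setlocale succeeded and then dereferences locale data can crash instead.

```python
import locale

# Generic sketch of the bug class, NOT the dnf5 implementation: a service
# applying an untrusted, client-supplied locale string.
def set_client_locale(loc: str) -> bool:
    """Return True if the locale was applied, False if it was rejected."""
    try:
        locale.setlocale(locale.LC_ALL, loc)
        return True
    except locale.Error:
        # A daemon that skips this handling and trusts the result anyway
        # is one bad string away from a denial of service.
        return False

print(set_client_locale("C"))                        # valid everywhere
print(set_client_locale("definitely-not-a-locale"))  # rejected, handled
```

The defensive point: validate (or allowlist) locale strings before applying them in any privileged, always-on process.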

As someone who’s been maintaining FOSS projects of various levels of popularity for more than a decade, I need y’all to understand one thing: LLMs didn’t change the median PR quality. (1/6)

🤔 𝗗𝗶𝗱 𝘆𝗼𝘂𝗿 𝗟𝗟𝗠 𝗿𝗲𝗮𝗹𝗹𝘆 𝗳𝗼𝗿𝗴𝗲𝘁 𝘆𝗼𝘂𝗿 𝗽𝗿𝗶𝘃𝗮𝘁𝗲 𝗱𝗮𝘁𝗮 - 𝗼𝗿 𝗷𝘂𝘀𝘁 𝗴𝗲𝘁 𝗾𝘂𝗶𝗲𝘁𝗲𝗿 𝗮𝗯𝗼𝘂𝘁 𝗶𝘁?

🧠 Machine unlearning aims to remove specific training data (e.g., private info) without full retraining. But does it actually 𝗿𝗲𝗺𝗼𝘃𝗲 𝘁𝗵𝗲 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹?

⚠️ 𝗪𝗵𝗮𝘁 𝘄𝗲 𝘀𝗵𝗼𝘄: many unlearning methods are shallow.
Outputs change, 𝘆𝗲𝘁 𝘀𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗲 𝗱𝗮𝘁𝗮 𝘀𝘁𝗮𝘆𝘀 𝗹𝗶𝗻𝗲𝗮𝗿𝗹𝘆 𝗱𝗲𝗰𝗼𝗱𝗮𝗯𝗹𝗲 from internal representations.
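The "linearly decodable" claim can be illustrated with a toy linear probe. Everything below is synthetic: random vectors stand in for a model's internal activations, and a least-squares fit stands in for the probes used in the paper. The point is only the method: if a cheap linear classifier recovers the "forgotten" attribute from hidden states, the knowledge was not removed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 400  # hidden-representation dimension, number of samples

# Pretend these are activations of an "unlearned" model: the sensitive
# binary label y still leaks into the representations along direction w.
w = rng.normal(size=d)
y = rng.integers(0, 2, size=n)
reps = rng.normal(size=(n, d)) + np.outer(2 * y - 1, w)

# Fit a linear probe by least squares on one half, evaluate on the other.
split = n // 2
X_tr, X_te = reps[:split], reps[split:]
y_tr, y_te = y[:split], y[split:]

coef, *_ = np.linalg.lstsq(X_tr, 2 * y_tr - 1, rcond=None)
pred = (X_te @ coef > 0).astype(int)
acc = (pred == y_te).mean()

print(f"probe accuracy: {acc:.2f}")  # far above chance: still decodable
```

Accuracy near chance (0.5) would suggest genuine removal; anything well above it means the data is still there, just quieter.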

> It’s about maintaining enough technical competence that you are a participant in the systems you depend on rather than a permanent subject of them.

https://fireborn.mataroa.blog/blog/the-slow-death-of-the-power-user/

Legitimately one of the best blog posts on this concept I've read this year. I have some critiques, but most of them are rooted in nitpicks and my time spent in pentest consulting companies in the US.

The Slow Death of the Power User — fireborn

@ariadne I have no idea how this would work with OpenClaw though, sorry.
@ariadne you could build a tool that gets called to generate answers / responses by your trained model. Then qwen-35 could handle the reasoning and make its tool calls and finally generate responses / text by copying from a tool call to your wrapper.
@ariadne In any case: as long as the final response is generated by your trained model, it will never make a valid tool call, since there are probably about zero training examples of the necessary JSON structure required by the tool handling in your furry smut (an estimate that could be quite a way off, knowing the furry community, but still).
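The wrapper pattern from the replies above could be sketched like this. Both model functions are stand-in stubs (no real API is assumed): a general reasoning model emits the well-formed JSON tool call, and the style-tuned model, which never saw that JSON format in training, only produces the final prose.

```python
import json

def reasoning_model(prompt: str) -> dict:
    """Stub for a general model that reliably emits structured tool calls.
    Here it always invokes the hypothetical 'styled_reply' wrapper tool."""
    return {"tool": "styled_reply", "arguments": {"topic": prompt}}

def style_model(topic: str) -> str:
    """Stub for the fine-tuned model that writes the final text but cannot
    produce valid tool-call JSON itself."""
    return f"[styled prose about {topic}]"

def run(prompt: str) -> str:
    call = reasoning_model(prompt)       # structured step: valid JSON
    payload = json.dumps(call)           # what an agent framework would see
    parsed = json.loads(payload)
    if parsed["tool"] == "styled_reply": # dispatch to the wrapper tool
        return style_model(parsed["arguments"]["topic"])
    raise ValueError(f"unknown tool: {parsed['tool']}")

print(run("greetings"))  # final text comes only from the style model
```

The split keeps each model doing what it can: structure from the general model, voice from the fine-tuned one.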
@ariadne Oh, is that an OpenClaw-specific feature where you can specify that reasoning traces are generated by a separate model than the actual response? I'm not really familiar with OpenClaw's internals.