8.4K Followers
901 Following
11K Posts

Displaced Philly boy. Threat hunter. Educator. Executive Director. #infosec, #programming, #rust, #python, #haskell, and #javascript. #opensource advocate. General in the AI Resistance. Runs @thetaggartinstitute. Made https://wtfbins.wtf. Not your bro. All opinions my own. Dad. #fedi22 #searchable

Pronouns: He/him.

The Taggart Institute: https://taggartinstitute.org
Blog: https://taggart-tech.com
Codeberg: https://codeberg.org/mttaggart
YouTube: https://youtube.com/taggarttech
GitHub: https://github.com/mttaggart
Keyoxide: aspe:keyoxide.org:G4ADJFWICZZZXGR4STZQVMBJNM

#Vercel update. We now know, thanks to Vercel's CEO, that the compromise came by way of the context[.]ai Office Suite, using OAuth tokens collected from a breach last month. Details here:

https://discourse.ifin.network/t/vercel-confirms-breach-as-hackers-claim-to-be-selling-stolen-data/293/6

Vercel confirms breach as hackers claim to be selling stolen data

We now know that the compromised app was context.ai. Several important takeaways in this update from Vercel CEO Guillermo Rauch:

- The attacker moved from a compromised Google Workspace account to other Vercel infrastructure.
- The attacker had access to “non-sensitive” environment variables, which are not encrypted at rest.
- Vercel is still claiming that only a “quite limited” set of users was impacted. Unclear why that’s so.
- Customers known to be impacted are ...

IFIN

After a long weekend, I've finally updated https://publickey.directory to reflect the current state of affairs for the Public Key Directory, which brings Key Transparency to the Fediverse as part of the effort to build End-to-End Encryption (E2EE) for ActivityPub.

This project now supports* Post-Quantum Cryptography! (We're shipping ML-DSA-44 now and will consider new algorithms in the future.) HPKE also uses mlkem768x25519 (a.k.a. X-Wing).

* The only part that doesn't currently require post-quantum cryptography is RFC 9421 (HTTP Message Signatures), because no one has bothered to specify an IANA codepoint for a post-quantum algorithm there yet. I'm planning to write a C2SP spec soon if no one beats me to it. In the interim, Ed25519 is still allowed there, but I plan to drop it in v2.
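The X-Wing construction mentioned above hashes both component secrets (ML-KEM-768 and X25519) together with binding context to derive one shared secret. Here's a toy Python sketch of that combiner idea; the placeholder byte strings stand in for real KEM outputs, and the exact input ordering and label bytes are my reading of the X-Wing draft, not production code:

```python
# Toy sketch of the X-Wing hybrid-KEM combiner: the final shared secret is
# a SHA3-256 hash over both component secrets plus binding context.
# Dummy byte strings stand in for real ML-KEM-768 / X25519 outputs;
# input ordering and label are assumptions based on the X-Wing draft.
import hashlib

XWING_LABEL = b"\\.//^\\"  # 6-byte domain-separation label from the draft

def combine(ss_mlkem: bytes, ss_x25519: bytes,
            ct_x25519: bytes, pk_x25519: bytes) -> bytes:
    """Derive one 32-byte shared secret from both KEM components."""
    h = hashlib.sha3_256()
    for part in (ss_mlkem, ss_x25519, ct_x25519, pk_x25519, XWING_LABEL):
        h.update(part)
    return h.digest()

# Demo with dummy 32-byte values (a real implementation feeds actual
# ML-KEM-768 and X25519 outputs here).
ss = combine(b"\x01" * 32, b"\x02" * 32, b"\x03" * 32, b"\x04" * 32)
print(len(ss))  # SHA3-256 digest: 32 bytes
```

The point of hashing in the ciphertext and public key alongside the secrets is that the result stays secure as long as *either* component KEM holds up.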

Public Key Directory - Key Transparency for the Fediverse

"379 zero-days from an orchestrated pipeline that beat unconstrained Claude Code by 30x" ...

"Three projects produced zero confirmed vulnerabilities: curl ... OpenSSL ... and SQLite"

https://theweatherreport.ai/posts/symbolic-execution-and-llms/

379 zero-days from an orchestrated pipeline that beat unconstrained Claude Code by 30x

SAILOR found 379 zero-days by orchestrating CodeQL, LLMs, and KLEE. It can supercharge Mythos.

The Weather Report

#Vercel customers: don't wait. Proactively rotate keys, passwords, and environment variables ASAP.

https://vercel.com/kb/bulletin/vercel-april-2026-security-incident

Vercel April 2026 security incident | Vercel Knowledge Base

We’ve identified a security incident that involved unauthorized access to certain internal Vercel systems.
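Rotation means every exposed value gets a freshly generated replacement, not a reused or tweaked one. A minimal Python sketch for generating strong replacement secrets; the variable names are illustrative (not from Vercel's advisory), and actually swapping values in happens in your provider's dashboard or CLI:

```python
# Minimal sketch: generate a fresh replacement value for each secret you
# rotate. Variable names are illustrative examples; applying the new
# values happens in your provider's dashboard or CLI.
import secrets

def fresh_secret(nbytes: int = 32) -> str:
    """URL-safe random token with nbytes of entropy."""
    return secrets.token_urlsafe(nbytes)

to_rotate = ["DATABASE_PASSWORD", "API_KEY", "WEBHOOK_SECRET"]  # examples
replacements = {name: fresh_secret() for name in to_rotate}

for name, value in replacements.items():
    print(f"{name}: {len(value)} chars")
```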


Useful explainer on the latest Citrix shenanigans, including how to verify exposure, plus hunting/forensics recommendations.

https://www.picussecurity.com/resource/blog/cve-2026-3055-cve-2026-4368-inside-the-netscaler-citrixbleed-3-memory-overread

CVE-2026-3055 & CVE-2026-4368: Inside the NetScaler "CitrixBleed 3" Memory Overread

CitrixBleed 3 explained: CVE-2026-3055 (CVSS 9.3) leaks NetScaler memory via /saml/login and /wsfed/passive. Exploit chain, detection, and patch guide.

To be 100% clear: I think both positions are in error. We have decades of evidence to suggest that theoretically being able to write memory-safe code with C/C++ does not prevent dangerous bugs in production code. The core design of the tool does not lend itself to safety, and now we know that organizations either can't or won't put in the requisite work to make it safe.

We have every reason to expect the same of LLMs. Even if guardrails mature around the models, and even if code correctness—such as it is, based on insecure code in the training corpora—improves dramatically, the requisite safety apparatus will yet again be ignored, deferred, downplayed by organizations who see security as a barrier to shipping. And because the tool in and of itself does not tend toward safety, its manifestations rarely will.

An engineer designs an oscillating fan that spins quickly and moves a lot of air. A designer puts a cage around it so nobody gets mauled by the expertly-engineered machine.

In this realm, we think too little of design.


@nakal @realn2s So, let's assume for the moment that the developers who are creating real applications with these tools are not suffering a mass delusion, and that they are not "useless." But indeed, that those developers are using "advanced toolchains" to ensure model output meets certain standards.

Do you see the congruence when they say of these toolchains, "You just need to know and use them?"

@delta_vee Okay, we are talking about two different things. I am not talking about prompt injection, which is absolutely an unsolvable problem from a structural perspective. There are, however, controls available over what context is added—but again, discipline is the issue.

As for generated code correctness, discipline of process looks fairly similar to what you'd want in a normal SDLC, but refactored for agentic work. Yet again, the trick is discipline. Any shortcut, any change without verification/attestation, anything unreviewed for security, becomes a failure point. Which brings us back around to the original thesis: if it's theoretically possible to do it right, but practically impossible, then you have an unmitigated liability.
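The "any change without verification/attestation becomes a failure point" idea can be sketched as a simple merge gate: generated changes only proceed when every required check carries an explicit attestation. All names and checks here are illustrative, not any particular tool's API:

```python
# Toy sketch of a "no unreviewed change ships" gate: an agent-generated
# change proceeds only when every required attestation is present.
# The check names and fields are illustrative examples.
from dataclasses import dataclass, field

@dataclass
class Change:
    diff: str
    attestations: set[str] = field(default_factory=set)

REQUIRED = {"tests-pass", "security-review", "human-approval"}

def may_ship(change: Change) -> bool:
    """A change ships only with every required attestation present."""
    return REQUIRED <= change.attestations

agent_change = Change(diff="...")  # fresh agent output: blocked
reviewed = Change(diff="...", attestations=set(REQUIRED))  # fully attested
print(may_ship(agent_change), may_ship(reviewed))  # False True
```

The gate is only as good as the discipline behind it: skip one attestation "just this once" and the failure point described above is back.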