So, wait, the whole “Mythos AI is so powerful it can find exploits in any software” thing requires both access to the source code and thousands of runs to find anything remotely actionable? This is the “too dangerous to release” model they’ve been hyping up?

Is that really it?

@baldur

Idk what 0-day exploits are going for these days, but from what I recall it could be north of a million USD depending on the scope and impact.

In comparison: spending 10k USD to find a 0-day RCE in a popular open source program seems like a bargain. I think it's less about the efficiency of the system and more about: "What are the odds an attacker with a credit card could make this your problem?"
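The back-of-the-envelope math here is simple expected value. A minimal sketch, with every number hypothetical (the 10k USD search cost and 1M USD payout from above, plus an assumed hit rate I made up):

```python
# Hypothetical attacker economics: cost of an LLM-driven bug search vs. payout.
# All figures are illustrative assumptions, not real market data.

search_cost_usd = 10_000   # assumed cost of one search campaign
payout_usd = 1_000_000     # assumed high-end price for a 0-day RCE
p_success = 0.05           # assumed chance one campaign yields a sellable bug

expected_value = p_success * payout_usd - search_cost_usd
print(f"Expected value per campaign: ${expected_value:,.0f}")
```

Even at a 5% hit rate the campaign is profitable in expectation, which is the point: the efficiency of the system matters less than the fact that the bet is cheap to place.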

@baldur

Like, I'd really like to point people at this:

https://toot.yosh.is/@yosh/116376054778890780

Anyone saying stuff like "oh well a fuzzer would have found that" is wishcasting. Sure, these things will find the obvious, lowest-hanging fruit first. But they can also find sandbox escapes in formally verified code in memory-safe languages written by some of the best to ever do it, hooked up to fuzzers 24/7.

I don't like it either. But that doesn't mean it's not real.

yosh (@[email protected])

Big new Wasmtime security release today - 11 new CVEs found including 2 critical ones using LLMs. https://bytecodealliance.org/articles/wasmtime-security-advisories If LLMs can find this many critical bugs in a project that is as rigorous about security as Wasmtime, then get ready for projects with weaker security postures to do a lot worse. Like,,, actually.


@yosh @baldur Quoting from https://bytecodealliance.org/articles/wasmtime-security-advisories
"However, there was no fuzzing to check that invalid strings are handled correctly, and each of these issues could have concievably been discovered if such a fuzzing harness had been written."
And furthermore:
"Upon updating the formal model to check against the latest Cranelift lowering rules, verification flags the same bug as was found with the LLM search."

This is not a slam dunk for LLMs over traditional methods.
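For anyone unfamiliar with what "such a fuzzing harness" looks like: the idea is just to throw lots of random, mostly invalid inputs at the routine under test and check that it always fails gracefully. A minimal sketch in Python for illustration (Wasmtime's real harnesses are Rust/cargo-fuzz; `parse` here is a hypothetical stand-in for whatever string-handling code is under test):

```python
# Sketch of a fuzzing harness for invalid-string handling: generate random
# byte strings and confirm the parser either succeeds or raises only its
# documented error -- any other exception (or a crash) would flag a bug.
import random

def parse(data: bytes) -> str:
    # Stand-in target: strict UTF-8 decoding, which raises on invalid input.
    return data.decode("utf-8")

def fuzz_invalid_strings(iterations: int = 10_000, seed: int = 0) -> int:
    rng = random.Random(seed)
    rejected = 0
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse(data)
        except UnicodeDecodeError:
            rejected += 1  # graceful, expected rejection of invalid input
        # Any other exception propagates and fails the fuzzing run.
    return rejected

print(f"{fuzz_invalid_strings()} inputs rejected gracefully out of 10,000")
```

Real harnesses add coverage guidance and corpus management on top, but the core check is exactly this: invalid input must never do anything other than be cleanly rejected.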


@tkissing @yosh @baldur Is it a slam dunk in the sense of "traditional methods are dead and so is software security"?

Of course not.

But fuzzers are *also* probabilistic algorithms. LLMs add a lot more complexity to the potential analysis, are easier for many people to operate, *and* are currently made available far below true cost.

Of course this creates at least a temporary wave that is VERY real, and one that traditional methods can't match as easily at this point in time.