Mythos finds a curl vulnerability

yes, as in singular one. Back in April 2026 Anthropic caused a lot of media noise when they concluded that their new AI model Mythos is dangerously good at finding security flaws in source code. Apparently Mythos was so good at this that Anthropic would not release this model to the public yet but instead …

daniel.haxx.se

@bagder

AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past

I’m not sure this follows from what you’ve said in the rest of the post. Static analysers and fuzzers also made it very easy for people to find vulnerabilities and typically found a lot when they were deployed for the first time. And both were a lot cheaper to run than something like Mythos.

They aren’t finding as many vulnerabilities now because security-critical projects have integrated them into their CI flows.

And this is what always happens with a new technique, whether Valgrind, Coverity, sanitisers, or fuzzers: it’s released, it finds a load of bugs that existing techniques failed to find, people fix them, it gets integrated into regular CI runs, and the kinds of bugs that the tool finds never make it into the tree.

Syzkaller, for example, has found a lot more bugs in the Linux kernel than any Anthropic tools. And that’s just one fuzzing tool.

@david_chisnall I think it makes sense for everyone to run the "easy" and cheap tools first, and once they all find no more problems, then you bring out the bigger cannons like AI analyzers. So yeah, which is "best"? It probably depends.
@bagder @david_chisnall I'm not going to advocate actually doing this because it's expensive and I'm not a fan of the environmental impacts, but I am curious what it would find if you pointed it at the codebase from a time before the other precursor tools like fuzzers were in use. How many bugs can it find that you know with hindsight are there to be found?
@http_error_418 I agree, this would be a very interesting experiment - and potentially informative for other teams deciding where to spend limited developer time. @bagder @david_chisnall

@http_error_418 @bagder

The original Coverity paper claimed, as I recall, 300 CVEs. I'm not sure what the severity distribution was, but that seems like a lot more than Mythos, and it probably used less compute than a single Mythos query.

The problem with any static analyser, whether it's based on formal reasoning or pattern recognition, is that it will be unsound (i.e. it will have false positives, in contrast with dynamic analyses, which are incomplete and have false negatives). The LLM-based tools are no different in this respect. In a Claude 'comprehensive code review' of one of my projects, the only serious bug in its top-ten list was one that already had an open PR to fix, and two of the others were not just non-bugs but intentional design choices: doing it the other way would have caused serious performance regressions (and not fixed any bugs).

The thing that does make Mythos different is that it tries to build a PoC exploit. This will reduce the false positive rate, at the expense of creating false negatives (if it can't produce a PoC, you ignore it).

When I've used Coverity on a large project, it's produced tens of thousands of reports, most of them false positives, so it takes a lot of effort to find the ones that are actually important bugs. Something that produces PoCs automatically would help with this a lot.

The baseline data point I'd really like to see is something that integrates the clang analyser with libFuzzer. For each report the analyser finds, insert profiling points at the branches on the control flow chain that it recommends, then automatically drive the fuzzer to try to trigger the code paths that the analyser reported as potential issues.

By default, the clang analyser works one compilation unit at a time and with reduced bounds on loop iteration counts, to avoid using enormous amounts of memory. If you're willing to spend as much money as it costs to operate the LLM-based tools, you can use the cross-compilation-unit mode and bump the state limits up a lot. Running it with an amount of RAM comparable to the GPUs that the Anthropic models run on would let you do a lot of symbolic execution.
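For the record, the knobs involved look roughly like this (a configuration sketch assuming a Clang install plus CodeChecker as the driver; exact option names and defaults vary by version):

```shell
# Single-TU analysis, but with a much higher loop-visit bound than the
# low default, at the cost of more analyser state and memory:
scan-build -maxloop 64 make

# Cross-translation-unit analysis, driving the same clang analyser via
# CodeChecker; compile_commands.json comes from your build system:
CodeChecker analyze compile_commands.json --ctu -o ./reports
```

Raising the loop bound and enabling CTU are exactly the "spend more compute" directions the paragraph above describes: both multiply the analyser's state space in exchange for deeper interprocedural reasoning.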