I’m willing to believe that Anthropic built a better SAST. But that’s a total market of about $5B tops, according to Google (some estimates put it as low as $0.5B) – it’s going to take a while to pay off their $30B Series G if they keep targeting these relatively tiny markets.

As with targeting developer productivity (another famously quite small market), they’re focused on these markets because existing automated “bullshit-corrector” tools are available: for software development, type checkers, linters, testing frameworks and so on; for memory corruption bugs, apparently they leant heavily on ASan to weed out the false positives (see the sketch below).
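To make that concrete: ASan is a dynamic checker, so each candidate finding can be turned into a proof-of-concept and simply executed; if the sanitizer fires, the bug is real. Here’s a minimal sketch in C (the PoC below is my own illustration, not from Anthropic’s tooling) of the kind of heap overflow ASan confirms at runtime:

    /* poc.c – hypothetical reproducer for a reported heap overflow */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(8);
        /* 8 'A's plus the NUL terminator = 9 bytes into an 8-byte allocation */
        strcpy(buf, "AAAAAAAA");
        free(buf);
        return 0;
    }

Compile and run it under the sanitizer:

    cc -fsanitize=address -g poc.c -o poc && ./poc

ASan aborts with a heap-buffer-overflow report pinpointing the out-of-bounds write; a PoC that runs clean under the sanitizer is a strong hint that the static finding was a false positive.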

Anyone who’s ever used a SAST on a mature code base knows that reducing false positives is the number-one priority.

Also, in a parallel to recent articles about coding agents, finding vulnerabilities is not the bottleneck; verifying, triaging and fixing them is.

To be honest though, with quoted figures of $10,000–$20,000 to find each of these vulns, I don’t think they’re going after the defender market...
@neilmadden to be fair to them: an entire campaign cost $20k, but each campaign found more than one bug, so the price per bug is much lower (e.g. a $20k campaign that yields ten verified bugs works out to $2k per bug). In a talk, one of their researchers said that he’s sitting on 100+ high-confidence findings from their Linux kernel runs alone that he hasn’t yet had time to verify and report to the maintainers. Of course, that’s still a lot of money per bug, no doubt about it, but not quite the $20k you are quoting.
@hacksilon yeah, alongside the OpenBSD bug they mention a “few dozen” other findings. But if they were good findings, I think they would have said something about them. The fact that they mention them only as an aside, with no elaboration, suggests to me these other findings are probably a bit “meh”, but we’ll wait and see. Hopefully we’ll see the full list eventually, once disclosure has run its course.