Anthropic’s latest model, Claude Mythos, is so good at finding bugs in software that the company isn’t releasing it yet for safety reasons.

Instead, it’s initially releasing the model to a consortium of organizations, including Apple, Microsoft, and the Linux Foundation, so they can fix their security bugs first.

Considering how disruptive Opus has already been to software development, 2026 will be a watershed year for software engineers.

https://techcrunch.com/2026/04/07/anthropic-mythos-ai-model-preview-security

Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative | TechCrunch

The new model will be used by a small number of high-profile companies to engage in defensive cybersecurity work.


@carnage4life

Finding bugs is easy; wading through the false positives is the really painful part.

The tools we have are very good at finding bugs, and some of them are really good; what they are bad at is keeping the false positives down.

I deal with this a lot, and this is my experience: I spend a large chunk of my time wading through false positives, and from what I understand that's the general experience of anyone supporting a large code base.

We find a lot of real bugs but the pain is real.

Hell, if you throw fuzzers into the mix you can drop everything and just fix bugs for years in large enough projects. You can drown in fuzzer bugs. The fun is prioritizing them.
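Deduplication is usually the first step in digging out from under a pile of fuzzer findings: crashes that share the same top stack frames almost always point at the same underlying bug, so bucketing by those frames and fixing the biggest buckets first is a common triage heuristic. A minimal sketch (all function names and crash data here are hypothetical, not from any specific fuzzer):

```python
from collections import Counter
import hashlib


def bucket_key(stack_frames, top_n=3):
    """Group crashes by the top N frames of their stack trace.

    Crashes sharing the same top frames are very likely the same
    underlying bug, so one bucket roughly equals one bug to fix.
    """
    return hashlib.sha1("|".join(stack_frames[:top_n]).encode()).hexdigest()[:12]


def triage(crashes, top_n=3):
    """Return (bucket_key, crash_count) pairs, biggest buckets first."""
    counts = Counter(bucket_key(frames, top_n) for frames in crashes)
    return counts.most_common()


# Hypothetical crash reports: each is a list of stack frames,
# innermost frame first.
crashes = [
    ["parse_header", "read_packet", "main"],
    ["parse_header", "read_packet", "main"],
    ["alloc_buf", "parse_body", "read_packet"],
    ["parse_header", "read_packet", "main"],
]

ranked = triage(crashes)
# Four raw crashes collapse into two buckets; the bucket holding
# three of them is the one worth fixing first.
```

Real triage pipelines refine this with ASan report parsing, frame normalization, and exploitability scoring, but frequency-ranked buckets like these are the basic shape of "prioritizing them."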