Opus 4.6 uncovers 500 zero-day flaws in open-source code

https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting

Exclusive: Anthropic's new model is a pro at finding security flaws

The AI company sees the model's advancements as a major win for cyber defenders in the race against adversarial AI.

Axios

It's not really worth much when it doesn't work most of the time, though:

https://github.com/anthropics/claude-code/issues/18866
https://updog.ai/status/anthropic

[BUG] Auto-compact not triggering on Claude.ai (web & desktop) despite being marked as fixed · Issue #18866 · anthropics/claude-code


GitHub
It's a machine that spits out sev:hi vulnerabilities by the dozen, and the complaint is that the uptime isn't consistent enough?
If I'm attempting to use it as a service to do continuous checks on things and it fails 50% of the time, I'd say yes, wouldn't you?
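To put rough numbers on that: if each check is an independent attempt that succeeds with probability p, the chance that at least one of n retries gets through is 1 − (1 − p)^n. A quick sketch (the 50% figure is the poster's own estimate, not a measured rate):

```python
# Probability that at least one of n independent attempts succeeds,
# assuming every attempt fails independently with the same probability.
def p_any_success(p_success: float, n_attempts: int) -> float:
    return 1 - (1 - p_success) ** n_attempts

# With a 50% per-attempt success rate, retries close the gap quickly:
for n in (1, 2, 3, 4):
    print(n, p_any_success(0.5, n))  # 0.5, 0.75, 0.875, 0.9375
```

So even a coin-flip service can be made fairly dependable with a retry loop, at the cost of doubled latency on average.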

If you had a machine with a lever, and 7 times out of 10 when you pulled that lever nothing happened, while the other 3 times it spat out a $5 bill, would your immediate next step be:

(1) throw the machine away

(2) put it aside and call a service rep to come find out what's wrong with it

(3) pull the lever incessantly

I only have one undergrad psych credit (it's one of my two college credits), but it had something to say about this particular thought experiment.
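For what it's worth, on pure expected value option (3) pays: each pull is worth 0.3 × $5 = $1.50. A throwaway simulation, assuming the 3-in-10 odds from the thought experiment are exact and each pull is independent:

```python
import random

random.seed(0)  # reproducible run

# Lever machine from the thought experiment: 3 pulls in 10 pay $5,
# the other 7 pay nothing. Expected value per pull: 0.3 * 5 = $1.50.
def pull() -> float:
    return 5.0 if random.random() < 0.3 else 0.0

n = 100_000
total = sum(pull() for _ in range(n))
print(f"average payout per pull: ${total / n:.2f}")  # converges to ~$1.50
```

The psych point, of course, is that the variable payout schedule drives the lever-pulling regardless of what the expected value works out to.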