📢 CSA/SANS Report: Anthropic's Claude Mythos triggers an AI vulnerability storm

🌐 Context

Published on 13 April 2026 by the Cloud Security Alliance (CSA), the SANS Institute, [un]prompted and the OWASP Gen AI Security Project, this strategy document (version 0.4) analyses the emergence of Anthropic's Claude Mythos (Preview) as a major inflection point in automated vulnerability discovery and AI-driven offensive exploitation.

⚡ Triggering event: Claude Mythos & Project Glasswing

On 7 April 2026, Anthropic announced the Claude Mythos Preview model alongside Project Glasswing, described as the largest multi-party coordination effort in history for vulnerability disclosure. Mythos stands out for:

📖 cyberveille: https://cyberveille.ch/posts/2026-04-14-rapport-csa-sans-claude-mythos-d-anthropic-declenche-une-tempete-de-vulnerabilites-ia/
🌐 source: https://labs.cloudsecurityalliance.org/wp-content/uploads/2026/04/mythosreadyv4.pdf
#AISLE #Big_Sleep #Cyberveille

CyberVeille

Silly Poll Time

When you're walking down a grocery store aisle and someone is coming from the other direction, do you ...

Please boost!
Polls only work if people see them.

#poll #walking #shopping #food #aisle

move over and walk on the right hand side: 74.6%
move over and walk on the left hand side: 19%
keep walking in the same position you were going: 6.3%

this looks like a genuinely good and very impressive use of “AI” in security research – I’m leaving the air quotes in place at the moment since I haven’t been able to find much detail on how the system actually operates. #AISLE describes it as an “autonomous analyser” and “the world’s first #AI-native Cyber Reasoning System (CRS) for vulnerability management” 🙄

I’m pretty sure it’s not just spicy autocarrot though, possibly a mix of deep learning or other machine learning techniques (things that I think of as part of “traditional” AI research) with a sprinkling of LLM on top for “natural language” capabilities (and it’s possible that they’re leaning into “AI” as a descriptor to align with the current hype cycle rather than calling it “machine learning”, but ¯\_(ツ)_/¯)

What AI Security Research Looks Like When It Works

“In the latest #OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned #CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL #CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren't trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that's potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST's CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from #EricYoung's original #SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google's.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.”

https://aisle.com/blog/what-ai-security-research-looks-like-when-it-works


What a year of finding zero-days in OpenSSL, curl, and the Linux kernel taught us about AI-driven security research done right.

AISLE

inappropriate inflatable...

#inappropriate #inflatable #aisle #animatedgif

All Saints church, Wraxall, North Somerset
#Aisle #RoodScreen #altar #AllSaints #church #Wraxall #Somerset