Meme of the month!
—
And of course, if you’re interested in discovery and security of agents and coding assistants, send me a message or check out Knostic at https://knostic.ai/
Knuth and Linus are in the vibe coding camp now. What’s your excuse? If you tried it more than a few weeks ago, you should try again.
“But then I cut out the middle man — me — and just used Google Antigravity.” – Linus Torvalds
And Knuth in the screenshot is all about our lord and savior Claude Code.
(https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf)
Would you like to chat with the [un]prompted con itself about AI security? Follow a thread across every session, brief your team, or simply base your research on the collected knowledge?
For both conference days, every talk, with full transcripts and slides, was loaded into a NotebookLM. [un]prompted became more than just a hybrid online/physical con; this is one of many examples of what an AI-native conference could look like.
And there is no reason for a conference to end when you walk out the door. We can engage with the content and attendees beyond transcripts, summaries, and Slack.
And yes, as it’s NotebookLM you can always use it to generate a podcast - on any topic 🙂
This is the brainchild of the brilliant Rob T. Lee (which, if you know Rob, shouldn’t surprise you). All I had to do was get out of the way.
It was diligently executed by Julie Michelle Morris, who sat through endless sessions to make it happen, and powered by Emanuel Gawrieh and Dragos Ruiu, who jumped in without a second thought and helped set up the system.
Access it here:
🔗 https://notebooklm.google.com/notebook/78ee3710-1741-488d-af06-159f518e9510?authuser=1
Thank you Rob and team for stepping up, and helping make the conference what it became. We live in the future.
Heading into day 2 at [un]prompted, prepare yourself to… deal with people. I’d prefer dogs, too, but y’know what? These folks aren’t half bad!
The talks are solid, but this con is about community and relationships. There is something new happening here, and watching CISOs, researchers, and threat hunters mingle and talk context windows is a special thing to behold.
Use the baseball cards to find people relevant to what you do, who can help you, and whom you can help.
Talk about presentations on Slack with physical and online attendees, and make fun of Dragos Ruiu as he debugs the magic of the crazy online setup and forgets to turn off his mic.
Vibe code together.
Feel the wonder of being alive in 2026… and I’ll just get off my soap box again.
Logistics:
- Show up at 8:30 am.
- This evening is free-form. Suggest food locations for dinner on Slack, around 4 pm, and gather a group of old and new friends for the experience.
And do thank a volunteer when you see them: these are CISOs, researchers, a billionaire or three, a professor, someone’s SO, and all-around awesome people who spend their con just helping out.
See you soon!
Pic: [un]prompted meme of the day, looking cool and collected, but paddling like crazy under the surface.
Knostic is open-sourcing OpenAnt, our LLM-based vulnerability discovery product, similar to Anthropic's Claude Code Security, but free. It helps defenders proactively find verified security flaws. Stage 1 detects. Stage 2 attacks. What survives is real.
Why open source?
Since Knostic's focus is on protecting coding agents and preventing them from destroying your computer and deleting your code (not vulnerability research), we're releasing OpenAnt for free. Plus, we like open source.
...And besides, it makes zero sense to compete with Anthropic and OpenAI.
Why OpenAnt is Different:
- A "unit" in OpenAnt is some code block (e.g., function, module, etc.) along with additional metadata that allows the LLM to analyze it with the proper context.
- Adversarial reflexion: validating vulnerabilities with constrained personas.
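The detect-then-attack loop described above can be sketched roughly like this. This is a minimal illustration, not OpenAnt’s actual code: the `llm` callable, the `Unit` class, and the prompts are all hypothetical placeholders (see the repo for the real implementation).

```python
from dataclasses import dataclass, field


@dataclass
class Unit:
    """A 'unit': a code block plus metadata that gives the LLM context."""
    name: str
    code: str
    metadata: dict = field(default_factory=dict)


def detect(unit, llm):
    """Stage 1: ask the model whether the unit looks vulnerable."""
    verdict = llm(f"Analyze for vulnerabilities:\n{unit.code}\nContext: {unit.metadata}")
    return verdict.get("vulnerable", False), verdict.get("claim", "")


def attack(unit, claim, llm):
    """Stage 2: a skeptical attacker persona tries to refute the claim.
    Only findings it confirms survive."""
    rebuttal = llm(
        f"You are a skeptical exploit developer. "
        f"Refute or confirm: {claim}\n{unit.code}"
    )
    return rebuttal.get("confirmed", False)


def scan(units, llm):
    """Run both stages; keep only findings that survive the attack."""
    confirmed = []
    for unit in units:
        vulnerable, claim = detect(unit, llm)
        if vulnerable and attack(unit, claim, llm):
            confirmed.append((unit.name, claim))
    return confirmed
```

The point of the second stage is that the attacker persona is constrained to disprove, not to agree, so hallucinated findings from stage 1 tend to get filtered out.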
You can just download OpenAnt, but:
- Want it managed?
If you'd like us to manage it for you and plug it into your CI/CD, leave us a note on the project page to get on the waiting list.
- Scan your open source!
Submit the form on the project page for us to scan your repo for you.
Links:
- Project page:
https://openant.knostic.ai/
- For technical details, limitations, and token costs, check out this blog post:
https://knostic.ai/blog/openant
- To submit your repo for scanning:
https://knostic.ai/blog/oss-scan
- Repo:
https://github.com/knostic/OpenAnt/
--
And if you're looking to defend your agents and coding assistants, not to mention preventing them from deleting your computer or code, do check Knostic out, or just message me for a demo.
The hero we need.
Original, found via Imri Goldberg.
https://x.com/steipete/status/2026826887159235027
Has Anthropic killed AppSec? Is this the SASTpocalypse (TM)? Let’s be specific. In a list.
TL;DR:
1. Once code is Secure-at-Generation and tested for security without false positives, most *current* AppSec dies. Most incumbents will disappear; some are adjusting. There are endless opportunities.
2. Innovative disruptors like my own startup, Knostic, can step in. We were lucky to focus on AI agent security and the agentic pipeline before it became a top CISO priority, and we have a strong product already in the market.
To the list!
- Immediate Term/AppSec Can Adjust:
1. AppSec is change management: you chase people on overdue tasks. It is built to be behind.
2. It relies on developers fixing what they don’t want to fix, or don’t understand.
3. SAST is slow, language- and codebase-dependent, and after the fact. Shift-left attempts fail for the majority and add burden to the dev (SAST flow mapping, like an AST/CodeQL DB, remains useful).
4. Endless false positives that didn’t scale even before AI.
- Soon/Changes in How Things Work:
1. You don’t need to create rules for everything.
2. You can discover 10x the vulnerabilities (with or without static analysis and fuzzing).
3. Context enrichment is possible.
4. The cost of model usage is dropping drastically.
5. Patching will be auto-generated for third-party software, but that’s now a cyber defense/vulnerability management issue.
- Where AI is Different:
1. Code itself is slowly becoming Secure-at-Generation.
2. Threat modeling can be automated at scale in plan mode.
3. Dev-specific and codebase-specific issues can be fixed at the agent-rules level, dynamically updated for each dev’s issues.
4. Code review is fully automated; no one can keep up, and the tools are already better than humans for most needs.
5. LLM-based static analysis is already ridiculously good, model only, without scaffolding, and it will be fully commoditized to the level of just running a linter.
- New Challenges in a Different Realm:
1. AI agents are the new risk, and the new opportunity.
2. IT itself is now fragmented; everyone is creating their own infrastructure.
3. The new “agentic pipeline” sits pre-CI/CD, runs in runtime, and reports to the SOC. It includes non-coders who have no idea it exists. And it’s in no way managed.
4. The capabilities of vulnerability researchers are proliferating down to the analyst level. Pen testers using these tools and building VulnOps solutions will scale it to a place where classic AppSec isn’t really needed.
AppSec will grow, and focus on orchestration.
Incumbents can’t easily pivot, but there are brilliant people in this space from Chris Wysopal to Neatsun Ziv to Avi Douglen to Josh Grossman to James Holland to Nir Valtman to Clint Gibler to Luke O'Malley to Isaac Evans to Rock Lambros to Michal Kamensky.
Some companies will adjust to this severe threat to their existence, and bring value. And I can’t wait to see it. Most won’t.
—
Want to see a demo from Knostic? Ping me.
Two AWS outages, including the 13-hour one, were caused by coding agents deleting what they shouldn’t have, in production.
AWS points to the engineers:
“We’ve already seen at least two production outages,” one senior AWS employee told the publication. “The engineers let the AI resolve an issue without intervention. The outages were small but entirely foreseeable.”
Coding agents are everywhere, they are privileged, we have zero visibility into what they do, and they bypass classic CI/CD and cyber defense controls.
Even finance is using them. I’m unsure how much the engineers could actually have done to prevent these incidents.
If you are concerned by these, I’d start by asking the following questions:
- What coding agents are you running?
- What MCP servers, extensions, or skills are your developers using?
- Do you have preventative controls for agents stepping out of bounds?
- Do you have detection and response capabilities to stop attacks?
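As a starting point for the first two questions, a simple inventory sweep over well-known agent config locations can help. A minimal sketch: the paths below are illustrative examples only (user-level and project-scoped MCP config files vary by tool and version), so treat the list as something to adapt, not an authoritative catalog.

```python
import json
from pathlib import Path

# Illustrative config locations for common coding agents and MCP setups.
# These paths are examples only -- adjust for your environment and tools.
CANDIDATE_CONFIGS = [
    "~/.claude.json",                       # Claude Code user config (example)
    "~/.cursor/mcp.json",                   # Cursor MCP servers (example)
    ".mcp.json",                            # project-scoped MCP servers (example)
]


def inventory(paths=CANDIDATE_CONFIGS, base=None):
    """Report which configs exist and which MCP servers they declare."""
    base = Path(base) if base else Path.cwd()
    found = {}
    for raw in paths:
        path = Path(raw).expanduser()
        if not path.is_absolute():
            path = base / path           # resolve project-relative configs
        if not path.is_file():
            continue
        try:
            data = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        # MCP configs conventionally keep server entries under "mcpServers"
        found[str(path)] = sorted(data.get("mcpServers", {}))
    return found
```

Run it across developer machines (or a repo checkout) and you at least know which agents and MCP servers are in play before worrying about controls.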
—
And if you made it this far, defending agents is what I do.
If you want to see a demo of how you can discover and protect agents, drop me a line.
Knostic has been doing this for a while, unlike the 50 vendors now rebranding themselves in the space. :)
Ray Dalio’s strategic analysis of the world order, mentioning cyber’s role in the wars as he defines them: