#RSAC
Principal and Founder, Numberline Security
Co-Chair, Zero Trust Working Group, Cloud Security Alliance
Human Being, Planet Earth
| Numberline Security | https://numberlinesecurity.com/ |
Can’t make it to RSA Conference 2026 next week? Well, I’m taking one for the team here and will be recording my thoughts, impressions, and complaints through a series of informal, unedited, and unscripted videos. Stay tuned, and if you have specific vendors, questions, sessions, or topics you’d like me to cover, let me know here in the comments.
#RSAC

What happens when a 5-year-old VPN vulnerability gets exploited?
Sadly, this is not the lead-in to a punchline, nor is it theoretical.
The answer is “bad things happen,” and in fact this is exactly the case with a current Fortinet VPN vulnerability.
Watch my brief video commentary to learn more, and get our free Dynamic VPN Defense Guide:
https://youtu.be/FCl4utJiSq4?si=i8enao8Us3TYteHL

Let’s talk about yesterday’s Cisco vulnerability, and why it matters to you even if you don’t have their Email Gateway or Web Manager products:
CVE-2025-20393
#CiscoVulnerability
Why is governance the hidden foundation of Zero Trust policies?
Watch as Jerry Chapman and I talk through this in a brief video snippet from our recent fireside chat.
What’s the most important aspect of AI Security?
In one word: Data
I want to talk about three different ways in which enterprise data can be used within AI models, and the security implications of each.
But first, the bad news. From an information security perspective, there are no shortcuts. Using AI systems properly will require a certain degree of data protection rigor, which means you need to have defined and enforceable data governance processes.
Well, maybe that’s not bad news. Viewed positively, this should act as a catalyst for imposing the necessary structure on your organization. This can provide you with an opportunity to propose and enact changes to your enterprise, as part of your charter to enable the business to securely adopt new technologies.
Let’s dive into the promised three ways that enterprise data can be utilized within AI systems.
1. Training data to create custom AI models
2. Prompt data for using AI models
3. Data retrieved and consumed by AI models during their operations
In scenario 1, your developers select which data sets to use for training the AI model. Here, your organization needs to choose carefully between a vendor-hosted AI model and a privately hosted one. For vendor models, have your legal team review the terms and conditions to ensure you have visibility into how your data is treated once it's used for training purposes.
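To make the training-data decision enforceable rather than ad hoc, governance tooling can gate data sets on their classification label before they ever reach a training pipeline. Here's a minimal sketch of that idea; the catalog, labels, and policy below are illustrative assumptions, not a reference to any specific product.

```python
# Hypothetical sketch: gate training data sets on a data-governance
# classification label before they reach a model-training pipeline.
# The catalog entries and the allowed-label policy are assumptions
# for illustration only.

ALLOWED_FOR_TRAINING = {"public", "internal"}  # assumed policy: no confidential data


def select_training_datasets(catalog):
    """Split a {name: classification} catalog into approved and rejected sets."""
    approved, rejected = [], []
    for name, label in catalog.items():
        (approved if label in ALLOWED_FOR_TRAINING else rejected).append(name)
    return approved, rejected


catalog = {
    "support-tickets-redacted": "internal",
    "public-docs": "public",
    "customer-pii": "confidential",
}
approved, rejected = select_training_datasets(catalog)
print("OK to train on:", sorted(approved))
print("Needs governance review:", sorted(rejected))
```

The point isn't the few lines of filtering logic; it's that the allowed-label set encodes a policy your governance process has actually defined, so the training pipeline can't quietly absorb data nobody approved.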
For scenario 2, you need to rely on user education: ensure that people only use allowed AI systems (which, again, need to be reviewed and approved by your legal team), and that they understand not to include confidential information in AI prompts.
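Education can be backed up with a technical safety net: a lightweight pre-submission check that flags prompts containing obvious confidential markers before they're sent to an approved AI service. The patterns below are illustrative assumptions and deliberately simplistic; a real deployment would lean on proper DLP tooling.

```python
import re

# Hypothetical sketch: block prompts containing obvious confidential
# markers before they leave the organization. These patterns are
# illustrative examples, not an exhaustive or production-grade DLP rule set.

BLOCK_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-like pattern
    re.compile(r"(?i)\bconfidential\b"),     # document classification marker
]


def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCK_PATTERNS)


print(prompt_allowed("Summarize our public release notes"))  # permitted
print(prompt_allowed("Draft an email with SSN 123-45-6789"))  # blocked
```

A check like this won't catch a determined user, but it turns "don't paste confidential data into prompts" from a slide in a training deck into something the system can actually enforce at the point of use.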
Finally, for scenario 3, you need to collaborate with your developers to ensure that runtime queries from an AI system to your private data happen in a secure and controlled manner. Enforce guardrails so that the AI model doesn't inadvertently gain access to more data than intended.
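One common guardrail here is to scope every retrieval to the entitlements of the user making the request, so the model never receives documents the user couldn't access directly. A minimal sketch, assuming a toy document store with group-based ACLs (all names and contents below are made up for illustration):

```python
# Hypothetical sketch: a retrieval wrapper that filters documents by the
# requesting user's group entitlements BEFORE any relevance matching, so
# an AI model only ever sees data the user is already authorized to read.
# The document store, groups, and naive substring matching are assumptions.

DOCUMENTS = {
    "hr-handbook": {"groups": {"all-staff"}, "text": "PTO policy and benefits overview"},
    "salary-bands": {"groups": {"hr-only"}, "text": "Compensation band data by level"},
}


def retrieve_for_user(query: str, user_groups: set) -> list:
    """Return IDs of entitled documents that match the query."""
    results = []
    for doc_id, doc in DOCUMENTS.items():
        if not (doc["groups"] & user_groups):  # entitlement check comes first
            continue
        if query.lower() in doc["text"].lower():  # naive relevance match
            results.append(doc_id)
    return results


print(retrieve_for_user("policy", {"all-staff"}))            # handbook only
print(retrieve_for_user("band", {"all-staff", "hr-only"}))   # HR data visible to HR
```

The design choice worth noting: the access check runs before relevance ranking, not after. Filtering post-retrieval risks leaking restricted content into the model's context window even if it's later hidden from the user.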
In all three cases, it’s incumbent on enterprises to understand what the data is, how it’s being used, and the security or privacy implications of its use. Security teams need to proactively work with business teams to educate them about the risks of AI, and about how to use it safely.
Want to talk about this in more depth? Reach out, and I'd be happy to chat about how to better secure your enterprise's data as it flows through AI models.
This video is real…does it matter?
(plus: why the CIA triad needs an additional letter)