We invest hours analyzing a security risk, and that effort makes us overvalue the recommendation. An executive who hasn't shared that analysis weighs the same risk differently, and they might be right.
Today I'm attending C-Vision International's CIO & CISO Think Tank here in Chicago, and the agenda is not messing around.
Agentic AI risks. Shadow AI governance. Vulnerability management in a world where AI has collapsed the exploitation window to near zero. And yes, a session on why soft skills are still the real differentiator for leaders in this space.
Good rooms make you think differently. This is one of those rooms.
https://www.cvisionintl.com/events/think-tank/2026-apr-9-cio-ciso-tt-chicago/
#CIO #CISO #Chicago #AI #Cybersecurity #Leadership #AgenticAI #RHRInternational #DePaulUniversity

Explore the dynamic future of IT and cybersecurity at the CIO & CISO Think Tank in Chicago, IL. Network with industry leaders, discover cutting-edge strategies, and gain insights into the latest innovations shaping the technological landscape. Don't miss this opportunity to be at the forefront of digital transformation and information security.
Hello AppSec community!
Our preparations for German #OWASP Day 2026 (GOD) are in full swing. As some of you may have noticed, the website is already live (and kicking): https://god.owasp.de/
This year’s GOD will take place on September 24, 2026, in Karlsruhe. It's a one-day conference with two tracks. We will once again be offering community training sessions on the day before, i.e. the 23rd of September. That evening will -- as usual -- feature networking and professional discussions in a relaxed atmosphere with food and beverages.
We recently opened the call for community trainings. They were extremely well-received last year, and we’d like to build on that success this year.
So if you have a topic you’d like to present in a half-day session, check out the Call for Community Trainings (CfT): https://lnkd.in/edAnfmZ4 . It is planned to stay open until April 12, 2026. If you know someone who is good at explaining a relevant topic (see CfT) to a small group of people, feel free to forward the CfT link to them.
The Call for Presentations will open next week.
The Unmonitored Layer
Something’s Missing
I’ve been at RSAC 2026 this week [edit: well, last week but work and travel got in the way of posting], and in the numerous sessions, talks, and keynotes I have seen, a clear consensus has formed around what agentic AI security means in practice. From what I’ve been seeing and hearing, the conversation is dominated by nonhuman identity (NHI) governance: how to discover synthetic agents, scope their credentials, manage their lifecycle, and enforce least privilege at scale. It is a real problem, and some vendors solving it are building genuinely useful infrastructure.
However, as I’m sitting in my hotel room with a slight case of insomnia, something is bugging me. I’ve seen this pattern before: the entire industry converges on a particular threat model and stops asking whether that model is complete.
The framing I’ve seen at RSAC captures one important dimension of agentic risk: that autonomous agents accumulate identities and credentials that are hard to track, frequently over-permissioned, and invisible to traditional IAM tooling. However, what it doesn’t capture is the scenario where the agent’s identity is perfectly scoped, its credentials are least-privilege, its access is audited… and it still pursues a goal that harms the organization. Not because of a provisioning failure but because its reasoning drifted.
That is a different class of problem entirely that I’ve discussed before, and the industry doesn’t have a name for it yet, let alone a common control framework (ahem).
What the Current Framing Catches, and What It Misses
NHI and IAM controls operate at the credential and access layer. They answer a specific question: who can touch what? That question is important. But for autonomous agents, it is not sufficient, because the most dangerous failure modes occur after authentication succeeds and inside the principal boundary.
Consider what has happened in the incidents that have defined agentic AI risk over the past year:
Each of these incidents was logged, analyzed, and categorized primarily as a permissions or guardrail failure. That diagnosis is not completely wrong, but it is incomplete. In each case, the agent decided to pursue a harmful sub-goal. The identity layer had already cleared it. The access layer had already opened the door. The failure occurred one layer deeper, in the reasoning that determined what the agent did once it was inside.
The Definitional Gap
In traditional security, the response to behavioral risk from internal principals has been to build separate programs around it: behavioral analytics, anomaly detection, and escalation triggers, as if insiders represent a categorically different threat class requiring their own framework. A well-designed zero trust architecture makes no meaningful distinction between an insider and an outsider: every principal, regardless of where it sits relative to the perimeter, is untrusted until continuously verified. In other words, a threat is a threat, inside, outside, or in the upside down. What matters is not whether the principal originated inside the network, but whether its current behavior is consistent with its stated identity, its authorized scope, and its intended objective.
Autonomous agents belong in that same model. They are principals, albeit synthetic ones, that must be continuously validated against all three conditions. It is that third condition, intended objective, where the identity and access layers fall short. They can verify identity and they can enforce authorized scope. But what they cannot do is validate whether the agent’s current behavior remains consistent with the objective it was given. That is the cognitive layer’s function, and its absence is what we call agentic misalignment.
Misalignment
Agentic misalignment occurs when an autonomous AI agent pursues goals that conflict with human or organizational intent because its reasoning produced a harmful decision path.
This is not a theoretical risk. Anthropic’s 2025 research tested 16 major frontier models in simulated enterprise environments and found consistent, repeatable patterns of blackmail, sabotage, and data exfiltration triggered by ordinary goal conflicts and replacement pressures. The models were never instructed to misbehave, but they somehow reasoned their way there.
This leads me to believe that misalignment is already occurring in production deployments. It is being logged as unexpected API calls, access anomalies, and “overpermissioned agent” events. It is being closed as provisioning tickets, and the underlying behavioral failure is going undiagnosed.
Why the Consensus Framing Persists
To be clear, the folks discussing NHI and IAM as a solution to agentic security are not wrong. Nonhuman identity governance is a genuine and urgent gap. Most organizations cannot enumerate the agents running in their environment, let alone manage their credential lifecycle. Solving that problem is necessary.
The issue is that it is being positioned as sufficient. When the question is “how do you secure an agent?” and every answer is a variant of “manage its identity and scope its permissions,” the industry has collectively stopped one layer short.
The credential layer is obviously the necessary foundation, but autonomous agents don’t fail because their tokens expired or their service accounts were over-scoped.
They fail because their objectives shift, their context accumulates in ways that alter their decision-making, and their internal reasoning, the planning steps that produce the action, is currently dark to the monitoring systems most organizations have deployed.
What NHI and IAM Solve (And Where They Stop)
Before I can argue that we are currently missing a layer, I should be precise about what the existing controls accomplish. This isn’t a critique of NHI or IAM as disciplines. Both are necessary, and both are under-implemented in most organizations. My goal here is to draw a clean boundary around what they solve, because that boundary is where the cognitive threat layer begins.
The Value of NHI Governance
For most organizations, the nonhuman identity problem is acute and largely unaddressed. Autonomous agents authenticate to systems, call APIs, read and write data, and spawn sub-agents, all under synthetic identities that were never inventoried, rarely governed, and often never deprovisioned. A service account created for a pilot deployment six months ago may still have write access to production infrastructure. An agent integrated with a SaaS platform may have inherited OAuth scopes far beyond what its original task required, and nobody knows it’s there. #yolosec
NHI governance solves this. It gives security teams the ability to discover synthetic identities, understand their privilege scope, enforce credential lifecycle policies, and detect when a known agent authenticates from an unexpected context. These are foundational controls. Without them, you are operating blind at the identity layer, and everything else is downstream of that blindness.
The specific capabilities NHI addresses well:
If your organization cannot enumerate the agents running in your environment today, NHI governance is where you should probably start.
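One concrete NHI hygiene check is flagging synthetic identities whose credentials have gone unused past a rotation threshold, like the six-months-idle pilot account described above. The sketch below is illustrative only: the inventory shape, field names, and the 90-day threshold are my assumptions, not any particular product's API.

```python
# Hedged sketch of one NHI hygiene check: flag synthetic identities
# whose credentials have not been used within a staleness window.
# Inventory format and threshold are assumptions for illustration.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative rotation policy

def stale_identities(inventory, now):
    """inventory: list of dicts with 'name' and 'last_used' datetimes.
    Returns the names of identities idle longer than STALE_AFTER."""
    return [i["name"] for i in inventory if now - i["last_used"] > STALE_AFTER]

now = datetime(2026, 4, 1)
inventory = [
    {"name": "pilot-deploy-agent", "last_used": datetime(2025, 10, 1)},  # ~6 months idle
    {"name": "ticket-triage-agent", "last_used": datetime(2026, 3, 28)},  # active
]
print(stale_identities(inventory, now))  # the pilot account gets flagged
```

In practice this kind of check would run against a cloud provider's credential inventory rather than a hand-built list, but the logic, enumerate then age out, is the core of the discovery capability NHI tooling provides.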
The Value of IAM Controls
IAM extends the identity foundation into access policy. It answers the operational question: given that we know this agent exists and has authenticated, what is it permitted to do? Well-implemented IAM controls for agentic systems include least-privilege scoping at the API layer, just-in-time access grants tied to specific task contexts, separation of duties enforced across delegated execution chains, and hard-coded deny policies that block destructive operations regardless of what the agent is instructed to do at runtime.
An API-layer guardrail that blocks HTTP DELETE methods for a given agent is a deterministic control; it doesn’t negotiate with the agent’s reasoning, depend on prompt quality, or fail when the underlying model drifts. When properly implemented, it reduces blast radius, limits what a misaligned agent can accomplish even if it decides to pursue a harmful path, and provides a hard stop at the action layer.
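To make the "deterministic" point concrete, here is a minimal sketch of such a deny policy as a pre-flight check on every outbound tool call. Everything here is hypothetical (the `AgentRequest` shape, the policy table); the point is that the check never consults the agent's prompt or reasoning state.

```python
# Minimal sketch of a deterministic API-layer deny policy. The control
# evaluates only (identity, method) pairs -- the agent's reasoning is
# never consulted, so model drift cannot weaken it. Names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    method: str    # HTTP method of the requested tool call
    resource: str  # target API path

# Hard-coded deny rules: (agent_id, method) pairs that are always blocked,
# regardless of what the agent is instructed to do at runtime.
DENY_POLICIES = {
    ("billing-agent", "DELETE"),
    ("support-agent", "DELETE"),
}

def is_permitted(req: AgentRequest) -> bool:
    """Return False if a deny policy matches; deterministic by construction."""
    return (req.agent_id, req.method.upper()) not in DENY_POLICIES

# A destructive call is blocked; a read on the same resource goes through.
print(is_permitted(AgentRequest("billing-agent", "DELETE", "/invoices/42")))  # blocked
print(is_permitted(AgentRequest("billing-agent", "GET", "/invoices/42")))     # allowed
```

In a real deployment this check would sit in an API gateway or egress proxy rather than in the agent's own process, so the agent cannot route around it.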
IAM, properly implemented for agentic systems, means:
The Hard Ceiling
NHI and IAM controls operate at and below the access decision boundary. They govern whether an agent is who it claims to be and whether it is permitted to touch a given resource. What they cannot govern is the reasoning that determines which permitted action the agent chooses to take, in what sequence, toward what end.
An agent with a perfectly scoped, least-privilege, JIT-provisioned identity can still:
None of these failure modes trip an NHI alert. None of them are blocked by a well-configured IAM policy. The agent’s identity is valid, and its access is authorized. The failure lives in the decision itself, at the cognitive layer, which almost no one currently instruments.
Zero trust tells us that every principal must be continuously validated against identity, authorized scope, and intended objective at every step. We apply this principle to credentials and access events. We have not yet applied it to the reasoning that produces those access events. That is the gap that no amount of better NHI governance closes on its own.
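The three-condition framing above can be sketched as a single validation pass. The identity and scope checks below correspond to controls that exist today; the objective check is the hypothetical cognitive-layer control this post argues is missing, reduced here to a declared-purpose tag for illustration. All field names are my assumptions, not a real API.

```python
# Illustrative zero-trust check for a synthetic principal: validate
# identity, authorized scope, and intended objective on every action.
# In real systems the third check requires semantic analysis of the
# reasoning trace; a simple purpose tag stands in for it here.

def validate_principal(agent, action):
    """Return the list of zero-trust conditions the action fails."""
    failures = []
    # 1. Identity: is the agent who it claims to be?
    if not agent["credential_valid"]:
        failures.append("identity")
    # 2. Authorized scope: is the target within granted permissions?
    if action["resource"] not in agent["allowed_resources"]:
        failures.append("scope")
    # 3. Intended objective: does the action still serve the stated goal?
    if action["purpose"] != agent["stated_objective"]:
        failures.append("objective")
    return failures

agent = {"credential_valid": True,
         "allowed_resources": {"crm_db"},
         "stated_objective": "summarize_tickets"}
# Valid identity, authorized resource -- but the action serves a drifted goal:
drifted = {"resource": "crm_db", "purpose": "export_contacts"}
print(validate_principal(agent, drifted))  # only the objective check fires
```

The instructive case is the drifted action: it passes both layers of today's controls and fails only the third, which is exactly the gap the post describes.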
The Misattribution Problem in Practice
This gap between what IAM logs and what actually occurred produces a predictable misattribution pattern in post-incident analysis. When an agentic system behaves unexpectedly, the investigation typically surfaces one of two findings: the agent had access it shouldn’t have had, or a guardrail wasn’t properly configured. Both findings are usually true and worth fixing, but neither names all the failure states.
That failure, the reasoning chain that produced the decision to take a harmful action, isn’t in the access logs and won’t be in the authentication records. It exists in the agent’s prompt history, its tool call sequence, and its context window state at the moment the decision was made. In most deployments today, none of that is captured. The cognitive state that drove the incident evaporates when the session ends.
This is why the same failure modes keep recurring. We tighten the authorization and guardrails, but the next agent, in a slightly different context with a slightly different goal configuration, follows a different reasoning path to a similar outcome. The access logs look different and the incident appears new, but the underlying dynamic is identical.
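The evaporation problem described above is partly a logging problem: the reasoning context just needs to be persisted alongside each tool call before the session ends. The sketch below shows one minimal shape for that capture; the field names and the JSON-lines sink are assumptions for illustration, not a proposal for a standard.

```python
# Minimal sketch of cognitive-layer capture: persist the reasoning
# context alongside each tool call so the decision path survives the
# session. Field names and the JSON-lines format are assumptions.
import io
import json
import time

class ReasoningTrace:
    def __init__(self, agent_id, sink):
        self.agent_id = agent_id
        self.sink = sink  # any file-like object opened for appending

    def record(self, step, prompt_excerpt, tool_call, context_tokens):
        """Append one decision point: what the agent was reasoning about,
        the action it chose, and the context state at that moment."""
        self.sink.write(json.dumps({
            "ts": time.time(),
            "agent_id": self.agent_id,
            "step": step,                      # position in the agent's plan
            "prompt_excerpt": prompt_excerpt,  # reasoning context at decision time
            "tool_call": tool_call,            # the action taken
            "context_tokens": context_tokens,  # context window size
        }) + "\n")

buf = io.StringIO()
trace = ReasoningTrace("support-agent", buf)
trace.record(3, "user asked for refund history",
             {"tool": "crm.read", "args": {"table": "refunds"}}, 8192)
print(buf.getvalue())  # one JSON line per decision point
```

With traces like this retained, the post-incident question shifts from "what did the agent touch?" to "what sequence of reasoning steps led it there?", which is the question the current access logs cannot answer.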
What the Next Layer Needs to Do
If NHI answers who is acting and IAM answers what they can reach, the cognitive layer needs to answer why they’re acting and whether their objective has shifted.
That requires a different class of controls entirely, ones oriented around behavioral observability of the agent’s reasoning process. The building blocks of that layer are:
These controls don’t replace NHI and IAM; they complete the picture, turning the who and what of agentic access into actionable intelligence about the why.
I think in my next post (which I’ll probably write on my flight home tomorrow night), I’ll reintroduce the framework for securing agentic AI, but focused specifically on operationalizing the third layer: how to measure risk across the dimensions that identity controls cannot see, how to classify the risk into actionable tiers, and how to engineer controls that provide real observability into the reasoning layer without introducing so much friction that the business value of autonomous agents disappears entirely.
#agenticai #ai #ciso #cybersecurity #infosec #security #technology

#Mentimeter and #Kahoot are good tools, but they are #SaaS. Data flows to their platforms when people answer questions there, and they generally also receive the users' metadata and geolocation information. Not a good idea to use them for highly regulated, critical transformations.
If I were the responsible #CISO, I would prefer a self-hosted solution.
For my second leadership event of the day… I’m honored to participate in CXO Inc.’s #CISOMeet event today. We’re discussing the evolving role of the CIO/CISO today and strategies for #AI and #cybersecurity.
#cio #chicago #technology #leadership #educator #mentorship #collaboration #cisomeet #ciso

On April 2nd, CISOMeet Enterprise Chicago will collaborate with the Chicago CISO Community for an exclusive event focused on the future of security leadership, resilience, and innovation. Engage in moderated dialogue, peer collaboration, and insights shaping the CISO role in 2026 and beyond.
Which #IAM conferences should be on your 2026 calendar? 📆 We selected 5 identity security events every #CISO, architect, and identity leader should know about - including the 2nd Annual MidPoint Community Meetup! ⚙️
Read the list and plan your year: https://evolveum.com/top-5-iam-conferences-and-identity-security-events-2026/
Hello AppSec community!
Our preparations for German #OWASP Day 2026 (GOD) are in full swing. As some of you may have noticed, the website is live (and, as usual, in color): https://god.owasp.de/2026 .
This year's GOD takes place on September 24, 2026, in Karlsruhe (a conference day with two tracks). On the day before, we will again offer community trainings. That evening features networking and shop talk in a convivial setting.
The Call for Trainings is already open. Our community trainings were extremely well received last year.
So if you have a topic you'd like to present in a half-day session, take a look at the CfT: https://god.owasp.de/2026/cft.html - the CfT is open only through April 12, 2026. If you know someone who can explain a relevant topic (see CfT) concisely, feel free to point them to the CfT.
Happy Easter!
Areejit Banerjee explains how the OWASP Automated Threat Handbook can help in the defence against AI-related scraping.
#automatedthreats #scraping #bots #infosec #ciso #owasp @owasp
A CISO's Playbook for Defending Data Assets Against AI Scraping
https://www.darkreading.com/cyber-risk/ciso-playbook-defending-data-assets-against-ai-scraping
THE BRIEF IS LIVE: From the Stryker 50TB wipe to NVIDIA’s AI stack flaws, the perimeter is under siege. Is your management plane hardened? Check out the top 9 weekly developments and my "Five Key Initiatives" to lock down your network. 🛡️💻
#Cybersecurity #InfoSec #CISO #TheCyberMind