Dan Kennedy    

452 Followers
140 Following
548 Posts
AppDev, AppSec VP, FinCo CISO, now Research. Spend my days talking to CISOs. Tweets and opinions are my own. #infosec
Blog: http://www.praetorianprefect.com
Twitter: http://www.twitter.com/danielkennedy74
LinkedIn: https://www.linkedin.com/in/danieltkennedy/
Publicly available research: https://blog.451alliance.com/author/dkennedy/

Honestly, in a sea of lame superficial AI labor replacement takes, it was refreshing to see something at #RSAC that drives at an outcome that will actually resonate with SOC folks.

“Christ you’ve gotten big, Timmy. What’s that glowing yellow thing that’s hurting my eyes?”

Seen on the floor #RSAC2026, solid NJ band. Fun fact, they used my old basement TV in one of their videos. Well, fun for me anyway…

𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗳𝗼𝗿 𝗔𝗜 𝗶𝘀 𝗰𝗿𝗲𝗮𝘁𝗶𝗻𝗴 𝗮𝗻 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲 𝗽𝗮𝗿𝗮𝗱𝗼𝘅

Three years ago, early generative AI integrations in security operations platforms primarily took the form of chat interfaces within their tooling ecosystem. These interfaces enabled natural language queries, incident summarization and the potential automation of routine investigative tasks. Vendors framed early use cases around the ability to uplevel junior or Tier 1 analysts in security operations centers (SOCs). Several years into broader GenAI and agentic integrations, that upskilling narrative appears displaced. Security leaders now report that the primary beneficiaries of AI-assisted workflows are senior analysts rather than junior staff. About 72% of respondents to this study note that senior professionals, who recognize hallucinations in output and can course-correct in prompts, benefit most from leveraging AI integrations. Only 28% believe junior employees derive the primary benefit, generating output with AI they wouldn’t otherwise be able to produce. The implications of this are profound in security and beyond. AI may compress the labor hierarchy by automating tasks that were once performed by trained future experts.

Human intervention in AI technology continues to be necessary for optimal results. The results from our Organizational Behavior 2025 survey are not entirely unexpected: If humans will remain “in the loop” to check the results of AI, it will be seasoned experts, humans who have built up tacit knowledge through thousands of repetitions of the work that AI now performs, who will most readily differentiate correct from incorrect results. Moreover, they can offer course correction and evaluate the results of multiple models to determine the best fit for any task. Research also suggests that giving AI models more sophisticated prompts improves the likelihood of receiving comprehensive and correct results.

AI is already affecting the entry-level hiring market, raising several serious questions. If the lower rungs of career ladders are knocked out by AI taking over tasks that were formative learning opportunities for new employees, what will replace this knowledge-creation activity? Who will be the senior employees to provide the necessary human-in-the-loop functions if people do not have paths to gain that experience? Even major AI developers have begun examining this issue. Research released by Anthropic found that programmers who rely heavily on AI assistance perform significantly worse when later asked to explain or reason about the code produced. That suggests that as automation increases, engineers must retain the ability to detect errors and guide model output. This is a skill that will erode, or may never be built up in the first place, if uncritical over-reliance on AI output becomes the norm.

https://blog.451alliance.com/security-for-ai-is-creating-an-enterprise-paradox/

i feel like some percentage of the vibe coded successes enterprise CIOs and CTOs are claiming are like Saddam Hussein's scientists lying to him about how close they are to having working WMDs.

They're always 'internal tools', and when you ask for details, the details are scant.

CTO: "Yeah boss, we're doing great things down here, it's a real paradigm shift, our devs don't even write code anymore. The AI is manifesting the app, sensing intent, generating in real time."

CEO: "Great, can I see it?"

CTO: "Um, no."

CEO: "This is 80,000 lines that say 'hello world' with emojis?"

CTO: "I need more budget for API credits."

To be sure, Claude Code and the rest are useful as hell, but I can't take any more of the hype stories without the proofs.

This AI stuff narrative with respect to the stock market is getting so dumb...

https://seekingalpha.com/news/4554814-cybersecurity-stocks-fall-after-anthropic-unveils-claude-code-security

None of those companies are doing what Claude Code Security is trying to accomplish. The public companies that do AST/SCA have it as a small part of their offering. The others are all private (PE, VC).

Even if you could somehow weave some 'there will be less firefighting if code is perfect' fantasy, it falls down immediately when you consider the significant percentage of security incidents that don't start with a code vulnerability.

The AI responds to insults, but idk...the Skynet thing...

So...less emergent AI and more a giant API key leaking machine? (being facetious, but beware the vibe)

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys

I don't usually get a laugh from auto-reply emails, but this one struck me.

Basically, "I don't do email, I'll reply when I get around to it."

Is it wrong to find this honesty strangely refreshing?

Damn the economists, build the data centers!

IoT systems, along with internet-accessible operational technology (OT) deployments that bypass legacy isolation, represent an ongoing information security challenge. Key factors include the variety of non-standard endpoints, legacy approaches to remote access for maintenance, adjacency to vulnerable IT networks, and an ever-increasing attack surface as technology modernizes. While vendor solutions continue to grow, so does the threat landscape, exemplified by malware targeted at industrial control systems and the formation of large IoT botnets. Because these systems are vital to critical infrastructure, they are often a target for nation-state actors.

In recent years, the US Cybersecurity and Infrastructure Security Agency (CISA) and former government officials have issued warnings regarding advanced persistent threat actors such as Volt Typhoon, allegedly backed by the Chinese state, that have infiltrated and maintained access to critical infrastructure, including energy and water utility systems. Ransomware attacks that move laterally from IT to OT networks, or that use the former to disrupt the latter, remain an issue, following the model of the 2021 Colonial Pipeline attack and continuing with breaches such as the one affecting major US steel producer Nucor Corp. in 2025. Canadian utility Nova Scotia Power also reported a ransomware attack that disrupted the ability to read billing information from customer smart electrical meters.

When survey respondents are asked to identify threats to IoT systems, the discussion quickly extends past IoT endpoints themselves. In a 2023 study conducted by S&P Global Market Intelligence 451 Research, the top cited IoT security threat was unpatched application security vulnerabilities, reflecting the difficulty of patching IoT and OT devices after they have been deployed. That dropped to second in 2024, superseded by attacks against a centralized control point, reflecting potential shifts in the behavior of threat actors. In our latest study, vulnerable IoT databases or data stores (32%) have risen to become the top concern. Unpatched application security vulnerabilities (28%) remain second, followed by attacks against unsecured networks between device endpoints and central control points (27%).

https://blog.451alliance.com/transition-from-isolation-to-exposure-brings-evolving-threats-to-iot-and-ot-systems/