Dan Kennedy    

450 Followers
140 Following
547 Posts
AppDev, AppSec VP, FinCo CISO now Research. Spend my days talking to CISOs. Tweets and opinions are my own, a10wn. #infosec
Blog: http://www.praetorianprefect.com
Twitter: http://www.twitter.com/danielkennedy74
LinkedIn: https://www.linkedin.com/in/danieltkennedy/
Publicly available research: https://blog.451alliance.com/author/dkennedy/
Seen on the floor #RSAC2026, solid NJ band. Fun fact, they used my old basement TV in one of their videos. Well, fun for me anyway…
Let me Delve into this SOC2 report you just sent...

๐—ฆ๐—ฒ๐—ฐ๐˜‚๐—ฟ๐—ถ๐˜๐˜† ๐—ณ๐—ผ๐—ฟ ๐—”๐—œ ๐—ถ๐˜€ ๐—ฐ๐—ฟ๐—ฒ๐—ฎ๐˜๐—ถ๐—ป๐—ด ๐—ฎ๐—ป ๐—ฒ๐˜…๐—ฝ๐—ฒ๐—ฟ๐˜๐—ถ๐˜€๐—ฒ ๐—ฝ๐—ฎ๐—ฟ๐—ฎ๐—ฑ๐—ผ๐˜…

Three years ago, early generative AI integrations in security operations platforms primarily took the form of chat interfaces within their tooling ecosystems. These interfaces enabled natural language queries, incident summarization and the potential automation of routine investigative tasks. Vendors framed early use cases around the ability to uplevel junior or Tier 1 analysts in security operations centers (SOCs). Several years into broader GenAI and agentic integrations, that upskilling narrative appears displaced. Security leaders now report that the primary beneficiaries of AI-assisted workflows are senior analysts rather than junior staff. About 72% of respondents to this study note that senior professionals, who recognize hallucinations in output and can course-correct in prompts, benefit most from leveraging AI integrations. Only 28% believe junior employees derive the primary benefit, generating output with AI they wouldn't otherwise be able to produce. The implications of this are profound in security and beyond. AI may compress the labor hierarchy by automating tasks that were once performed by trained future experts.

Human intervention in AI technology continues to be necessary for optimal results. The results from our Organizational Behavior 2025 survey are not entirely unexpected: If humans are to remain "in the loop" to check the results of AI, it will be seasoned experts, humans who have built up tacit knowledge through thousands of repetitions of the work that AI now performs, who will most readily differentiate correct from incorrect results. Moreover, they can offer course correction and evaluate the results of multiple models to determine the best fit for any task. Research also suggests that giving AI models more sophisticated prompts improves the likelihood of receiving comprehensive and correct results.

AI is already affecting the entry-level hiring market, raising several serious questions. If the lower rungs of career ladders are knocked out by AI taking over tasks that were formative learning opportunities for new employees, what will replace this knowledge-creation activity? Who will be the senior employees to provide the necessary human-in-the-loop functions if people do not have paths to gain that experience? Even major AI developers have begun examining this issue. Research released by Anthropic found that programmers who rely heavily on AI assistance perform significantly worse when later asked to explain or reason about the code produced. That suggests that as automation increases, engineers must retain the ability to detect errors and guide model output. This is a skill that will erode, or may never be built up in the first place, if uncritical over-reliance on AI output becomes the norm.

https://blog.451alliance.com/security-for-ai-is-creating-an-enterprise-paradox/

At the airport:

"Is this the end of the group 2 line?"

"I don't know, I'm group 5, I just get on whatever line."
/returns to cell phone call
"So anyway, I got a full scholarship to the best MBA program in the country."

---

Provides some idea of how business decisions get made…

And in 'easily predictable outcomes' news, thanks again chainsaw guy, will mop person ever be making an appearance?

https://techcrunch.com/2026/03/10/doge-employee-stole-social-security-data-and-put-it-on-a-thumb-drive-report-says/

DOGE employee stole Social Security data and put it on a thumb drive, report says | TechCrunch

A whistleblower is accusing a former DOGE member of stealing a large number of Americans' personal data while he was working at the Social Security Administration, with the plan of using it at his new job.

TechCrunch

RE: https://infosec.exchange/@danielkennedy74/116133378608952412

Also, when it comes to lethality, it will never be ready to operate without a 'human in the loop', and I'm not sure why a whole lot of innocent people will have to die before we come to that conclusion.

The script to Terminator isn't a defense plan.

We can just, you know, think ahead, and start writing the mutual arms treaties now.

So again... we're OK with autonomous AI in these scenarios because 'that's what the enemy will do', or the rule of law, or something? Even though one of the big AI innovators, maybe the big one when it comes to actual technical chops, says it's definitely not ready for that, and he's now unintentionally in a pissing contest with an ex-cable-news host?

I mean, you know it's not in his business interest to publicly say it's not ready, and engage in this stand off, and he's worried enough to be doing it anyway.

Just checking...

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

AIs can't stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

New Scientist

So the other AI companies are all good with lethal autonomy, if Anthropic is such an outlier? Super...

https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei

Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to comply with demands to peel back safeguards on its AI model or risk losing a Pentagon contract.

CNN