Dave Ward

@ExileDev8668
1 Follower
1 Following
3 Posts

I've spent over a decade securing privileged access for organisations that can't afford to get it wrong.

My specialism is CyberArk: vault architecture, IAM, PAM, and AI security.

https://www.linkedin.com/in/dave-ward-17030278?utm_source=share_via&utm_content=profile&utm_medium=member_android

LinkedIn: http://bit.ly/41kW6XP
BlueSky: https://bit.ly/exiledev8668

JIT access reduces privileged access windows from "always on" to "30 minutes when needed." CyberArk Secure Cloud Access and Microsoft PIM both support this: request access for a defined period, auto-revoke when it closes.

The resistance is cultural, not technical. Administrators accustomed to permanent access see request workflows as overhead. That overhead is the security model working as designed.

#ZeroTrust #JustInTimeAccess #PAM

In hybrid environments, a single on-prem service account can authenticate to Azure AD, trigger an API call to AWS, and access a database in a third environment through trust relationships nobody mapped when the integrations were built.

Discovery exercises routinely find accounts created by staff who left years ago, still holding domain admin rights because that was quickest at the time. Credentials sit in config files on shared drives, authenticating every few minutes.

#ServiceAccounts #PAM #CloudSecurity
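A discovery pass like the one described reduces to a simple triage over an account inventory. A minimal sketch, with a made-up inventory format and thresholds (any real exercise would pull from AD/LDAP and vault audit data):

```python
from datetime import datetime, timedelta

# Hypothetical inventory rows: (account, owner_still_employed, privilege, last_credential_change)
ACCOUNTS = [
    ("svc-backup", False, "domain-admin", datetime(2019, 3, 1)),
    ("svc-etl",    True,  "db-reader",    datetime(2025, 6, 1)),
    ("svc-legacy", False, "domain-admin", datetime(2016, 8, 15)),
]

def flag_risky(accounts, now, max_age=timedelta(days=365)):
    """Flag accounts whose owner has left, that hold domain admin,
    or whose credential has not rotated within max_age."""
    risky = []
    for name, owner_active, priv, last_change in accounts:
        reasons = []
        if not owner_active:
            reasons.append("owner left")
        if priv == "domain-admin":
            reasons.append("domain admin")
        if now - last_change > max_age:
            reasons.append("stale credential")
        if reasons:
            risky.append((name, reasons))
    return risky

for name, reasons in flag_risky(ACCOUNTS, now=datetime(2026, 1, 1)):
    print(name, "->", ", ".join(reasons))
```

The hard part in practice isn't this filter — it's building the inventory at all, which is exactly what the unmapped trust relationships in the previous post make difficult.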

Meta's HyperAgents paper: AI agents that rewrite their own approach based on what worked, develop persistent memory of target environments, and transfer meta-strategies to new attack surfaces.

PAM session management assumes a human. Credential rotation assumes human timelines. Machine identity governance hasn't accounted for identities that autonomously evolve their behaviour.

https://arxiv.org/abs/2603.19461

#AI #CyberSecurity #PAM #MachineIdentity
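One concrete response to "credential rotation assumes human timelines" is to stop rotating at all and issue per-session ephemeral credentials instead. A sketch, with illustrative names and TTLs (not any real PAM product's API):

```python
import secrets

# Human-paced policy: rotate a long-lived secret every 90 days.
HUMAN_ROTATION_S = 90 * 24 * 3600
# Machine-paced alternative: a credential scoped to one session, minutes long.
MACHINE_TTL_S = 5 * 60

def issue_credential(identity: str, ttl_s: int, now: float) -> dict:
    """Mint a random token honoured only until now + ttl_s."""
    return {
        "identity": identity,
        "token": secrets.token_hex(16),
        "expires_at": now + ttl_s,
    }

def is_valid(cred: dict, now: float) -> bool:
    return now < cred["expires_at"]

cred = issue_credential("agent-7", MACHINE_TTL_S, now=0.0)
print(is_valid(cred, now=60.0))    # valid within the session
print(is_valid(cred, now=600.0))   # expired, with no rotation job involved
```

Short TTLs don't solve governance for self-modifying agents, but they shrink the window in which a stolen or evolved-around credential is useful from months to minutes.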

Hyperagents

Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce hyperagents, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification, improving not only the task-solving behavior, but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, the DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, the DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.

arXiv.org