Jerrad Dahlager

13 Followers
14 Following
16 Posts
Cloud Security Architect | Adjunct Instructor | Writing about cloud security for the curious 🐱 | CISSP | CCSP | MN Sports ⚾ | nineliveszerotrust.com

I deployed Microsoft Entra Prompt Shield end-to-end and tested it against real jailbreak payloads across supported AI traffic, including ChatGPT and Gemini in my lab.

Prompt Shield inspects AI traffic at the network layer using TLS inspection and conversation schemes, allowing adversarial prompts to be blocked before they reach the model while clean traffic passes through transparently.

Instead of building defenses into every application independently, you can apply one policy across multiple AI services. That’s a meaningful step toward giving security teams better visibility into AI usage.

I published the full deployment, testing, and results in my blog below:

https://nineliveszerotrust.com/blog/prompt-shield-network-ai-gateway/

#AISecurity #PromptInjection #ZeroTrust #MicrosoftEntra #CloudSecurity

Block Prompt Injection at the Network Layer with Entra Prompt Shield

Deploy Microsoft Entra Internet Access Prompt Shield to block prompt injection and jailbreak attacks at the network layer before they reach the AI model. Full hands-on lab with TLS inspection, conversation schemes for ChatGPT/Claude/Gemini/Deepseek, and a comparison with app-level LLM firewalls.

I built a custom Microsoft Sentinel data connector with no Azure Functions, Logic Apps, or compute costs.

Custom connector development in Sentinel has always meant wiring up a DCE, DCR, a custom table, an Entra app, a client secret, and RBAC just to start ingesting data. CCF Push mode changes that. With one click in the Data Connectors gallery, Sentinel provisions all of that for you.

I tested it with abuse.ch Feodotracker, a free feed tracking live botnet C2 infrastructure. No synthetic data, no toy examples.

- 4 JSON connector artifacts and a Python sender using OAuth 2.0
- 5 KQL analytics rules mapped to MITRE ATT&CK
- 5 hunting queries and a Sentinel workbook
- GitHub Actions ingestion every 6 hours at zero compute cost

The detection I'm most proud of is a KQL rule that joins C2 IPs against your live network traffic to answer one question: is any device in my environment talking to a confirmed botnet C2 server?

If you're still using the legacy Data Collector API, heads up. It retires September 14, 2026. CCF Push is the path forward and honestly a massive improvement.

Everything is open source. Fork the repo, add your credentials, and you have a working threat intelligence connector.

Blog: https://lnkd.in/g5R_gee8
Repo: https://lnkd.in/g5whjgPw

Microsoft warned about OAuth redirect abuse on March 2, 2026. This isn't credential theft or classic token theft by itself. It weaponizes Entra ID error handling.

An attacker registers an OAuth app with a malicious redirect URI, sends a crafted login.microsoftonline.com link designed to fail, and Entra ID's 302 redirect lands the victim on a phishing page or malware dropper. The sign-in fails and the attacker still wins.

I built a detection and hardening kit you can deploy to an existing Sentinel workspace:

• 4 analytics rules: consent after risky sign-in, suspicious redirect URIs, OAuth error clustering, bulk consent

• 5 hunting queries: permissions baseline, non-corporate IP auth, high-privilege apps, URI inventory, token replay

• 1 workbook: OAuth Security Dashboard
Entra hardening: verified-publisher consent restriction, MFA policy for risky OAuth sign-ins

• OAuth app audit: flags suspicious redirect URIs and overprivileged permissions across app registrations

Blog post: https://nineliveszerotrust.com/blog/oauth-redirect-abuse-sentinel/

Companion lab on GitHub: https://github.com/j-dahl7/oauth-redirect-abuse-sentinel

#MicrosoftSentinel #EntraID #DetectionEngineering #OAuth #IdentitySecurity #BlueTeam

Detecting OAuth Redirect Abuse with Microsoft Sentinel and Entra ID

Microsoft warned about OAuth redirect abuse enabling phishing and malware delivery. Build Sentinel analytics rules, hunting queries, a security workbook, and Entra ID hardening policies to detect and prevent this technique in your tenant.

Microsoft is rolling out two Entra ID changes this spring that take effect automatically.

Passkey profiles move to GA in March. Tenants that do not opt in will be auto-migrated starting in April (through late May for Worldwide, late June for GCC/GCC High/DoD). If attestation is disabled, synced passkeys become allowed by default, meaning credentials can sync via iCloud Keychain and Google Password Manager without an explicit decision to allow synced passkeys.

Conditional Access is closing an enforcement gap starting March 27. Policies targeting "All resources" with resource exclusions will now enforce on sign-ins where apps request only OIDC or limited directory scopes. These flows were previously not being evaluated..

I published a breakdown covering:

• Auto-migration logic and default configuration behavior
• PowerShell scripts to audit your tenant
• A three-profile passkey architecture for role-based separation
• How to identify affected Conditional Access policies
• Key gotchas (silent campaign shifts, retroactive AAGUID removal, destructive preview opt-out)

The post includes links to MC1221452, the Microsoft Tech Community announcement, and the relevant Microsoft Learn documentation.

https://nineliveszerotrust.com/blog/entra-march-2026-passkeys-ca/

#EntraID #Identity #ZeroTrust #Passkeys #ConditionalAccess #CloudSecurity #MFA

Azure PIM solves just-in-time access for humans. I wanted to bring that same pattern to non-human identities.

PIM handles just-in-time access for humans. For non-human identities like AI coding agents, backup automation, and CI/CD pipelines, it breaks down. Service principals can’t activate PIM roles, so they end up with standing permissions they use for minutes per day.

A backup job running at 2 AM has Key Vault access around the clock. An AI agent deploying infrastructure has permanent Contributor for a 10-minute task. That’s a lot of unnecessary exposure.

So I built a Zero Standing Privilege gateway where I use an Azure Function that brokers access for service principals and other NHIs. The caller requests access through an API, receives a scoped role assignment for a short window, and a Durable Functions timer revokes it automatically. Everything is logged for a full audit trail.

The write-up includes the full architecture and a working lab with Bicep, PowerShell, and Python.

https://nineliveszerotrust.com/blog/zero-standing-privilege-azure/

#ZeroTrust #Azure #CloudSecurity #IdentitySecurity #EntraID #DevSecOps #IAM #AIAgents

Since AWS re:Invent, I've been exploring patterns for securing LLM-integrated applications. Prompt injection remains a top concern, and OWASP ranks it #1 (LLM01) in their Top 10 for LLM Applications.

In my latest blog post, I walk through building a serverless edge prompt filter (API Gateway → Lambda → DynamoDB) that sits between users and your LLM backend. Think WAF-style first-pass filtering for LLM inputs:

• Detects instruction overrides and common jailbreak patterns
• Flags or blocks PII in prompts
• Logs all detections to DynamoDB for analysis and trending

This complements managed controls such as Bedrock Guardrails by enabling fast pattern matching at the edge before deeper semantic analysis. Designed as one layer in a defense-in-depth architecture.

Full post + hands-on Terraform lab: https://nineliveszerotrust.com/blog/llm-prompt-injection-firewall/

What tools or patterns are you using to protect against prompt injection?

#AISecurity #AWS #CloudSecurity #LLM #PromptInjection #GenAI #OWASP

Happy New Year! 🎉

Microsoft's Sentinel MCP Server went GA on November 18, 2025.

MCP (Model Context Protocol), an open standard from Anthropic, enables AI agents to query your Sentinel data lake using natural language. The AI generates KQL for you behind the scenes. The CSA tracked more than 16,000 MCP servers within 8 months of release. Adoption is outpacing security guidance.

The attack surface is real. Sentinel logs contain attacker-influenced fields like email subjects, command lines, and user agents. When AI processes this data, prompt injection becomes possible. The Supabase MCP incident demonstrated this exact pattern.

It's Microsoft-hosted, but you own the risk. You configure identity, client environment, and monitoring. Data returned is scoped to the caller's permissions.

Full walkthrough (setup, vectors, hardening):

https://nineliveszerotrust.com/blog/sentinel-mcp-server-security/

#MicrosoftSentinel #AISecurity #MCP #ZeroTrust #InfoSec #CyberSecurity #PromptInjection

For the last couple of weeks, I've been deep diving into container supply chain security.

I built a full GitHub Actions demo pipeline:

• Vulnerability scanning

• SBOM generation

• Keyless signing + attestations

• SLSA build provenance

The stack: Trivy, Syft, Cosign, and Sigstore.

Zero long-lived secrets. GitHub Actions uses OIDC to obtain a short-lived certificate, signs the image (and publishes attestations), and records everything in a public transparency log. No keys to rotate or leak.

The post also covers hardened base images (distroless and Docker's new Hardened Images) and how to enforce signatures on the consumer side with Kubernetes admission policies.

Blog + companion repo to fork: https://lnkd.in/gtdNYWW8

#SupplyChainSecurity #SBOM #Sigstore #GitHubActions #DevSecOps #ZeroTrust

Mic-E-Mouse: Repository Containing Implementations and experiments related to the Mic-E-Mouse side-channel attack(s)
https://github.com/AICPS/Mic-E-Mouse
GitHub - AICPS/Mic-E-Mouse: Repository Containing Implementations and experiments related to the Mic-E-Mouse side-channel attack(s).

Repository Containing Implementations and experiments related to the Mic-E-Mouse side-channel attack(s). - AICPS/Mic-E-Mouse

GitHub

A common Terraform misconception: sensitive redacts output, not state.

The `sensitive = true` flag only redacts CLI/UI output. If Terraform needs the value to manage infrastructure, that value can still be stored (and is retrievable) in state files and saved plan files so treat those artifacts as sensitive.

Terraform 1.11 adds write-only arguments (where supported by your provider/resource), so you can pass secrets to managed resources without persisting those values in Terraform plan/state artifacts.

Here’s how it works:
• Pass secrets to `_wo` arguments (e.g., `secret_string_wo`)
• Terraform sends the value to the provider during the run
• Terraform discards the value afterward (rotate via a companion `_wo_version` argument)

I wrote a hands-on guide with AWS + Azure examples, plus a companion lab repo so you can see the difference yourself.

If you’ve ever wondered, “Wait… are my secrets actually in that state file?” now you know and now you can fix it.

Great work, HashiCorp.

https://nineliveszerotrust.com/blog/terraform-secrets-write-only/

#Terraform #CloudSecurity #DevSecOps #AWS #Azure #InfrastructureAsCode