โšก Fresh Talk Alert for BSides Luxembourg 2026!

โ€œ๐—•๐—˜๐—ฌ๐—ข๐—ก๐—— ๐—ง๐—›๐—˜ ๐—ฃ๐—ฅ๐—ข๐— ๐—ฃ๐—ง: ๐—” ๐—™๐—ฅ๐—”๐— ๐—˜๐—ช๐—ข๐—ฅ๐—ž ๐—™๐—ข๐—ฅ ๐—”๐—š๐—˜๐—ก๐—ง๐—œ๐—– ๐—”๐—œ ๐—”๐—ง๐—ง๐—”๐—–๐—ž ๐—”๐—ก๐—— ๐——๐—˜๐—™๐—˜๐—ก๐—ฆ๐—˜ ๐—ฆ๐—ง๐—ฅ๐—”๐—ง๐—˜๐—š๐—œ๐—˜๐—ฆโ€ โ€“ ๐—๐—˜๐—ฅ๐—˜๐— ๐—ฌ ๐—ฆ๐—ก๐—ฌ๐——๐—˜๐—ฅ

As AI systems evolve into autonomous agents capable of executing code, calling APIs, and managing long-term memory, the attack surface extends far beyond prompt injection and jailbreaks. This AI Security Village session explores a full-stack approach to securing agentic AI systems.

Jeremy Snyder will break down how attackers target not just the LLM itself, but the broader agent architecture โ€” including tools, memory, workflows, and cross-system integrations. The session introduces a practical framework for assessing agent attack surfaces, validating outputs, enforcing constraints during system handoffs, and building more resilient AI-driven applications.

Jeremy Snyder is the founder and CEO of FireTail, an AI security platform focused on securing modern AI applications and autonomous systems.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #AgenticAI #LLMSecurity #CyberSecurity #AppSec #OWASP

โšก Fresh Talk Alert for BSides Luxembourg 2026!

โ€œ๐—˜๐—ฉ๐—˜๐—ฅ๐—ฌ ๐—š๐—จ๐—”๐—ฅ๐——๐—ฅ๐—”๐—œ๐—Ÿ ๐—˜๐—ฉ๐—˜๐—ฅ๐—ฌ๐—ช๐—›๐—˜๐—ฅ๐—˜ ๐—”๐—Ÿ๐—Ÿ ๐—”๐—ง ๐—ข๐—ก๐—–๐—˜: ๐——๐—˜๐—ฆ๐—œ๐—š๐—ก๐—œ๐—ก๐—š ๐—”๐—ก๐—— ๐—ง๐—˜๐—ฆ๐—ง๐—œ๐—ก๐—š ๐—š๐—จ๐—”๐—ฅ๐——๐—ฅ๐—”๐—œ๐—Ÿ๐—ฆ ๐—™๐—ข๐—ฅ ๐—Ÿ๐—Ÿ๐—  ๐—”๐—ฃ๐—ฃ๐—Ÿ๐—œ๐—–๐—”๐—ง๐—œ๐—ข๐—ก๐—ฆโ€ โ€“ ๐——๐—ข๐—ก๐—”๐—ง๐—ข ๐—–๐—”๐—ฃ๐—œ๐—ง๐—˜๐—Ÿ๐—Ÿ๐—”

Modern GenAI applications are no longer simple chatbots โ€” they involve complex chains of LLM calls, tools, and autonomous workflows. In this AI Security Village session, Donato Capitella explores why prompt-based guardrails alone are not enough and how security controls must be designed around the entire application workflow.

The talk focuses on practical strategies for designing and testing guardrails across multi-step LLM systems, including how data flows between chains, how permissions are enforced, and how applications can detect and respond to prompt attacks. Attendees will also see how these concepts can be tested in practice using spikee, an open-source tool built for testing LLM applications against prompt-based attacks.

Donato Capitella is a Principal Security Consultant at Reversec with extensive experience in offensive security and AI application testing. He is also the lead developer of the open-source project spikee.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #LLMSecurity #PromptInjection #CyberSecurity #OWASP #OpenSource #AppSec

โšก Fresh Talk Alert for BSides Luxembourg 2026!

โ€œ๐—ฆ๐—˜๐—–๐—จ๐—ฅ๐—œ๐—ง๐—ฌ ๐—™๐—ข๐—ฅ ๐—”๐—œ: ๐—”๐—œ๐——๐—ฅ ๐—•๐—”๐—ฆ๐—ง๐—œ๐—ข๐—ก ๐—”๐—ฆ ๐—ข๐—ฃ๐—˜๐—ก ๐—ฆ๐—ข๐—จ๐—ฅ๐—–๐—˜ ๐—Ÿ๐—Ÿ๐—  ๐—™๐—œ๐—ฅ๐—˜๐—ช๐—”๐—Ÿ๐—Ÿ / ๐—”๐—œ ๐—ฃ๐—ฅ๐—ข๐— ๐—ฃ๐—ง๐—ฆ ๐—ฅ๐—˜๐—ฉ๐—˜๐—ฅ๐—ฆ๐—˜ ๐—ฃ๐—ฅ๐—ข๐—ซ๐—ฌโ€ โ€“ Andrii Bezverkhyi

As AI adoption accelerates, so do the risks โ€” from prompt injections to malicious AI agents and adversarial abuse. This AI Security Village session explores AIDR Bastion, an open-source GenAI protection system designed to secure AI workloads through layered detection and prompt filtering.

The talk covers how AIDR Bastion acts as an LLM firewall and reverse proxy for AI prompts, using Sigma and Roota rules to detect malicious behavior, harmful content, prompt injection attacks, and AI-assisted malware generation. Attendees will also see how the system integrates with MITRE ATLAS, OWASP LLM Top 10 guidance, and existing detection engineering workflows.

Andrii Bezverkhyi is the founder of SOC Prime and a long-time contributor to the threat detection and cybersecurity community, known for projects such as Uncoder and DetectFlow.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #LLMSecurity #PromptInjection #OWASP #CyberSecurity #DetectionEngineering #OpenSource

โšก Fresh Village Alert for BSides Luxembourg 2026!

๐—”๐—œ ๐—ฆ๐—˜๐—–๐—จ๐—ฅ๐—œ๐—ง๐—ฌ ๐—ฉ๐—œ๐—Ÿ๐—Ÿ๐—”๐—š๐—˜ โ€“ ๐—ข๐—ฃ๐—˜๐—ก ๐—ฉ๐—œ๐—Ÿ๐—Ÿ๐—”๐—š๐—˜ / ๐—ค&๐—”
๐Ÿง  Interactive AI Security Playground โ€ข Live Demos โ€ข Hands-on Attacks โ€ข Real-Time Defense

Step into a live, open-floor AI Security Village dedicated to exploring the real-world security risks of Agentic AI, MCP architectures, LLM workflows, and autonomous systems. Unlike a traditional workshop or talk, this village is designed as a continuously running interactive environment where attendees can freely drop in, attack systems, observe defenses, and shape the direction of the sessions in real time.

Across two days, participants will interact with intentionally vulnerable AI systems, RAG pipelines, MCP servers, and autonomous agents while exploring attack paths such as prompt injection, goal hijacking, instruction manipulation, tool abuse, and trust boundary failures โ€” all aligned with the OWASP LLM Top 10 and AI Security Exchange guidance.

The village includes:
๐Ÿ”น Live exploitation of LLM and Agentic AI systems
๐Ÿ”น Interactive walkthroughs from organizers
๐Ÿ”น Real-time defensive patching and mitigation demos
๐Ÿ”น Hands-on labs with Dreadnode Crucible, Lakera Gandalf, and Agent Breaker
๐Ÿ”น Beginner-to-advanced learning paths running in parallel
๐Ÿ”น Community-driven Q&A and collaborative defense discussions

Parth Shukla is a Senior Security Researcher specializing in AI Security and Adversarial Machine Learning, focusing on the security architecture of Agentic Systems and LLMs. Joining him is Nagarjun Rallapalli, who focuses on automating security and building โ€” and breaking โ€” AI agents to test their limits.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #LLMSecurity #AgenticAI #OWASP #RedTeam #CyberSecurity #PromptInjection #MCP #AIVillage

โšก Fresh Talk Alert for BSides Luxembourg 2026!

๐—•๐—จ๐—œ๐—Ÿ๐——๐—œ๐—ก๐—š ๐—ง๐—›๐—˜ ๐—จ๐—Ÿ๐—ง๐—œ๐— ๐—”๐—ง๐—˜ ๐—”๐—œ ๐—™๐—œ๐—ฅ๐—˜๐—ช๐—”๐—Ÿ๐—Ÿ: ๐—œ๐—ก๐—ฆ๐—œ๐——๐—˜ ๐—ฆ๐—ข๐—ฉ๐—˜๐—ฅ๐—˜๐—œ๐—š๐—ก๐—ฆ๐—›๐—œ๐—˜๐—Ÿ๐——, ๐—œ๐—ก๐—ง๐—˜๐—ก๐—ง๐—ฆ๐—›๐—œ๐—˜๐—Ÿ๐——, ๐—”๐—ก๐—— ๐—Ÿ๐—ข๐—š๐—œ๐—–๐—ฆ๐—›๐—œ๐—˜๐—Ÿ๐—— โ€“ Mattijs Moens

As AI agents evolve into autonomous systems capable of executing code and interacting with APIs, traditional security controls are struggling to keep up. This AI Security Village session dives into the architecture behind the SovereignShield ecosystem โ€” a multi-layered framework built to secure modern AI applications against prompt injection, malicious actions, and data exfiltration.

The talk explores how LogicShield enforces semantic boundaries to stop jailbreaks and prompt attacks, how IntentShield audits outbound AI actions before execution, and how the unified SovereignShield Firewall combines both layers into a deterministic defense model for production AI systems.
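To make the "audit outbound actions before execution" idea concrete, here is a minimal, hypothetical sketch of that pattern in Python. This is not SovereignShield or IntentShield code; the tool names, hosts, and policy are invented for illustration. The point is that every action an agent proposes passes a deterministic check in ordinary code before it runs:

```python
# Hypothetical sketch (not SovereignShield/IntentShield code): audit every
# outbound action an AI agent proposes before it is allowed to execute.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str    # e.g. "http_get", "shell", "send_email" (illustrative names)
    target: str  # URL, command, or recipient the agent supplied

# Deterministic policy: only these tool/host combinations may execute.
ALLOWED_TOOLS = {"http_get"}
ALLOWED_HOSTS = {"api.internal.example"}

def audit(action: ProposedAction) -> bool:
    """Return True only if the proposed action satisfies the outbound policy."""
    if action.tool not in ALLOWED_TOOLS:
        return False
    # Extract the host from a URL-shaped target; otherwise treat it as a host.
    host = action.target.split("/")[2] if "://" in action.target else action.target
    return host in ALLOWED_HOSTS

print(audit(ProposedAction("http_get", "https://api.internal.example/v1/items")))  # True
print(audit(ProposedAction("shell", "curl attacker.example")))                     # False
```

Because the policy lives outside the model's context window, no injected instruction can rewrite it; the worst an attacker-controlled prompt can do is propose actions that the audit rejects.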

Mattijs Moens is an AI security researcher and founder of SovereignShield, focused on building semantic firewalls for AI systems. He also contributes to the OWASP AI Security Verification Standard (AISVS).

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #LLMSecurity #PromptInjection #OWASP #CyberSecurity #AIAgents

Releasing AgentGuard: architectural safety layer for AI agents.

Not prompt engineering. Code.

@protect
def delete_db(): ...

The LLM cannot call this. Ever. No prompt bypasses a raise.
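A minimal sketch of that idea, assuming a decorator-based design (this is not AgentGuard's actual API; `BlockedActionError` and the wrapper are invented here for illustration):

```python
# Illustrative sketch only, not AgentGuard's real implementation: a decorator
# that raises before a protected tool executes, so no prompt content can
# talk an agent past it.

import functools

class BlockedActionError(RuntimeError):
    """Raised whenever an LLM-driven caller reaches a protected function."""

def protect(func):
    """Deny-by-default wrapper: the wrapped tool never runs for agent callers."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # The check lives in code, outside the model's context window,
        # so no injected instruction can alter or bypass it.
        raise BlockedActionError(f"{func.__name__} is blocked for AI agents")
    return wrapper

@protect
def delete_db():
    print("dropping all tables")  # unreachable via an agent tool call

try:
    delete_db()
except BlockedActionError as e:
    print(f"blocked: {e}")
```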

Blocks: irreversible tool calls, prompt injection, context dilution, cross-agent contamination.

Rust core + pure Python fallback. 31/31 e2e tests with real Ollama.

https://github.com/psychomad/AgentGuard

"Don't blame the knife. Fix the architecture."

#InfoSec #LLMSecurity #AIAgents #PromptInjection #OpenSource #Rust

๐Ÿ”ด NEW: LLM Data Leaks: How AI Models Expose Your Secrets

LLMs are leaking secrets right now. Learn how training data extraction, prompt injection, and plugin flaws expose your data - and exactly how to stop it. Real CVEs, real incidents.

https://www.youtube.com/watch?v=oarkusORrQ4

#cybersecurity #LLMsecurity #AIdataleak #promptinjection #ChatGPTrisks #LLMdataleak #promptinjectionattack #AIsecurityrisks

๐Ÿง  Another Deep Dive into AI Security at BSides Luxembourg

๐—ง๐—›๐—˜ ๐—–๐—›๐—”๐—Ÿ๐—Ÿ๐—˜๐—ก๐—š๐—˜๐—ฆ ๐—ข๐—™ ๐—”๐—œ-๐—”๐—ฆ-๐—”-๐—ฆ๐—˜๐—ฅ๐—ฉ๐—œ๐—–๐—˜ ๐—Ÿ๐—ข๐—š๐—š๐—œ๐—ก๐—š โ€“ Jeremy Snyder

Dive into a critical 40-minute session uncovering one of the biggest blind spots in modern AI adoption. As organizations rapidly embrace AI-as-a-Service, most usage remains unmanagedโ€”creating โ€œShadow AIโ€ environments where traditional logging and security controls fall short.

This talk breaks down why existing logging approaches fail for LLM-driven systems, highlighting the disconnect between client-side and server-side visibility. Learn how to rethink logging strategies for AI, close detection gaps, and build centralized visibility that actually supports effective security monitoring and response in AI-driven environments.

Jeremy Snyder is the founder and CEO of FireTail, an AI security platform, with a background spanning cloud security, M&A at Rapid7, and over a decade in cyber and IT operations. His work focuses on securing modern API and AI ecosystems at scale.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #CloudSecurity #LLMSecurity #CyberSecurity #ThreatDetection

โšก Fresh Talk Alert for BSides Luxembourg 2026!

๐—ง๐—›๐—˜ ๐—”๐—š๐—˜๐—ก๐—ง ๐—›๐—”๐—— ๐—” ๐—ฃ๐—Ÿ๐—”๐—กโ€”๐—ฆ๐—ข ๐——๐—œ๐—— ๐—œ: ๐—ง๐—ข๐—ฃ ๐—”๐—ง๐—ง๐—”๐—–๐—ž๐—ฆ ๐—ข๐—ก ๐—ข๐—ช๐—”๐—ฆ๐—ฃ ๐—”๐—š๐—˜๐—ก๐—ง๐—œ๐—– ๐—”๐—œ ๐—ฆ๐—ฌ๐—ฆ๐—ง๐—˜๐— ๐—ฆ โ€“ Parth Shukla, Nagarjun Rallapalli

Dive into the evolving threat landscape of agentic AI in this hands-on 40-minute talk from the AI Security Village. Unlike traditional LLMs, AI agents operate across multiple steps, tools, and goalsโ€”introducing entirely new attack surfaces that defenders are only beginning to understand.

Through practical demos, this session exposes real vulnerabilities including goal hijacking, alignment faking, orchestration abuse, and covert data exfiltration. Learn how attackers manipulate agent behavior over time and how these risks impact modern AI systems, along with key takeaways to better secure agentic architectures.

Parth Shukla is a Senior Security Researcher specializing in AI Security and Adversarial Machine Learning, with a strong offensive security background. His work focuses on securing agentic systems and LLM architectures, bridging the gap between traditional AppSec and emerging AI risks.

Nagarjun Rallapalli focuses on automating security and on building and breaking AI agents to test their limits.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #AgenticAI #LLMSecurity #CyberSecurity #RedTeam

โš™๏ธ Technical Spotlight: New Session at BSides Luxembourg 2026

๐—›๐—ข๐—ช ๐—ฆ๐—˜๐—–๐—จ๐—ฅ๐—˜ ๐—œ๐—ฆ ๐—ฆ๐—˜๐—–๐—จ๐—ฅ๐—˜ ๐—–๐—ข๐——๐—˜ ๐—š๐—˜๐—ก๐—˜๐—ฅ๐—”๐—ง๐—œ๐—ข๐—ก? ๐—ฃ๐—จ๐—ง๐—ง๐—œ๐—ก๐—š ๐—ง๐—›๐—˜ ๐—Ÿ๐—Ÿ๐— ๐—ฆ ๐—ง๐—ข ๐—ง๐—›๐—˜ ๐—ง๐—˜๐—ฆ๐—ง โ€“ Melissa TESSA

A sharp 5-minute lightning talk challenging the assumptions behind AI-assisted coding. As developers increasingly rely on LLMs, this session exposes how โ€œsecure-by-designโ€ claims often break under realistic conditions.

Through adversarial testing and real research insights, discover how LLMs can introduce hidden risksโ€”from fragile evaluation methods to slopsquatting attacks via hallucinated package names. A must-see for anyone building or securing modern software with AI in the loop.

Melissa TESSA is a doctoral researcher at the University of Luxembourgโ€™s SnT, working at the intersection of AI, software engineering, and cybersecurity. Her research focuses on enabling large language models to generate secure codeโ€”and uncovering where they fail.

๐Ÿ“… Conference Dates: 6โ€“8 May 2026 | 09:00โ€“18:00
๐Ÿ“ 14, Porte de France, Esch-sur-Alzette, Luxembourg
๐ŸŽŸ๏ธ Tickets: https://2026.bsides.lu/tickets/
๐Ÿ“… Schedule: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #LLMSecurity #SecureCoding #CyberSecurity #AI