I shared a post on the OpenAI forum about using Pomerium to secure remote MCP servers with Zero Trust. It’s all open source. We show how to expose MCP servers (like GitHub or Notion) to LLM clients without leaking tokens or re-implementing OAuth.

Happy to chat more over a coffee chat or podcast!

https://community.openai.com/t/zero-trust-architecture-for-mcp-servers-using-pomerium/1288157 #ztna #security #mcp #agenticai

Zero Trust Architecture for MCP Servers Using Pomerium

We’ve been building some open-source tools to make it easier to run remote MCP servers securely using Zero Trust principles — especially when those servers need to access upstream OAuth-based services like GitHub or Notion. Pomerium acts as an identity-aware proxy to:
- Terminate TLS and enforce Zero Trust at the edge
- Handle the full OAuth 2.1 flow for your MCP server
- Keep upstream tokens (e.g., GitHub, Notion) out of reach from clients
Our demo app (MCP App Demo) uses the OpenAI Responses API...
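A route of the kind described — an identity-aware proxy terminating TLS at the edge and gating an internal MCP server behind an identity check — might look roughly like this in Pomerium-style YAML. The hostnames, ports, and policy below are illustrative assumptions, not taken from the demo app:

```yaml
# Hypothetical sketch of a Pomerium route in front of an MCP server.
# All names here (mcp.example.com, mcp-server:8080, example.com) are
# placeholders; consult the linked post for the actual demo configuration.
routes:
  - from: https://mcp.example.com   # public edge; Pomerium terminates TLS here
    to: http://mcp-server:8080      # internal MCP server, never exposed directly
    policy:
      - allow:
          or:
            - domain:
                is: example.com     # only authenticated users from this domain
```

The key property is that the LLM client only ever authenticates to the proxy; upstream tokens for services like GitHub or Notion stay on the server side of the route.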

OpenAI Developer Community

"As AI agents powered by Large Language Models (LLMs) become increasingly versatile and capable of addressing a broad spectrum of tasks, ensuring their security has become a critical challenge. Among the most pressing threats are prompt injection attacks, which exploit the agent’s reliance on natural language inputs — an especially dangerous threat when agents are granted tool access or handle sensitive information. In this work, we propose a set of principled design patterns for building AI agents with provable resistance to prompt injection. We systematically analyze these patterns, discuss their trade-offs in terms of utility and security, and illustrate their real-world applicability through a series of case studies."

https://arxiv.org/html/2506.08837v2

#AI #GenerativeAI #LLMs #PromptInjection #AIAgents #AgenticAI #CyberSecurity

Design Patterns for Securing LLM Agents against Prompt Injections

AI Agent Complete My One Month Coding Assignment In A Few Minutes

More than 20 years ago, I recall doing a coding assignment for my Natural Language Processing paper at uni. I picked this task: writing an Eliza program in C++, together with the Flex and Bison parser tools…

Tech AI Chat

Agentic AI marks a turning point.
It’s not just smart — it acts independently.
Are we evolving, or being engineered out?

#PostHumanShift #AIandSociety #AgenticAI #DigitalDisplacement #TheInternetIsCrack #FediTech

🔥 AI used to talk — now it acts.

Ever wonder why LLMs couldn't "do stuff" before?

Here’s the DevOps-level hack powering real-world automation: agents.

🤖💡 Cloud-native meets AI ops 👇

Watch now: https://youtube.com/shorts/zlbcp2cjASY

#AI #DevOps #LLM #CloudNative #AgenticAI #MLOps


"AI agents have already demonstrated that they may misinterpret goals and cause some modest amount of harm. When the Washington Post tech columnist Geoffrey Fowler asked Operator, OpenAI’s ­computer-using agent, to find the cheapest eggs available for delivery, he expected the agent to browse the internet and come back with some recommendations. Instead, Fowler received a notification about a $31 charge from Instacart, and shortly after, a shopping bag containing a single carton of eggs appeared on his doorstep. The eggs were far from the cheapest available, especially with the priority delivery fee that Operator added. Worse, Fowler never consented to the purchase, even though OpenAI had designed the agent to check in with its user before taking any irreversible actions.

That’s no catastrophe. But there’s some evidence that LLM-based agents could defy human expectations in dangerous ways. In the past few months, researchers have demonstrated that LLMs will cheat at chess, pretend to adopt new behavioral rules to avoid being retrained, and even attempt to copy themselves to different servers if they are given access to messages that say they will soon be replaced. Of course, chatbot LLMs can’t copy themselves to new servers. But someday an agent might be able to.

Bengio is so concerned about this class of risk that he has reoriented his entire research program toward building computational “guardrails” to ensure that LLM agents behave safely."

https://www.technologyreview.com/2025/06/12/1118189/ai-agents-manus-control-autonomy-operator-openai/

#AI #GenerativeAI #AIAgents #AgenticAI #CyberSecurity #LLMs #Chatbots

Are we ready to hand AI agents the keys?

We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next.

MIT Technology Review

𝑴𝑨𝑪𝑯 (https://lnkd.in/gpbBbEJs) gave us the blueprint for composable digital experiences.

But as #AI evolves, so must the architecture. I’m proposing a new paradigm: 𝑴𝑨𝑷𝑺, built for the era of #AgenticAI:
https://www.linkedin.com/posts/shishs_how-a-tech-start-up-tackles-legacy-systems-activity-7339048155579150338-wN-J?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAA-mBkBD-GU_2lDGUH3NzxTnUUduSl5dLM

The only #agenticAI I want is the #AI #agent that files a lawsuit on my behalf against the companies operating the AI.

Bonus if it also barfs out the evidence against them.