#Microslop keeps failing to build confidence in #Copilot. Another #EchoLeak has been found. odysee.com/@SomeOrdinar...

MICROSLOP COPILOT GOT HACKED

Odysee
Data exfiltration from Copilot via Mermaid diagrams: a new form of indirect prompt injection attack - Qiita

Introduction: I'm @___nix___, and lately my articles on vulnerabilities have been drawing a lot of attention. Background: Microsoft 365 Copilot has spread widely as an AI with access to business data, but its tendency to autonomously analyze content creates a risk of abuse. The earlier "...

Qiita
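The Qiita post describes data leaking through Mermaid diagrams that Copilot renders, with attacker-controlled links carrying data out. As an illustrative defense sketch (the function name, regexes, and scanning pipeline are my assumptions, not anything from the post), external links inside Mermaid fences could be flagged before a diagram is rendered:

```python
import re

# Illustrative sketch: flag external links hiding inside Mermaid code
# fences, the channel the Qiita post says was abused for exfiltration.
MERMAID_BLOCK = re.compile(r"`{3}mermaid\n(.*?)`{3}", re.DOTALL)
# Mermaid "click" bindings and raw URLs are plausible exfil carriers.
SUSPICIOUS = re.compile(r'click\s+\w+\s+"?https?://|https?://[^\s"\')]+')

def flag_mermaid_links(markdown_text: str) -> list[str]:
    """Return suspicious lines found inside Mermaid fences."""
    hits = []
    for block in MERMAID_BLOCK.findall(markdown_text):
        for line in block.splitlines():
            if SUSPICIOUS.search(line):
                hits.append(line.strip())
    return hits
```

A scanner like this is only a coarse filter; the safer posture is not rendering attacker-influenced diagrams at all.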

Hidden commands in emails can trick Microsoft Copilot into stealing your company's secrets—no clicks required. The EchoLeak flaw was patched, but it's a wake-up call about AI security. #Microsoft #CyberSecurity #AI #EchoLeak

https://pupuweb.com/trust-microsoft-copilot-data-0-click-echoleak-flaw/

Can You Truly Trust Microsoft Copilot with Your Data? An Essential Breakdown of 0-click EchoLeak Flaw - PUPUWEB

Many businesses now use artificial intelligence tools like Microsoft Copilot to help with daily tasks. These tools are built into familiar programs like

PUPUWEB

⚠️ How secure are the #AI systems used by your company? #EchoLeak has really been putting a spotlight on this issue—and the reality is that AI tools pose many risks to businesses. 😨

"AI tools are increasingly being embedded deep into business infrastructure — often alongside 'vague policies' or 'limited visibility into how they process and store data'", says Robert Rea, chief technical officer at #Graylog.

Kate O'Flaherty spoke about this challenge with several industry experts, in addition to Graylog's Robert Rea, including:
👉 Lillian Tsang at Harper James
👉 Emilio Pinna at SecureFlag
👉 Joseph Thompson at Birketts LLP
👉 Sam Peters at ISMS.online

Learn about what it means to be risk aware when it comes to AI tools, and how to strengthen the AI governance strategies at your org. 👇

https://www.isms.online/cyber-security/echoleak-are-firms-complacent-about-the-risks-posed-by-ai/ #cybersecurity #artificialintelligence #AIsecurity #AItools #agenticAI

No Click. No Warning. Just a Data Leak.

Think your AI assistant is secure? Think again. The new EchoLeak exploit shows how Microsoft 365 Copilot, and tools like it, can silently expose your sensitive data without a single user interaction. No clicks. No downloads. Just a well-crafted email.

In this eye-opening blog, we break down how EchoLeak works, why prompt injection is a growing AI threat, and the 5 actions you need to take right now to protect your organization.

Read now: https://www.lmgsecurity.com/no-click-nightmare-how-echoleak-redefines-ai-data-security-threats/

#AIDataSecurity #Cyberaware #Cyber #SMB #Copilot #AI #GenAI #EchoLeak #PromptInjection #MicrosoftCopilot #Cybersecurity #CISO #ITsecurity #InfoSec #AISecurityRisks

No-Click Nightmare: How EchoLeak Redefines AI Data Security Threats | LMG Security

Is your AI assistant leaking data? New EchoLeak attack exploits Copilot with zero clicks. We share the details and tips to boost your AI data security.

LMG Security

The first "zero-click" vulnerability in Microsoft 365 Copilot. Data leaked without any user interaction.

Researchers from Aim Security discovered the first "zero-click" vulnerability in an artificial intelligence (AI) agent. Microsoft 365 Copilot could be forced to leak an organization's confidential data without any interaction from the user, i.e. the victim. A single, suitably crafted email was enough. The vulnerability was assigned CVE-2025-32711 with a CVSS score of 9.3 (critical) and was dubbed by...

#WBiegu #Ai #CPIA #Echoleak #Leak #Markdown #PromptInjection #Wyciek

https://sekurak.pl/pierwsza-podatnosc-typu-zero-click-w-microsoft-365-copilot-dane-wyciekaly-bez-ingerencji-uzytkownikow/

Sekurak
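The sekurak write-up tags #Markdown because EchoLeak reportedly moved data out through markdown image references that the client fetches automatically, with the secret smuggled in the URL. A minimal defensive sketch, assuming a post-processing step over AI output (the function name and host allowlist are hypothetical):

```python
import re

# Illustrative sketch: neutralize external markdown image links before
# rendering AI output, since auto-fetched images can carry exfiltrated
# data in their URLs -- the channel the EchoLeak write-ups describe.
IMAGE_LINK = re.compile(r'!\[([^\]]*)\]\((https?://[^)\s]+)\)')

def defang_external_images(markdown_text: str, allowed_hosts: set[str]) -> str:
    """Replace images hosted outside the allowlist with a plain-text stub."""
    def repl(m: re.Match) -> str:
        alt, url = m.group(1), m.group(2)
        host = url.split('/')[2].lower()  # scheme://HOST/...
        if host in allowed_hosts:
            return m.group(0)  # keep trusted images untouched
        return f"[blocked external image: {alt or 'no alt text'}]"
    return IMAGE_LINK.sub(repl, markdown_text)
```

Blocking the fetch breaks the exfiltration leg of the chain even if the prompt injection itself succeeds.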

Can Your AI Be Hacked by Email Alone?

No clicks. No downloads. Just one well-crafted email, and your Microsoft 365 Copilot could start leaking sensitive data.

In this week’s episode of Cyberside Chats, @sherridavidoff and @MDurrin discuss EchoLeak, a zero-click exploit that turns your AI into an unintentional insider threat. They also reveal a real-world case from LMG Security’s pen testing team where prompt injection let attackers extract hidden system prompts and override chatbot behavior in a live environment.

We’ll also share:

• How EchoLeak exposes a new class of AI vulnerabilities
• Prompt injection attacks that fooled real corporate systems
• Security strategies every organization should adopt now
• Why AI inputs need to be treated like code

🎧 Listen to the podcast: https://www.chatcyberside.com/e/unmasking-echoleak-the-hidden-ai-threat/?token=90468a6c6732e5e2477f8eaaba565624
🎥 Watch the video: https://youtu.be/sFP25yH0sf4

#EchoLeak #Cybersecurity #AIsecurity #Microsoft365 #Copilot #PromptInjection #CISO #InsiderThreats #GenAI #RiskManagement #CybersideChats

Here it is: the first vulnerability in Copilot that can be exploited to extract sensitive information from an organization simply by sending an email.

More information about the echoleak vulnerability (CVE-2025-32711) is available here:
https://www.aim.security/lp/aim-labs-echoleak-m365

#Sårbarhet #echoleak #Copilot #AI

Aim Labs | Echoleak M365

The first weaponizable zero-click attack chain on an AI agent, resulting in the complete compromise of Copilot data integrity

EchoLeak - the "can opener" for AI security realities!

It was only a matter of time, and now it is here: a zero-click attack on an AI system has become reality. The vulnerability, known as EchoLeak, needs just a single manipulated email (no click, no download, no warning) and Copilot silently exfiltrates sensitive corporate data. #CyberSecurity #AIsecurity #Copilot #Microsoft365 #EchoLeak #ZeroTrust #Cybercrime

LLM-based agents cannot be secured if you simultaneously 1) give them access to private data, 2) let them read untrusted content, and 3) allow them to communicate with the external world.

In his new blog post, Simon Willison gives a sharp, easy-to-understand summary of this "lethal trifecta for AI agents": https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

M365 #Copilot hacks like #EchoLeak are nasty. But as Simon points out, once you combine different AI tools and #MCP to build your own agents, securing agents gets even harder.
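The "lethal trifecta" above is a conjunction of three capabilities, which makes it easy to encode as a deployment check. A sketch with hypothetical capability flags (the names are mine, not from Simon's post):

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    # Hypothetical capability flags for an LLM agent deployment.
    reads_private_data: bool
    ingests_untrusted_content: bool
    can_communicate_externally: bool

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """True when all three risk conditions hold at once; removing any
    single leg breaks the data-exfiltration chain the posts describe."""
    return (caps.reads_private_data
            and caps.ingests_untrusted_content
            and caps.can_communicate_externally)
```

The useful corollary: you do not have to solve prompt injection to deprive an agent of the full trifecta; dropping any one capability is enough.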