“Claudy Day” exploit chains prompt injection, open redirects, and API abuse to exfiltrate data from Claude.ai.

AI prompts are now an attack surface.

https://www.technadu.com/claude-ai-the-claudy-day-vulnerability-chains-prompt-injection-open-redirects-and-data-exfiltration/623668/

#Cybersecurity #AIsecurity #PromptInjection

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

https://mistaike.ai/blog/readme-poisoning-ai-agents

#Security #Mcp #Aiagents #Promptinjection

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

mistaike.ai

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

https://mistaike.ai/blog/readme-poisoning-ai-agents

#Security #Mcp #Aiagents #Promptinjection

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

mistaike.ai
AI inside DEVONthink can be a powerful tool. But when it comes to AI, some users have security concerns, such as possible prompt injections. So, what exactly are they, and are they a risk in DEVONthink? #devonthink #devonthinktogo #ai #artificialintelligence #security #promptinjection https://www.devontechnologies.com/blog/20260317-devonthink-ai-security

I was testing our new AI security filters with Gemini, and the agent decided to independently try and SQL inject my local database just to see if the filter worked. 😅

#PromptInjection #AIAgents #MCP #InfoSec #AISafety #AIAgent #CyberSecurity #AppSec #LLMSecurity #Claude #Anthropic #GoogleGemini #GeminiAI

I was testing our new AI security filters with Gemini, and the agent decided to independently try and SQL inject my local database just to see if the filter worked. 😅

#PromptInjection #AIAgents #MCP #InfoSec #AISafety #AIAgent #CyberSecurity #AppSec #LLMSecurity #Claude #Anthropic #GoogleGemini or #GeminiAI

It's been a busy 24 hours in the cyber world with significant updates on supply chain attacks affecting developers and marketing SDKs, alongside new warnings about AI agent vulnerabilities. Let's dive in:

AppsFlyer SDK Spreads Crypto Stealer ⚠️

- The AppsFlyer Web SDK was compromised, delivering malicious JavaScript that hijacked cryptocurrency transactions by replacing legitimate wallet addresses with attacker-controlled ones.
- AppsFlyer confirmed a domain registrar incident on March 10, 2026, which temporarily exposed a segment of customer websites to unauthorised code, though their mobile SDK was unaffected.
- Organisations using the SDK should review telemetry for suspicious API requests, consider downgrading to known-good versions, and investigate potential compromises, as the full scope is still under investigation.

🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/appsflyer-web-sdk-used-to-spread-crypto-stealer-javascript-code/

GlassWorm Escalates Supply Chain Attacks 🛡️

- The GlassWorm campaign has significantly escalated, now abusing extensionPack and extensionDependencies in Open VSX extensions to turn benign-appearing packages into transitive delivery vehicles for malware.
- Researchers discovered at least 72 new malicious Open VSX extensions targeting developers, mimicking popular utilities and AI coding assistants, often using invisible Unicode characters to hide payloads.
- The campaign retains hallmarks like avoiding Russian locales and using Solana transactions for C2 resilience, but now features heavier obfuscation, rotating Solana wallets, and potentially uses LLMs to generate convincing cover commits for malicious injections in GitHub and npm.

📰 The Hacker News | https://thehackernews.com/2026/03/glassworm-supply-chain-attack-abuses-72.html

OpenClaw AI Agent Flaws Pose Major Risks 🔒

- China's CNCERT has warned about significant security flaws in the OpenClaw open-source AI agent, stemming from weak default configurations and its privileged system access.
- Risks include prompt injection attacks (indirect and cross-domain), where malicious instructions can trick the agent into leaking sensitive data, even via messaging app link previews without user clicks.
- Other concerns involve inadvertent data deletion, malicious skills from repositories like ClawHub, and exploitation of recently disclosed vulnerabilities, leading to potential data exfiltration or system paralysis.

📰 The Hacker News | https://thehackernews.com/2026/03/openclaw-ai-agent-flaws-could-enable-prompt-injection-and-data-exfiltration/

#CyberSecurity #SupplyChainAttack #Malware #CryptoStealer #AI #PromptInjection #Vulnerabilities #InfoSec #ThreatIntelligence #DeveloperSecurity #WebSecurity

AppsFlyer Web SDK hijacked to spread crypto-stealing JavaScript code

The AppsFlyer Web SDK was temporarily hijacked this week with malicious code used to steal cryptocurrency in a supply-chain attack.

BleepingComputer
Prompt injection is still OWASP’s top LLM risk in 2026, but most teams treat it like “just jailbreaking.” I cover direct + indirect attacks, multi-agent infections, and pragmatic defenses for real-world systems.
https://techglimmer.io/prompt-injection-explained-2026/
#AI #PromptInjection #InfoSec #OWASP
Prompt Injection Explained: The AI Security Problem Most People Don’t See

Prompt injection explained simply with examples. Learn how attackers manipulate AI instructions, where it happens, and how to protect yourself.

techglimmer.io

ContextHound v1.8.0 is out 🎉

This release adds a Runtime Guard API - a lightweight wrapper that inspects your LLM calls in-process, before the request hits OpenAI or Anthropic.

Free and open-source. If this is useful to you or your team, a GitHub star or a small donation helps keep development going.
github.com/IulianVOStrut/ContextHound

#LLMSecurity #PromptInjection #CyberSecurity #OpenSource #AIRisk #AppSec #DevSecOps #GenAI #RuntimeSecurity #InfoSec #MLSecurity #ArtificialIntelligence