90 Followers
20 Following
120 Posts
Hacking neural networks so that we don't get stuck in the matrix.
Entrepreneur. Author. Red Team Director.
Bloghttps://embracethered.com

πŸ”₯ New blog post: AI ClickFix!

Explores how classic ClickFix social engineering attacks can target AI agents, like Claude Computer-Use.

Learn what ClickFix is, how it works in detail, and see a working proof-of-concept. Scary stuff. πŸ‘‡

https://embracethered.com/blog/posts/2025/ai-clickfix-ttp-claude/

AI ClickFix: Hijacking Computer-Use Agents Using ClickFix Β· Embrace The Red

Embrace The Red

πŸ”₯ SpAIware & More: Advanced Prompt Injection Exploits in LLM Applications πŸ”₯

πŸ‘‰ Black Hat posted my talk to YouTube - Enjoy!🍿😈

A wild journey of exploits, peaking in compromising ChatGPT's long term memory for continuous remote command and control! 😱

https://www.youtube.com/embed/84NVG1c5LRI

SpAIware & More: Advanced Prompt Injection Exploits in LLM Applications

YouTube

Some LLM vendors fixed this at the API level, but not all.

This leaves the responsibility to know about this attack vector and mitigate it with developers & testers.

AI Application Security is a thing!

Trust No AI

So, we humans don't see these Unicode Tags, but many LLMs do.

And LLMs not only see them, they follow the hidden instructions! ⚠️

Here the actual post that Ask Perplexity made.

It's a common vulnerability in AI applications & agents.

Many "summarize this email" or "summarize this document", "do sentiment analysis" features are vulnerable to this

What happened there? 🧐

πŸ‘‰ The original post with the question contains hidden Unicode Tag code points.

Unicode Tags mirror ASCII, but are invisible in UI elements. πŸ‘€

AI Application Security Vulnerabilities πŸ‘¨β€πŸ’»

Learn the hacks, stop the attacks!

Perplexity Demo Time! 🍿

Check out my latest blog post called Terminal DiLLMa πŸ”₯

Learn the dangers of printing LLM output to the terminal console or log files!

Includes some neat demos and also how to fix your LLM powered CLI apps!

#pentest #bugbounty #ai #ml #redteam

https://embracethered.com/blog/posts/2024/terminal-dillmas-prompt-injection-ansi-sequences/

Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection Β· Embrace The Red

Embrace The Red
No issues for me πŸ™‚
DeepSeek AI: From Prompt Injection To Account Takeover Β· Embrace The Red

Embrace The Red

DeepSeek AI: From Prompt Injection to Account Takeover πŸ”₯

Found this fun big, disclosed it and it's now fixed.

https://m.youtube.com/watch?v=a4OUk1KG-w8&feature=youtu.be

DeepSeek AI Chat: From Prompt Injection To Account Takeover (responsibly disclosed and now fixed)

YouTube