The Register: Researchers find hole in AI guardrails by using strings like =coffee. “Large language models frequently ship with ‘guardrails’ designed to catch malicious input and harmful output. But if you use the right word or phrase in your prompt, you can defeat these restrictions.”

https://rbfirehose.com/2025/11/17/the-register-researchers-find-hole-in-ai-guardrails-by-using-strings-like-coffee/