
A one-prompt attack that breaks LLM safety alignment
As LLMs and diffusion models power more applications, their safety alignment becomes critical. Our research shows that even minimal downstream fine-tuning can weaken built-in safeguards, raising a key question: how reliably does alignment hold as models evolve?