Debates around the effectiveness of phishing simulations are widespread. Supporters claim they can boost learning retention rates, help train users’ instincts, reduce risk, and contribute to developing a ‘security-first’ culture.
Detractors point to tick-box compliance, fatigue, unfair and unethical lures, punishing users who ‘fail’ phishing tests (e.g., extra-dull mandatory training, naming and shaming, disciplinary measures), and focusing on failure rather than success.
In fact, two recent studies (from 2021 and 2025) suggest that phishing training makes no significant difference to susceptibility, and could, counter-intuitively, make users *more* susceptible (although there are some important caveats to this).
But phishing remains one of the most common entry mechanisms for attackers. It’s cheap and easy, generative AI may make it even easier, and threat actors know it works. So is there a way to make phishing exercises effective?