Discovered a new hateful thing about #phishingsimulation today: having to whitelist hundreds of 'malicious' domains across multiple layers of URL filtering, so that when a user clicks a link they get the 'you did bad' message and have their infraction reported.

Some of my org's web filtering layers don't support whitelisting at all, e.g. the UK National Cyber Security Centre's Protective DNS service. What do we do about that? We just accept that some clicks will be blocked and never reported, which further skews the already highly dubious click-rate metrics. Which makes me wonder: why are we gathering them in the first place?
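For a sense of how badly this skews things, here's a rough back-of-envelope sketch. All the numbers and names are invented for illustration; the point is just that any click swallowed by a filtering layer before the simulation platform logs it drags the observed click rate below the true one.

```python
# Sketch of how blocked-but-unreported clicks bias click-rate metrics.
# All figures are made up for illustration.

def observed_click_rate(users: int, true_clicks: int, blocked_fraction: float) -> float:
    """Click rate the simulation platform reports when some fraction of
    clicks dies at a filtering layer (e.g. PDNS) before being logged."""
    reported_clicks = true_clicks * (1 - blocked_fraction)
    return reported_clicks / users

users = 1000
true_clicks = 200  # 20% of users actually clicked
for blocked in (0.0, 0.25, 0.5):
    rate = observed_click_rate(users, true_clicks, blocked)
    print(f"blocked={blocked:.0%} -> observed click rate {rate:.1%} (true 20.0%)")
```

With half the clicks blocked, a true 20% click rate shows up as 10%, and you have no way of telling that apart from users genuinely clicking less.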