Following up on my previous post: I think testing AI is absolutely valid security testing. But in my mind, red teaming is about testing defenders and detections and improving response.
When targeting AI, that isn't what's happening. It's not red teaming.
But I’m open to different opinions.
So, question of the day from me: can you "red team" AI?
I’ve seen groups and individuals state that they red team AI. Is it really more prompt injection? Are you actively helping defenders build and scale their defenses, or testing their response? Is this more of a pen test vs. a real red team?
There are other ways to set up your system for telemetry if you are looking to see what can avoid detection.
But if you want to test your latest hotness against prevention of code execution, definitely test it against WDAC.
Find something that gets around it? Now that’s useful.
WDAC will block everything you don’t trust, to the point that you could theoretically end up boot-looping your Windows box if you try to load untrusted drivers, or drivers you need but never actually allowed.
Ask me how I know….
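If you want to kick the tires on WDAC without risking that boot loop, a reasonable starting point is an audit-mode policy: it logs what would have been blocked instead of blocking it. A minimal sketch using the built-in ConfigCI PowerShell cmdlets (paths and the publisher rule level are my choices, not gospel):

```shell
# Scan the system and build a base policy at the Publisher level,
# including user-mode binaries (-UserPEs), not just drivers.
New-CIPolicy -Level Publisher -UserPEs -FilePath .\BasePolicy.xml

# Option 3 = Enabled:Audit Mode — violations get logged to the
# CodeIntegrity event log rather than enforced, so a bad rule
# won't brick the box.
Set-RuleOption -FilePath .\BasePolicy.xml -Option 3

# Compile the XML policy into the binary form Windows consumes.
ConvertFrom-CIPolicy -XmlFilePath .\BasePolicy.xml -BinaryFilePath .\BasePolicy.bin
```

Once you’ve reviewed the audit events and are happy with coverage, removing option 3 (`Set-RuleOption -FilePath .\BasePolicy.xml -Option 3 -Delete`) flips the same policy to enforcement.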
Since a lot of talk I’m seeing lately is about good defenses, especially for initial access, I’ve been preaching the good news about WDAC (formerly my fav name of Device Guard) for a while.
I think a properly set up WDAC is the bar against which to test initial access - https://youtu.be/sWjhuVsSEks?si=eEgneEHooKwASTqZ

Loving the start of my day with an email from @ISC2 saying they are auditing a submission that I uploaded (with a screenshot), for a total of 1 credit hour.
¯\_(ツ)_/¯
Enjoy