Here’s the thing. Telling people not to use AI, that it’s bad and so on, is not going to prevent its “spread,” and it certainly won’t fix the problems we ( #infosec ) are seeing with it. So who is going to fix it? Well, we’re going to have to.

Major changes in our landscape - be they insecure new tech, supply-chain flaws, or new attack techniques - are and always have been our job to fix. We resisted wireless ages ago, then BYOD and rogue clients, and we found solutions. Ransomware and spear phishing have solutions too. Of course I’m not saying we fixed all of that, but we did come up with layered solutions, which we are implementing. Right now we’re working to ensure MFA (or at least something better than SMS for a second factor) is deployed, zero trust principles are in place, patches are applied quickly, and on and on. We are doing that. WE. Not them.

We’re going to have to do the same thing with AI. There aren’t solid approaches that work universally yet, but we can start trying, experimenting, testing, and probing. Like we always do.

#security #AI

@simplenomad I think one of the things that is really hard about it is that it is being thrown at so many different things. I mean, how you handle AI as a coding assistant is going to be different from how you handle it as a document summarizer. And when the use case is everything, on all of the platforms, all of the time, it becomes very difficult to think through. Which isn't to say it's insurmountable, just that it's a problem requiring decomposition. I think the first part has to be sitting down with users and saying, "We are going to set guidelines for one thing. Tomorrow we can tackle a different one, but this conversation is just about X." Sooner or later we can figure out the commonalities and start to generalize.