I recently realized that the 'autonomous SOC' idea is the same old snake oil packaged up with a new name. It's just a fancier and more intellectual-sounding version of the 'technology can prevent attacks / stop breaches' claim that has been disproven over and over (a close cousin of claims that compliance can do the same).
(The term has bothered me since I first heard it, but I hadn't thought about it deeply enough to see this until I was writing up this antipattern for the Zero Trust SecOps playbook).
If a security team believes this, they have to believe that attackers are cardboard cutouts who run exactly the same attacks every time and will miraculously give up (and open a fruit stand?) if defenders just buy and implement the right tool(s). People who believe this are also effectively saying that leaders can replace security people/salaries with a one-time purchase of tooling (a common misperception many already have).
I understand that people are excited by AI technology because it is very powerful and has a lot of ongoing potential to automate wasteful, repetitive human effort (just like SOAR and previous generations of automation tech did).
It will change how people do their jobs, but it won't replace a human or automate the whole job. SecOps/SOC jobs are some of the least likely to be fully automated because they face the full brunt of creative, intelligent human attackers finding ways around any defense. No matter how well we automate what we do today, attackers are paid to find a way around it by exploiting biases, oversights, seams, etc. in our preventions, detections, and response/recovery automation.
I have been trying to think about why people may believe this (and why it took me so long to see it myself).
So far, my best guesses are:
▪️ It appeals to the hope that we may finally 'win' the security battle against the attackers
▪️ The term 'autonomous' sounds intellectual or technical, as if it's been thought through or validated
▪️ We technologists have seen tech replace some legacy jobs over time, seen how repetitive some SOC work is, and wonder 'could it really happen?'
What are your thoughts here?
