
Cybersecurity experts warn of A.I.’s drawbacks in combating threats

November 9, 2021, 10:37 PM UTC

Someday, artificial intelligence cybersecurity systems will be able to identify and eliminate threats at the drop of a black hat. Unfortunately for companies and governments, the human element is still preventing this from happening.

“There are three parts to any security strategy: you want to be able to detect, to prevent, and to respond,” John Roese, global chief technology officer of Dell Technologies, said at the Fortune Brainstorm A.I. conference in Boston on Monday. “It turns out that in the ‘detect’ area, we’re well underway. If you’re using a security information and event-management service or a managed-security service provider, and they are not already using high degrees of advanced machine intelligence to detect threats, you’ve already lost. The other two, however, are not in place yet. For instance, once that attack occurs and you are compromised, the speed at which you can respond today is primarily gated by human effort, which is not fast enough, because the attack is definitely coming from something that’s enabled by machine intelligence and advanced automation.

“Candidly, we still have a lot of work to do in this regard, because I think we’ve over-rotated towards the detection model,” he added. “So, it’s a good news and a bad news story. We’re better at detecting and we’re probably moving at the same speed. But the response mechanisms, the reaction to them, are clearly not where they need to be today because they’re mostly driven by human effort.”
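The gap Roese describes, automated detection feeding a human-gated response, can be made concrete with a minimal Python sketch. Everything here is illustrative: the event fields, the anomaly score (assumed to come from some already-trained model), and the ticket queue stand in for no particular vendor's pipeline.

```python
# Minimal sketch of the detect/respond gap: detection is an instant,
# automated score check, but "response" just parks the alert in a queue
# where it waits for a human analyst. All names and thresholds are
# illustrative assumptions, not any vendor's actual system.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Event:
    source_ip: str
    anomaly_score: float  # assume a trained model already scored this event

analyst_queue: Queue = Queue()

def detect(event: Event, threshold: float = 0.9) -> bool:
    """Automated detection: a threshold check that runs in microseconds."""
    return event.anomaly_score >= threshold

def respond(event: Event) -> None:
    """Human-gated response: the alert sits until an analyst picks it up."""
    analyst_queue.put(event)

for e in [Event("10.0.0.4", 0.97), Event("10.0.0.9", 0.12)]:
    if detect(e):
        respond(e)

print(f"{analyst_queue.qsize()} alert(s) waiting on human effort")
```

The asymmetry is the whole point: the `detect` step runs at machine speed, while everything after it moves at the speed of the analyst queue.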

Corey Thomas, the chairman and CEO of Rapid7, which deals in security analytics and automation, agreed. “If you look at what’s happening more broadly, most of cybersecurity, believe it or not, is still incredibly manual in orientation,” he said. “In a manual environment where we are massively resource-constrained and things are escalating, we have to get better at doing two things: one, automating more things, but also, two, getting comfortable with which decisions humans should make and which decisions computers are better suited to make. I would say that there’s still a lack of trust, both in automation and A.I., for some of the operational challenges.”
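Thomas's division of labor, deciding which calls a computer makes and which a person makes, can likewise be sketched as a simple routing policy: automate routine, reversible actions when model confidence is high, and escalate consequential ones to an analyst. The action names and confidence cutoff below are hypothetical, not Rapid7's product logic.

```python
# Hedged sketch of human-vs-machine decision routing. The action tiers
# and the 0.95 confidence cutoff are hypothetical illustrations.
AUTOMATABLE = {"block_known_bad_ip", "quarantine_malware_hash"}
HUMAN_ONLY = {"disable_executive_account", "take_production_db_offline"}

def route(action: str, model_confidence: float) -> str:
    """Automate routine, reversible actions; escalate consequential ones."""
    if action in HUMAN_ONLY:
        return "escalate to analyst"
    if action in AUTOMATABLE and model_confidence >= 0.95:
        return "execute automatically"
    return "escalate to analyst"  # default to a human when trust is low

print(route("block_known_bad_ip", 0.99))          # execute automatically
print(route("take_production_db_offline", 0.99))  # escalate to analyst
```

Defaulting to escalation when confidence is low is exactly the trust problem Thomas names: the policy only pays off once teams grow comfortable widening the automatable set.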

A large part of the problem, as both experts see it, is that attackers are using A.I. and automation on a less complex but still very effective scale, one that allows them to exploit flaws in security systems.

“The level of automation is just pervasive,” said Roese. “The machine intelligence, the machine-learning technologies that allow them to process data to find vulnerabilities, that’s fairly well utilized. Full-blown autonomous systems, not so much yet. And the main reason for that is the bad guys actually have all the time in the world. They just need to find one gap. They don’t need to respond at scale. You, on the other hand, have to react to every bad guy doing every theoretical attack, so you really have to counter that mismatch on your side by using automation and moving to more aggressive use of A.I. to automate the response and detection processes, because the bad actors only have to find one vulnerability and you have to protect against everything.”

Though he warned that there’s no “big A.I. hacking brain that actually makes all the decisions,” Thomas stressed that hackers are increasingly using automation to avoid detection by A.I. security systems. Worse, they’re getting better and better at it, while their targets aren’t sufficiently improving their own protection measures.

“The bad guys are crushing many of us in terms of automation,” he said. “They’re getting much, much better at using intelligent systems and A.I. to do reconnaissance, which allows them to narrow down targets very effectively. They’re also using A.I. to decompose software to figure out where vulnerabilities exist, extraordinarily effectively.”

Asked for parting advice at the conclusion of the event, Roese offered a simple idea: “Don’t view A.I. in the security context as an added feature. You have to treat it as a core component of all things security, just like all things business process or all things application. Don’t compartmentalize it into a specialist team that, in isolation, deals with A.I. Develop and invest in the capability across the entire organization, because it’s a tool, and if you don’t use it everywhere, you’re basically leaving something on the table.”
