Nicole Eagan believes a robot uprising draws nigh.
As the chief executive of Darktrace, a cybersecurity “unicorn,” or private firm valued at more than $1 billion, Eagan helps companies spot intruders in corporate networks, quarantine them, and defend data. The British firm’s technology uses machine learning techniques to gain an understanding of the internal state of customers’ networks and then watches for telltale deviations from the norm that may indicate foul play.
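The approach the article describes, learning what "normal" looks like on a network and flagging deviations, is a form of statistical anomaly detection. A minimal sketch of that idea follows; the metric, thresholds, and function names here are illustrative assumptions, not Darktrace's actual method.

```python
# Minimal sketch of "learn normal, flag deviations": build a baseline
# profile of one traffic metric, then flag samples far outside it.
# All names and numbers are illustrative, not Darktrace's implementation.
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple normal profile (mean, std) from observed traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Example: megabytes sent per hour by one workstation.
normal_traffic = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]
baseline = build_baseline(normal_traffic)

print(is_anomalous(12.2, baseline))   # a typical hour -> False
print(is_anomalous(480.0, baseline))  # a sudden spike  -> True
```

Real systems model many signals at once (connections, protocols, timing) rather than a single metric, but the core pattern is the same: profile normal behavior, then watch for outliers.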
While Darktrace uses A.I. techniques for defense, the company anticipates that thieves and spies will soon catch up. “I expect that we’re going to see artificial intelligence used by the attackers,” says Eagan, noting that there already have been “early glimpses” of that future coming to pass.
“It’s going to become A.I. against A.I. It’s going to become a full-on war of algorithms,” Eagan says.
Darktrace last year warned an A.I. committee organized by Britain’s House of Lords that A.I.-aided attackers could learn to imitate people’s writing styles in order to craft more effective phishing attacks, phony messages that aim to dupe their recipients. Dave Palmer, director of technology at Darktrace and author of the parliamentary submission, noted at the time that even hackers with no understanding of A.I. techniques could get up to speed and cause havoc in a matter of months.
Eagan’s fears are shared by her peers. A December 2017 survey of 400 corporate cybersecurity professionals, commissioned by Webroot, another Internet security business, found that 91% of respondents were concerned that hackers would use A.I. and machine learning techniques against companies.
“I like to call it the era when the machines are going to fight back,” Eagan says.