Last week, popular stock trading app Robinhood revealed another huge data breach. Hackers stole five million customer names, two million customer email addresses, and more detailed, valuable personal information from a smaller group of users. With these kinds of attacks becoming increasingly common, many companies are hoping that A.I. can play a role in bolstering their cyber defenses.
The good news is that A.I. is increasingly helping. Last week, at Fortune’s Brainstorm A.I. conference in Boston, I moderated a panel on A.I.’s role in cybersecurity with John Roese, the global chief technology officer at Dell, and Corey Thomas, the chairman and CEO of Rapid7, which sells cybersecurity software. Both Roese and Thomas said that A.I. is now playing a key role in helping most large organizations detect cyberattacks. Most of these are A.I. systems that learn what a company’s normal network activity looks like and then flag activity that deviates from business-as-usual.
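To give a flavor of what that anomaly-detection approach looks like under the hood, here is a minimal, hypothetical sketch in Python using scikit-learn’s IsolationForest. The feature names and numbers are invented for illustration; commercial products are far more elaborate, but the basic idea of learning a baseline and flagging deviations is the same.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-connection features: bytes sent, bytes received,
    # connection duration in seconds, and number of distinct ports touched.
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[5_000, 20_000, 30, 2],
                                scale=[1_000, 5_000, 10, 1],
                                size=(10_000, 4))

    # Learn the shape of business-as-usual traffic.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # Score new activity; a prediction of -1 means it deviates from the baseline.
    suspicious = np.array([[500_000.0, 1_000.0, 2.0, 40.0]])  # huge upload, many ports
    print(detector.predict(suspicious))  # [-1] -> flag for human review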
This kind of software represents a big advance over systems designed simply to keep the bad guys out of the network. Firewalls alone don’t cut it in today’s world, where very sophisticated hacking tools are easily available to almost anyone on the dark web. So most companies are employing A.I.-based systems in addition to firewalls to try to detect attackers who get through those defenses.
But that’s where the good news from Roese and Thomas kind of ended. The problem, Roese said, is that the bad guys are increasingly using A.I. too. Attackers are automating the task of probing firewalls, searching for the right combination of attacks that will get through, and even using machine learning to compose more convincing phishing emails that will allow them to penetrate networks. Thomas noted that most of the A.I. used by cybercriminals so far isn’t particularly sophisticated. But, he said, it doesn’t have to be. Often, simple methods work well. And as Roese noted, the attackers can try a lot of different attack combinations and only have to get it right once. The defenders have to get it right every time.
Another problem, according to both Roese and Thomas, is that while A.I. has made great inroads in detecting cyberattacks in the past few years, it remains underutilized in preventing cyberattacks, by ensuring good cybersecurity practices are followed, and in responding to cyberattacks once they are underway.
“Once that attack occurs and you are compromised, the speed in which you can respond today is primarily gated by human effort — which is not fast enough because the attack is definitely coming from something that’s enabled by machine intelligence, advanced automation,” Roese said.
Thomas noted that the easiest way to prevent cyberattacks is to just perform routine network maintenance, limit administrative access permissions, perform routine software updates, and regularly change passwords—all the kinds of cyber hygiene at which companies stumble. A.I. can help automate many of these processes, but so far few businesses are using it in this way.
Likewise, once an attack has been detected, speed is essential. And yet most companies, Roese said, still depend on human cybersecurity experts to figure out how to mitigate an attack. That needs to change, he said. There are many steps that can be taken automatically to contain a hack and even push the attacker out of the network. The more sophisticated A.I.-enabled cybersecurity software, such as that sold by Rapid7, Darktrace, and Vectra, already has this capability. But companies are sometimes reluctant to use it, Roese said, for fear that it will be triggered by false alarms, unnecessarily shutting down essential IT functions.
“I would say that there’s still a lack of trust, both on automation and A.I., for some of the operational challenges,” Thomas said.
What’s worse, the A.I. systems that companies use to run other key parts of their business actually represent a great way for hackers to gain entry into and attack networks. These systems often have broad permissions to draw data from and interact with other software across a network. They are, essentially, superusers, much like the human network administrators who are a favorite target of hackers. That makes them a great target, Roese said. And if attackers are looking for high-value data to steal or, in the case of a ransomware attack, hold hostage, the data contained in trained A.I. algorithms is some of the most expensive data, on a per-bit basis, in an organization, he said.
Right now, too few companies are thinking about how to secure these A.I. systems, he said.
Not to end on too much of a down note, there was some potential good news on A.I.’s application to cybersecurity last week. BT, the British telecom group, announced that its researchers had tested A.I.-enabled cybersecurity software that had been trained on epidemiological models of biological diseases. It can, according to BT, “automatically model and respond to a detected threat within an enterprise network.” The software, which BT calls Inflame, uses this model to “predict the next stages of an attack and rapidly identify the best response to prevent it from progressing any further.”
With that, here’s the rest of this week’s news in A.I. Thank you to my colleague and Fortune “Eye on A.I.” co-writer Jonathan Vanian for compiling the news, talent and “Fortune on A.I.” sections of the newsletter this week.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
A.I. IN THE NEWS
Robo-mania. North American sales of robotics reached a record $1.48 billion for the first nine months of 2021, topping the record of $1.47 billion set during the first nine months of 2017, according to a report by The Wall Street Journal citing statistics from the Association for Advancing Automation, a trade association. “With labor shortages throughout manufacturing, logistics and virtually every industry, companies of all sizes are increasingly turning to robotics and automation to stay productive and competitive,” the association’s president, Jeff Burnstein, said in a statement.
Splunk CEO waves goodbye. Data analytics and IT firm Splunk said that CEO Doug Merritt would step down and be replaced by company chair Graham Smith. Splunk investors were concerned about the sudden CEO departure, sending the company’s shares down 18% after the announcement.
Behold, the giant language models. Nvidia debuted the NeMo Megatron developer tools, which companies can use to train their own language models, software that understands and responds to written and spoken language. The developer tools are based on Nvidia’s Megatron large language model, a competitor to other giant A.I. language models like OpenAI’s GPT-3 and Google’s BERT software. Meanwhile, the U.K. government said it would investigate Nvidia’s $40 billion takeover of British semiconductor giant ARM in order to probe potential “antitrust and security issues,” according to a report by The Financial Times.
Deep learning meets weather. Google’s A.I. unit published a blog post detailing its research into using deep learning to predict weather more accurately. Google researchers said that deep learning provides an alternative weather forecasting method to conventional forecasting systems that rely on supercomputers and “traditional physics-based techniques” that humans must program. Google’s deep learning weather forecasting system performed better than an existing forecasting system, the company said, and points toward a future of weather prediction systems that do “not rely on hand-coding the physics of weather phenomena” but instead simply ingest weather data to make their predictions. Google subsidiary DeepMind is also researching similar A.I.-powered weather forecasting systems.
EYE ON A.I. TALENT
Microsoft software development subsidiary GitHub named Paige Bailey as its director for data science and MLOps, short for machine learning operations. Bailey was previously the principal product manager of developer tools at Microsoft and a lead product manager at Google’s DeepMind research unit.
Databook, a startup specializing in sales software, hired Bruno Fonzi as vice president of engineering. Fonzi was previously a director of engineering at Salesforce.
The U.S. National Guard hired Martin Akerman as its first chief data officer, reported government news publication Nextgov. Akerman was previously a data strategy officer for the U.S. Air Force.
EYE ON A.I. RESEARCH
Imagining disaster in order to avoid it. A problem with trying to use reinforcement learning, whereby an A.I. system learns from experience rather than from historical data, is that a bad decision in many real-world scenarios can be catastrophic. That's why reinforcement learning is mostly used to master video games or simulations, in which the consequences of getting it wrong aren't severe.
Now a group of researchers from China's Zhejiang University and Huawei have proposed a system in which an A.I. would learn from studying examples of when people decline to pursue an action because of its danger. After mastering the challenge of predicting when humans will judge an action unsafe, using supervised learning, the system would then continue training using reinforcement learning. During this process, the A.I. would try to "imagine" the consequences of its actions (by projecting forward what it thinks is most likely to happen). If it determines that a human would likely block an action because it's unsafe, the system blocks the action itself. The research, published in the non-peer-reviewed research repository arxiv.org, could open the door to wider use of reinforcement learning in real-world situations.
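To make the shape of that approach concrete, here is a toy, hypothetical sketch in Python. It is not the researchers' code; it simply shows how a supervised model of "would a human block this action?" can act as a veto filter on what a reinforcement-learning agent is allowed to try.

    import random

    class SafetyModel:
        """Stand-in for a supervised model trained on examples of humans
        declining actions as unsafe. Here it is just a hand-written rule."""
        def predicts_human_would_block(self, state, action):
            # Hypothetical rule: block any move that would take the agent
            # below position zero (say, off a ledge).
            return state + action < 0

    def choose_action(state, candidate_actions, safety_model):
        # Filter out actions the safety model predicts a human would veto,
        # then let the policy (here: random exploration) pick among the rest.
        allowed = [a for a in candidate_actions
                   if not safety_model.predicts_human_would_block(state, a)]
        return random.choice(allowed) if allowed else None

    safety = SafetyModel()
    print(choose_action(state=1, candidate_actions=[-2, -1, 0, 1],
                        safety_model=safety))  # never picks -2, the "unsafe" move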
FORTUNE ON A.I.
IBM debuts quantum machine it says no standard computer can match—By Jeremy Kahn
Bias in A.I. is a big, thorny, ethical issue—By Jonathan Vanian
The U.S. urgently needs an A.I. Bill of Rights—By Steve Ritter
How companies from FedEx to Intel are getting their A.I. projects to the finish line—By Anne Sraders
Rivian faces a tougher road to profitability than Tesla ever did, analysts warn—By Adrian Croft
BRAIN FOOD
Polyglots vs. bilinguals. Facebook researchers have shown that massive A.I. systems trained to translate different languages simultaneously can also translate better between any of the language pairs in their repertoire than smaller A.I. algorithms trained specifically for just two languages. The findings, published in the non-peer-reviewed research repository arxiv.org, involved several different large A.I. systems that learned to translate between Czech, German, Icelandic, Japanese, Russian, Chinese, and the West African language Hausa. The company found that a neural network with nearly 4 billion variables outperformed other A.I. designs, including some that were supposed to more closely mimic how the brain works. (Neural networks in general are loosely based on the human brain, but only very loosely.)
The research is significant because it shows the extent to which large tech companies are increasingly turning to a few ultra-large A.I. systems to form the "foundation" on which they build a host of more narrow services, as opposed to training much smaller, narrower A.I. systems for each specific task. The fact that these large systems seem to perform better than narrow systems also has important implications for the democratization of A.I. Training and running these massive A.I. models is expensive, meaning that only tech giants will be able to afford to build and host them, making it hard for any other businesses to avail themselves of the same capabilities unless they buy them from prominent tech companies such as Google, Microsoft, OpenAI, or Baidu.