
Using A.I. to repel hackers is complicated. But these tricks can help

May 10, 2022, 9:27 PM UTC

A few months ago I talked to Zulfikar Ramzan, the former chief technology officer for cybersecurity firm RSA, about the problems of using deep learning to protect corporate security networks and to handle other related cybersecurity tasks.

As he explained, the trendy A.I. technique may be ill-suited for cybersecurity for several reasons. Companies may lack enough clean data for training neural networks to recognize patterns in hacking attempts, for instance. Hackers could also compromise a company’s deep-learning-powered security tool by “poisoning” the data that’s used to train it, rendering the tool ineffective. Additionally, A.I. researchers are often unable to explain how deep-learning systems reach conclusions, making troubleshooting A.I.-powered security tools a major problem, Ramzan noted.
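To make the poisoning risk Ramzan describes concrete, here is a deliberately simplified sketch (not any vendor’s actual system) of a toy malware detector that sets its alert threshold from training data. The scores, sample counts, and the attack itself are all hypothetical, but they show the mechanism: mislabeled training samples drag the learned threshold upward, letting a moderately suspicious file slip through.

```python
import numpy as np

# Hypothetical toy detector: flags a file as malicious when its
# "suspicion score" exceeds the midpoint of the two training-class means.
def fit_threshold(scores, labels):
    benign_mean = scores[labels == 0].mean()
    malicious_mean = scores[labels == 1].mean()
    return (benign_mean + malicious_mean) / 2.0

# Clean training data: benign files score ~0, malware scores ~10.
scores = np.array([0.0] * 100 + [10.0] * 100)
labels = np.array([0] * 100 + [1] * 100)
clean_thr = fit_threshold(scores, labels)  # midpoint of 0 and 10 -> 5.0

# Poisoning: an attacker slips 50 malware-like samples labeled "benign"
# into the training set, pulling the benign mean (and the threshold) up.
p_scores = np.concatenate([scores, np.full(50, 10.0)])
p_labels = np.concatenate([labels, np.zeros(50, dtype=int)])
poisoned_thr = fit_threshold(p_scores, p_labels)  # rises to ~6.67

# A moderately evasive malware sample scoring 6.0:
sample = 6.0
print(sample > clean_thr)     # caught by the clean model
print(sample > poisoned_thr)  # slips past the poisoned model
```

Training offline, as Caspi argues below, is precisely about keeping attackers from reaching that training set in the first place.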

Last week, I chatted with Guy Caspi, the CEO of the security startup Deep Instinct, about his thoughts on deep learning and security. He disagreed with Ramzan’s comments about deep learning, which makes sense considering Caspi’s company uses the technology to power its security tools for companies.

“My religion is deep learning,” Caspi explained.

He agreed that most companies lack enough clean data to train neural networks that can recognize the behavior and patterns of malware inside IT networks. But this doesn’t have to be a bottleneck as long as businesses employ enough workers to label the data, which can be costly.

Deep Instinct has over 25 full-time employees who spend most of their time annotating data, Caspi said, underscoring the importance of the chore.

When it comes to hackers infiltrating a company’s network and introducing data into a deep-learning system that would cause it to malfunction, Caspi said such a scenario is unlikely because most companies developing deep-learning technology do their A.I. training offline, within their own corporate data centers. That makes it “mission impossible,” he said, for a hacker to wreak havoc on the A.I. system.

“You have no connectivity to the net—you can’t break into the deep learning model,” Caspi said.

Still, there’s always a chance that someone from within the company could tamper with the deep-learning model. But Caspi explained that companies could eliminate the risk by giving access only to trusted employees.

As for being unable to explain how deep-learning-powered security tools make decisions, Caspi said it’s less of a problem than most people think. Companies can use statistical methods to understand how A.I. systems work, as my colleague Jeremy Kahn previously reported. While these methods aren’t perfect, they can provide a decent understanding, Caspi said.
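One common statistical technique of the kind Caspi alludes to is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, without ever looking inside the model. The sketch below is a hypothetical illustration with a made-up linear “black box” and synthetic data, not a method Deep Instinct has described using.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": a fixed linear scorer whose weights we pretend not
# to know. Only feature 0 actually influences the prediction.
W = np.array([2.0, 0.0])
def model(X):
    return (X @ W > 0).astype(int)

# Synthetic data: feature 0 carries the label, feature 1 is pure noise.
y = rng.integers(0, 2, size=500)
X = np.column_stack([
    np.where(y == 1, 1.0, -1.0) + rng.normal(0, 0.1, 500),  # informative
    rng.normal(0, 1.0, 500),                                # irrelevant
])

def accuracy(X):
    return (model(X) == y).mean()

base = accuracy(X)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
    importances.append(base - accuracy(Xp))
    print(f"feature {j}: importance = {importances[j]:.3f}")
```

Shuffling the informative feature tanks accuracy, while shuffling the noise feature changes nothing, so the probe correctly identifies which input the black box relies on.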

Ultimately, he agreed with some A.I. proponents who liken A.I.’s explainability problem to certain pharmaceutical drugs: scientists can’t always explain how the drugs treat disease, but there’s enough empirical evidence showing that they work.

“We may not know exactly all the features and everything that is running on the neural network, but at the end of the day, you know, we protect the world,” Caspi said.

I’d love to hear from readers about deep learning’s role in cybersecurity. Send me your thoughts.

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

Facial recognition gets some limits. Clearview AI will no longer be able to sell its database of about 20 billion photos of people’s faces as a result of a settlement with the American Civil Liberties Union, The New York Times reported. However, the startup, which built its database by scraping photos from the public Internet, can still sell its technology to government agencies. The ACLU filed a lawsuit against Clearview AI in May 2020 in Illinois, alleging that the startup’s technology violated the state’s biometric privacy laws.

Hugs all around. The startup Hugging Face has raised $100 million in series C financing and now has a private valuation of $2 billion. The company has become popular with A.I. researchers who use its online repository to access machine learning models, datasets, and other useful tools for A.I. development. Lux Capital led the investment round, which also included new Hugging Face investors Sequoia Capital and Coatue Management. Other investors who participated included Betaworks, AIX Ventures, and basketball star Kevin Durant.

A.I. meets sound. Online audio company SoundCloud said it bought the machine learning startup Musiio for an undisclosed amount. Musiio developed technology that can analyze songs, identify patterns and characteristics, and automatically annotate them. SoundCloud said it will use Musiio’s machine learning technology to improve its “experience and help to identify talent and trends ahead of anybody else.” 

Microsoft heads to Louisville. The University of Louisville said it is now a Microsoft Academic Research Consultant, or MARC, and will help the technology giant research techniques to help “sift through large data sets and glean insights” and develop more sophisticated data training techniques. Other MARC members include Duke University, the University of Rochester, the University of Central Florida, the University of South Florida, Texas A&M, Oregon State University, and Washington University in St. Louis.


EYE ON A.I. TALENT

Apple’s director of machine learning Ian Goodfellow is leaving the iPhone maker, The Verge journalist Zoë Schiffer reported via Twitter. Goodfellow reportedly disagreed with Apple’s return-to-work policy, which requires certain employees to work from the company’s corporate office three days a week. Goodfellow is a major A.I. researcher whose work developing so-called generative adversarial networks helped lead to the rise of deepfakes, in which photos, audio, and videos appear to be real but are computer generated. He was a member of Fortune’s 40 Under 40 list in 2019.

The Economist Group hired Michael Fleshman to be the media company’s chief technology officer, and Liz Goulding to be its chief product officer. Fleshman was previously the CTO of HOOQ, a joint-venture of Singtel, Warner Pictures, and Sony Pictures. Goulding was previously the group vice president of sports products at Discovery.


EYE ON A.I. RESEARCH

A.I. to analyze heart pumps. The Mayo Clinic said it presented new research at the Heart Rhythm Society conference in early May detailing how machine learning can be used to identify patients suffering from weak heart pumps. The clinic conducted its study on participants who used the Apple Watch to monitor their ECG signals, which were then analyzed by the researchers’ A.I. system to assess the health of their hearts.

“It is absolutely remarkable that AI transforms a consumer watch ECG signal into a detector of this condition, which would normally require an expensive, sophisticated imaging test, such as an echocardiogram, CT scan or MRI,” Paul Friedman, the chair of the Mayo Clinic’s cardiovascular medicine department, said in a statement.

FORTUNE ON A.I.

Peter Thiel’s surveillance firm thinks the world ‘significantly underestimates’ the risk of nuclear conflict in Europe. It wasn’t the only eyebrow-raising quote in its shareholder letter—By Andrew Marquardt

Tech companies are slowing hiring or announcing layoffs. Is this the beginning of a cooler job market?—By Tristan Bove

Silicon Valley is no longer recruiting talent by city, but time zone—By Christiaan Hetzner

UberEats changed its app, and now walking couriers are out of luck—By Marco Quiroz-Gutierrez

BRAIN FOOD

Probing an A.I. disaster in the Netherlands. The technology publication IEEE Spectrum published a deep dive into the Dutch tax authorities’ use of a machine learning algorithm intended to identify people who may be committing tax fraud. Over time, the machine learning system developed biases, using factors like low income, ethnicity, and whether someone was a Dutch citizen to deduce who was more likely to commit tax fraud. Because of the A.I. system, tax authorities “baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.”

The Dutch tax disaster is now serving as a case study for European lawmakers developing the AI Act, intended to be the region’s most sweeping legislation regulating the use and development of artificial intelligence.

From the article: “If the AI Act had been put in place five years ago, I think we would have spotted [the tax algorithm] back then,” says Nicolas Moës, an AI policy researcher in Brussels for the Future Society think tank.
