‘Deep learning is a completely terrible idea for security,’ says cybersecurity expert

February 22, 2022, 11:23 PM UTC

Zulfikar Ramzan, the former chief technology officer for cybersecurity giant RSA, holds a controversial belief regarding artificial intelligence and security. 

“I think deep learning is a completely terrible idea for security,” Ramzan bluntly says. His statement flies in the face of conventional wisdom that neural networks and their ability to discover patterns in massive amounts of data will revolutionize the cybersecurity industry.

Ramzan, now the chief scientist and CEO of Aura Labs, concedes that “there’s always exceptions” and that some deep learning experts may be doing innovative work applying A.I. to security problems. But he says deep learning still has major issues that need to be addressed before he would be comfortable seeing the technique used more widely in the security industry.

For instance, Ramzan notes that the researchers who achieved major advances in computer vision with deep learning had access to a wealth of labeled images. Security researchers may not have the same volume of clean, labeled data needed to train useful neural networks.

“You got to start off with good data; good data is your foundation,” Ramzan says. “You can’t make good wine from bad grapes or, you know, good beer from bad hops.”

He also believes that hackers could exploit a major open problem with neural networks: researchers have trouble explaining how the software makes its decisions. Ramzan says that if criminals were able to compromise the data used to train a deep learning system (an attack sometimes referred to as data poisoning), they could subtly influence the neural network without a company’s security team realizing the problem.

“I think in a world where you have adversaries, trying to do deep learning is very dangerous because it’s hard to understand what the model is doing,” Ramzan says.
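For readers unfamiliar with data poisoning, the toy sketch below shows the general idea. Nothing here is drawn from Ramzan’s work: the dataset, model, and numbers are invented, and the example assumes numpy and scikit-learn are available. It simply illustrates how an attacker who can flip a small fraction of training labels can quietly weaken a malware classifier without touching the model code at all.

```python
# Toy label-flipping data poisoning demo (invented data, not from any real system).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "file feature" vectors: class 1 = malware, class 0 = benign.
n, d = 2000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, y_train = X[:1500], y[:1500].copy()
X_test, y_test = X[1500:], y[1500:]

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean.predict(X_test)))

# Poison the training set: flip a fifth of the malware labels to benign
# (roughly 10% of all training rows).
malware_idx = np.where(y_train == 1)[0]
flip = rng.choice(malware_idx, size=len(malware_idx) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```

The poisoned model typically scores noticeably worse on held-out data, and nothing in the training pipeline itself raises an error, which is exactly the kind of quiet failure Ramzan is warning about.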

Ramzan believes rule-based systems are easier to describe and debug than deep learning systems. 

He recalls an incident from a few years ago, when a distinguished software engineer at a “super well-known” company he declined to name built an A.I. system designed to detect malware. The engineer didn’t realize, however, that the data he used to train the system contained timestamps indicating when each digital file was created. The A.I. system began incorporating those timestamps into its analysis, even though a file’s creation time has no bearing on whether it is malicious.

“He collected data in a way that was kind of subtly biased,” Ramzan says. “There was an attribute in there that should not have been relevant.”

Ramzan was only able to spot the problem because he recognized that the datasets used to train the A.I. contained irrelevant attributes. 

Had the model gone into production without anyone examining the underlying datasets, the company could have ended up running an ineffective A.I.-powered malware detection tool.
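To make that failure mode concrete, here is a small reconstruction of the class of mistake Ramzan describes. The data and numbers are invented (this is not the actual incident), and it assumes numpy and scikit-learn. If malware samples happen to be collected later than benign ones, a file-creation timestamp becomes a spurious shortcut the model latches onto, and a quick look at feature importances exposes it.

```python
# Contrived example of a collection artifact (timestamp) leaking label information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 3000
is_malware = rng.integers(0, 2, size=n)

# Genuine, weakly predictive file features.
real_features = rng.normal(size=(n, 5)) + 0.3 * is_malware[:, None]

# Collection artifact: malware was gathered in a later batch, so its
# timestamps skew higher even though creation time says nothing about intent.
timestamp = rng.normal(loc=is_malware * 5.0, scale=1.0)

X = np.column_stack([real_features, timestamp])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, is_malware)

# Feature importances reveal the leak: the timestamp column dominates.
names = [f"feature_{i}" for i in range(5)] + ["timestamp"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name:10s} {imp:.3f}")
```

In practice, the same kind of check, or simply dropping attributes that cannot plausibly relate to maliciousness, is a cheap guard against this sort of biased training data.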

Ramzan isn’t completely opposed to A.I., and he notes that many security products incorporate more basic machine learning rather than the more cutting-edge deep learning technology that’s harder for researchers to explain.

But he opposes using deep learning for its own sake when more conventional techniques work just fine. And when deep learning goes wrong, the consequences can be severe.

“You can have a great tool, but if someone doesn’t know how to use it properly, it’s like giving somebody a bazooka and they can shoot themselves in the foot with it,” Ramzan says.

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

A.I.’s hard problems need money to address. Schmidt Futures, the philanthropic firm of Eric and Wendy Schmidt, debuted the AI2050 initiative that will invest $125 million over the next five years into researchers “working on key opportunities and hard problems that are critical to get right for society to benefit from AI”. “AI will cause us to rethink what it means to be human,” Eric Schmidt said in a statement. “As we chart a path forward to a future with AI, we need to prepare for the unintended consequences that might come along with doing so.”

A.I. can’t copyright works of art. A.I. systems that create works of art will be unable to copyright their masterpieces, according to a decision from the U.S. Copyright Office, as reported by The Verge. From the article: Last week, a three-person board reviewed a 2019 ruling against Steven Thaler, who tried to copyright a picture on behalf of an algorithm he dubbed Creativity Machine. The board found that Thaler’s AI-created image didn’t include an element of “human authorship” — a necessary standard, it said, for protection.

Here come the synthetic voices. iHeartMedia said it would use A.I. technology from the company Veritone to incorporate synthetic voices in various shows available on its iHeart Podcast Network. The synthetic voice technology “allows celebrities, athletes, influencers, broadcasters, podcasters, and other talent across numerous industries to securely create and monetize verified synthetic voices that can be transformed into different languages, dialects, accents and more,” the companies said in an announcement.

A new A.I. hub hits the scene. Underwriters Laboratories and Northwestern University have created an A.I. research hub intended to examine A.I. systems and their impact on society. The goal is for the hub to develop methods to “better incorporate safety and equity into the fast-growing technology,” according to a statement from the organizations.

EYE ON A.I. TALENT

Firebolt picked Mosha Pasumansky to be the data warehouse company's chief technology officer, technology news site CRN reported. Pasumansky was previously a principal engineer at Google, working on the cloud unit's BigQuery database technology.

NetImpact Strategies, Inc. hired Naren Dasu to be the IT firm’s CTO. Dasu previously worked on product management for Amazon Web Services’ VPN technology unit.

The Postal Regulatory Commission chose Russell Rappel-Schmid to be the postal service regulatory agency’s chief digital officer. Rappel-Schmid was previously the chief digital officer for the state of Alaska.

EYE ON A.I. RESEARCH

A.I. as a parking analyzer. Researchers from the University of Pittsburgh published a paper about using deep reinforcement learning, in which software learns through trial and error from reward signals, to create an energy-efficient parking monitoring system that only activates when needed. The authors said that most cameras used for video analytics do not need to be on all the time, so an A.I. system that lets them work only when needed offers “significant potential in reducing energy use.”

From the paper: In this paper, we considered a parking video analytics platform and proposed RL-CamSleep, a deep reinforcement learning-based technique that can improve the system’s overall energy savings while retaining its utility (in the form of accuracy). Our approach is orthogonal to existing work that focuses on improving hardware and software efficiency. We evaluated our approach on a city-scale parking dataset with diverse parking profile patterns.
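As a rough illustration of the underlying idea only, the sketch below is not the paper’s RL-CamSleep system; the traffic model, rewards, and hyperparameters are invented, and it uses simple tabular Q-learning rather than deep reinforcement learning. It shows an agent learning an hour-by-hour wake/sleep schedule for a camera that trades missed parking events against energy use.

```python
# Toy tabular Q-learning wake/sleep scheduler (invented setup, not RL-CamSleep).
import numpy as np

rng = np.random.default_rng(2)

HOURS = 24
ACTIONS = [0, 1]  # 0 = sleep (save energy), 1 = wake (analyze video)
# Hypothetical chance of a parking event in each hour, peaking in early afternoon.
arrival_prob = 0.1 + 0.6 * np.exp(-((np.arange(HOURS) - 13) ** 2) / 18.0)

q = np.zeros((HOURS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(5000):
    for hour in range(HOURS):
        # Epsilon-greedy action selection.
        action = rng.integers(len(ACTIONS)) if rng.random() < eps else int(q[hour].argmax())
        event = rng.random() < arrival_prob[hour]  # did a car arrive or depart?
        if action == 1:
            reward = 1.0 if event else -0.2   # caught an event vs. wasted energy
        else:
            reward = -1.0 if event else 0.1   # slept through an event vs. saved energy
        nxt = (hour + 1) % HOURS
        q[hour, action] += alpha * (reward + gamma * q[nxt].max() - q[hour, action])

policy = q.argmax(axis=1)
print("awake hours:", [h for h in range(HOURS) if policy[h] == 1])
```

After training, the learned policy keeps the camera awake only during hours when events are likely enough to justify the energy cost, which is the intuition behind activating video analytics on demand.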

FORTUNE ON A.I.

DeepMind A.I. helps control nuclear fusion reaction, potentially producing more energy—By Jeremy Kahn

A strategy to minimize bias in A.I. that any company can use—By Jonathan Vanian

America must win the race for A.I. ethics—By Will Griffin

Intel’s turnaround tale clashes with circumspect Wall Street—By Jacob Carpenter

There’s a huge surge in hackers holding data for ransom, and experts want everyone to take these steps—By Amiah Taylor

BRAIN FOOD

Is Canada losing its A.I. momentum? An opinion piece by A.I. researcher Blair Attard-Frost published by The Globe and Mail probes some criticism of Canada’s A.I. strategy. The author believes that while the country “was once a promising global leader in AI strategy, Canada has now fallen behind other countries and faces a labyrinthian mess of disjointed policies and programs.”

Several of the biggest developments in A.I. have stemmed from Canadian universities and researchers, with major tech companies setting up research hubs in the country to help them pursue their own A.I. projects and hire local talent.

Attard-Frost believes that Canada’s A.I. strategy is too fragmented and doesn’t possess “well-defined mechanisms for co-ordinating AI-related strategic planning, policy development, research and investment between departments and governments.”

