The benefits of ‘shallow’ artificial intelligence

May 4, 2021, 3:31 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

The conventional wisdom about artificial intelligence is that bigger is better.

Consider neural networks, the software used to train algorithms that can discover patterns in data that humans might miss. Researchers are building ever-larger versions that can analyze massive amounts of data, and thus do things like generate more realistic text in response to a query.

But these enormous neural networks come with some costs, including making it difficult for researchers to figure out how and why the software makes its predictions and decisions. When these neural networks become so big, researchers can get lost attempting to make sense of the billions of interconnected calculations taking place.

That’s partly why financial giant FICO uses so-called shallow neural networks in some of its products, including one that’s used to identify credit card fraud, said FICO chief analytics officer Scott Zoldi. 

A shallow neural network is a small neural network with only a few layers in which calculations take place. By contrast, the neural networks that help power the language systems of Google and the OpenAI research lab contain many more layers that can accommodate and analyze much more data.
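To make "shallow" concrete, a one-hidden-layer network can be sketched in a few lines of Python. This is a generic, hypothetical illustration with random, untrained weights and made-up feature names, not FICO's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "shallow" network: a single hidden layer between input and output.
# The weights below are random placeholders; a real fraud model would
# learn them from labeled transaction data.
W1 = rng.normal(size=(8, 16))   # 8 input features -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))   # 16 hidden units -> 1 output score
b2 = np.zeros(1)

def predict(x):
    """One forward pass: transaction features -> fraud score in (0, 1)."""
    hidden = np.maximum(0.0, x @ W1 + b1)             # ReLU activation
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output

# A hypothetical transaction described by 8 numeric features
# (amount, distance from home, time since last purchase, etc.).
features = rng.normal(size=8)
score = predict(features)[0]
print(score)  # a number between 0 and 1
```

A deep network simply stacks many more such layers; every extra layer adds interconnected calculations that a reviewer must trace, which is part of why smaller networks are easier to explain.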

Regarding credit card fraud, Zoldi said there’s less need for companies to use big neural networks because researchers already know what bad behavior—the kind that indicates fraud—looks like. For instance, if someone steals the credit card of a person living in California and uses it to buy a lawnmower in Texas just a few days later, researchers can tell the transaction is likely fraudulent because it resembles other common scam purchases. They don’t need a giant neural network to tip them off.

In the case of identifying credit card fraud, Zoldi said that shallow neural networks, along with more conventional statistical techniques, work just fine. The benefit is that FICO can more easily tell regulators how its A.I. system functions, because smaller neural networks are easier to probe.

That doesn’t mean that FICO isn’t exploring the use of big neural networks. Zoldi said the company has several deep learning research projects in progress. Giant neural networks, he said, will become important for security researchers as new forms of online payment, including ones tied to cryptocurrencies, grow more popular. Because these payment systems are so new, security researchers may need help from big neural networks to identify fraud patterns they aren’t yet familiar with.

Zoldi acknowledged that for A.I. researchers, the use of shallow neural networks isn’t particularly cutting edge or prestigious. But he said he has found a way to respond to critics who say his team is failing to push A.I.’s frontiers.

“When I respond about explainability and responsibility then they usually go away,” Zoldi joked.

Jonathan Vanian 


France’s future surveillance state. France is interested in using A.I. and related algorithms to monitor the Internet for possible signs of terrorist activity, The Wall Street Journal reported. A proposed government bill would call for telecoms to monitor web pages in real time on behalf of the country, ultimately leading to A.I. systems that “would alert intelligence officials when certain criteria are met, such as an internet user visiting a specific sequence of pages.”

Self-driving cars will need better “brains.” Volkswagen Group intends to design its own specialized computer chips to power its planned self-driving cars, the company told the German business publication Handelsblatt, per auto publication Motor Authority. The move would be akin to how Tesla designs its own computer chips and then outsources their production to Samsung, the report said.

Big bucks for A.I.-powered drug discovery. Exscientia, a startup specializing in using A.I. to aid the drug discovery process, has closed a $225 million funding round, with SoftBank, via the company’s second Vision Fund, providing an additional $300 million to be used “at Exscientia’s discretion.” Other investors that participated in the startup’s latest funding include Novo Holdings, Mubadala Investment Company, Farallon Capital, GT Healthcare Capital, and Bristol-Myers Squibb.

Big Problems in an A.I. dungeon. Latitude, the publisher of a popular game AI Dungeon, introduced a system that “stops the game from generating sexual content involving minors” after players enter certain prompts, gaming publication Polygon reported. AI Dungeon became popular for its entertaining procedural story generator, built using the GPT-3 language model created by OpenAI. From the report: According to a statement from Latitude, the system cast a far wider net than anticipated, sometimes blocking the procedural generation of stories involving children or anything related to specific phrases like “five years old.”

Musicians unite against a Spotify A.I. patent. A coalition of over 180 musicians and human rights groups is urging Spotify to never use, sell, license, or monetize a patented technology that its creators claim can analyze a person’s voice and recommend music based on that person’s "emotional state, gender, age, or accent." The non-profits Fight for the Future and Access Now are sending a letter to Spotify with support from Rage Against the Machine’s Tom Morello, rapper Talib Kweli, musician Laura Jane Grace of the band Against Me!, and Sadie Dupuis of the band Speedy Ortiz.


Apple hired Samy Bengio to lead a new A.I. research unit, Reuters reported. Bengio’s new A.I. unit will be under the purview of John Giannandrea, the company’s senior vice president of machine learning, the report said. Bengio was previously a research scientist at Google, but he left the search giant after two leading A.I. researchers, Timnit Gebru and Margaret Mitchell, were ousted by the company.

ABBYY picked Paul Nizov to be the enterprise technology company’s chief information security officer. Nizov was previously a managing director of cybersecurity at Ernst & Young for the Middle East and North Africa region.

chose Tara Janke to be the startup’s chief technology officer. Janke was previously the vice president of product development at Promethean.


A.I. hype meets reality. Although there’s been a lot of hype around the possibility that deep learning will supercharge radiology, leading to widespread use of A.I. to analyze medical images, the reality is far more subdued. Researchers at the American College of Radiology’s Data Science Institute published a paper in the Journal of the American College of Radiology on radiologists’ use of A.I. and found only a “modest penetrance of AI in clinical practice.”

According to the survey, about 34% of radiologist respondents said they currently use A.I. in their clinical practices, with large practices using the technology more than smaller ones. About 94% of the A.I. users said that “the performance of AI in their practice was inconsistent,” and only 5.7% said that the technology “always works.”

“When asked what makes AI inconsistent, respondents reported that patient, scanner, and conspicuity bias probably all play a role in making AI inconsistent,” the authors wrote.

From the paper:

When asked what the ACR Data Science Institute should do on behalf of radiologists to improve the potential of AI in medical imaging, more than 60% responded that the Data Science Institute should provide methods for evaluating and reporting the performance of AI algorithms on representative image data sets, with an equal number suggesting they would like a method to evaluate an AI algorithm on their own data before purchase.


A.I.-driven cybersecurity firm Darktrace rises nearly 40% in London debut—By Siva Sithraputhran

How Snowflake wants to seed the cloud with more startups—By Aaron Pressman

How to take data privacy back from the ‘tech gorillas’—By Tom Chavez

Facebook is still taking shots at Apple—By Danielle Abril

Companies enlist vaccine whisperers to convince skeptical workers to get their jabs—By Jonathan Vanian


A.I. just dominated humans in a crossword tournament. An A.I. system dubbed Dr. Fill bested human competitors by winning this year’s American Crossword Puzzle Tournament, as described in an article published in Slate. Matt Ginsberg, the creator of Dr. Fill, said he originally built the technology because he “sucked at crosswords, and it just pissed me off.”

The article describes how Ginsberg improved upon an older version of Dr. Fill by using neural networks, the same underlying software that DeepMind researchers used to create AlphaGo, the system that defeated top human players at the Chinese board game Go. What’s interesting about Dr. Fill is that it combines older, less fashionable A.I. techniques, often referred to as symbolic A.I., with newer machine learning. This is noteworthy because A.I. researchers are often divided about which approach will lead to more powerful systems.

From the article:

Ginsberg sees the new Dr. Fill as a marriage between two unlikely and often battling partners: good old-fashioned A.I. and modern machine learning. “Those two groups have historically not played well together,” he said. “They don’t like each other. Everybody has this huge bias that they’re going to use one approach and not the other, and it’s been bad.” But playing nice has its benefits. “As a scientist, I am incredibly excited to see these two communities finally working together to solve problems that were too hard for them individually.”

