Weekly analysis at the intersection of artificial intelligence and industry.

January 11, 2022

A few years ago, Andrew Moore, the former dean of Carnegie Mellon University’s computer science department who now is the head of A.I. and industry solutions for Google’s Cloud business, noticed that many of the company’s manufacturing customers wanted to use Google’s computer vision algorithms to inspect products rolling off their assembly lines for defects.

But they had a problem. In most well-run factories, defects occurred fairly infrequently and not all the anomalies were the same. This meant that most of Google’s customers didn’t have enough data to train a good defect detector. What if, Moore mused, Google, with its vast data and computing resources, could build a universal defect detector, one that could find problems with any item, with little or no specific training?

At first, Moore and his team weren’t quite sure this would be possible. But they set out to try anyway. And, after feeding an algorithm millions of images of different kinds of industrial products, some with defects and some without, they succeeded in creating a system that can reliably detect defects in almost any manufactured product. The system works reasonably well even on a type of product it hasn’t seen before, without any additional training. And if the customer gives it just 10 examples of the kinds of defects it wants the system to find, it works even better. (Google is not the only A.I. provider to have built such a system. Its rival Microsoft offers one through its Azure cloud service, as do at least half a dozen other A.I. software firms.)

To Moore’s mind, this is an example of the kind of system that will allow A.I. to make a difference in businesses and industries that are not as data-rich and digitized as the ones where the technology has had its greatest impact to date. The universal defect detector is a kind of “foundational” algorithm, to use a phrase that has become popular among some A.I. researchers: a core building block that can underpin many different, more specific applications.

In this way, it is somewhat similar to the very large language models, such as OpenAI’s GPT-3 or Google’s BERT, that can be easily adapted to many different natural language tasks with just a little bit of fine-tuning. Moore is among those who think that the A.I. used by businesses will increasingly be composed of these basic building-block algorithms. Most companies would never have the data and computing resources to train one of these foundational models from scratch themselves. But they will be able to build increasingly sophisticated A.I. systems by assembling these pre-built blocks.
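
To make that concrete, here is a minimal, hedged sketch of what adapting such a building block can look like: a generic pretrained vision backbone (torchvision’s ResNet-18, standing in for the foundational model; an illustrative assumption, not Google’s actual system) is frozen, and only a small classification head is fine-tuned on a handful of customer-supplied defect images in a hypothetical "customer_defect_examples/" folder.

# Minimal sketch, in Python/PyTorch, of adapting a pretrained "building block"
# to a new task: freeze a generic vision backbone and fine-tune only a small
# head on a handful of labeled defect examples. Illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# A pretrained backbone stands in for the "foundational" model.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                               # keep the foundation frozen
backbone.fc = nn.Linear(backbone.fc.in_features, 2)       # new head: defect / no defect

# Hypothetical folder with roughly 10 labeled examples per class from the customer.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("customer_defect_examples/", transform=preprocess)
loader = DataLoader(dataset, batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

backbone.train()
for epoch in range(5):                                    # a few passes suffice for a tiny head
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(backbone(images), labels)
        loss.backward()
        optimizer.step()

The point of the sketch is the division of labor Moore describes: the heavy lifting lives in the pretrained weights, and the customer’s ten or so labeled examples only steer the final layer.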

Moore is quick to acknowledge, however, that using these pre-built building blocks comes with some big potential ethical pitfalls. In fact, he says that most of Google’s customers are not as focused as they should be on the risks. The fundamental problem is that a company using a foundational algorithm it didn’t build itself will likely have limited, or possibly no, insight into the data that was used to train that software. So it is quite possible that a foundational algorithm will be riddled with hidden biases—perhaps mislabeling pictures of Black people as “gorillas,” as Google’s photo-labeling algorithm infamously once did.

For now, Moore says, the best safeguard is very careful human review. It is up to people to ask tough questions about the ethics of how the system is going to be used and also to think hard about both the abuse of such a system and about what the unintended consequences might be. This needs to be combined with careful testing to find the system’s biases and potential failure points.

At Google’s Cloud service, Moore says, there are two different A.I. ethics review committees, composed of engineers, managers, and social scientists. They go by beverage-inspired monikers: “Iced Tea” and “Lemonaid.” (No one at Google Cloud could tell me exactly why it decided to use that spelling instead of the more conventional one, although Moore said he thought both terms were acronyms, with the obscure full names of the committees having now been forgotten.) Iced Tea examines the foundational building blocks that Google Cloud is considering developing for use by a wide variety of Cloud customers. For instance, Google Cloud was considering building an algorithm to detect whether factory workers are wearing proper safety equipment. That sounds like a good, ethical use case. But could the same software, Moore asks, violate worker privacy or be used by management to try to block unionization efforts? Could it be adapted by some repressive regime to monitor public spaces and track dissidents? Because of these potential negative dual uses, Google has been wary of developing such software.

Lemonaid, Moore says, is similar but screens the bespoke A.I. systems that Google Cloud’s customers want the tech giant to build for them. For instance, one customer was interested in building a system to monitor the audio of callers dialing into customer support centers and determine how stressed each caller was based on their breathing rate and tone of voice. Moore said Google steered clear of that project because there was too much danger of potentially discriminatory racial or ethnic bias in such software.

In either case, it is human intelligence—and in fact the intelligence of an entire committee of humans—that is necessary to keep A.I. on the straight and narrow. “I’m really angry if I see anyone saying, ‘oh, yeah, there’s an automatic system which can check for this sort of problem,’” Moore says. “We’re nowhere near that.”

With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com



A.I. IN THE NEWS


A burrito with a side of A.I. Fast-food chain Chipotle is using machine learning to monitor how much guacamole it is using and predict how much it will need in the future, according to an interview its CEO, Brian Niccol, gave to The Washington Post. Niccol also said the company was using A.I. to help automate the scheduling of interviews with job applicants.


SoftBank invests in robo-advisor Qraft. Masayoshi Son's tech giant is investing $146 million in Korea-based Qraft, a company that uses A.I. to pick stocks, The Wall Street Journal reported. Qraft already manages $1.7 billion for various Asian banks and insurance companies through a series of A.I.-powered exchange-traded funds, mostly listed in the U.S. The company plans to expand its offerings to more asset managers in the U.S. and China.


The U.S. military is increasingly concerned about the threat that small consumer drones pose. That's according to stories in both The Financial Times and The Wall Street Journal. The military is experimenting with a variety of ways to try to disable hobbyist drones—the kind anyone can buy at Walmart and that can be modified to carry explosives or scout for later attacks—but top officials say there is likely to be no silver bullet solution to the problem.


Startup gets its biological "brains on chips" to play Pong. Cortical Labs, an Australian startup that I wrote about in 2020, has succeeded in teaching its cyborg-like system that combines human biological neurons with software to play the game Pong. Nikkei Asia said Cortical's mini-brains could pave the way for other biological-silicon syntheses that might create far more energy-efficient A.I.


EYE ON A.I. TALENT


Intel has hired Jeff Wilcox as an Intel Fellow and design engineering group chief technology officer for client SoC (system-on-a-chip) architecture, according to tech publication The Register. Wilcox had been an engineer at Intel before going to Apple, where he headed the company's efforts to build its own custom computer chips, including the M1 chip series and the T2 chip.


Databricks, a San Francisco-based company whose technology helps companies organize large pools of data for A.I., has appointed Naveen Zutshi as its chief information officer, the company said. Zutshi was most recently CIO at cybersecurity firm Palo Alto Networks.


Torch.AI, a company in Leawood, Kan., that makes A.I.-enabled data processing software, has hired Adam Lurie as its chief strategy officer, according to trade publication Datamation. Lurie was previously president of the federal solutions division at Exiger, a New York-based company that sells compliance software.


EYE ON A.I. RESEARCH


DeepMind's AlphaFold almost nailed the shape of Omicron's tricky spike protein. Last year, I wrote a lot about how DeepMind's algorithm, AlphaFold, which can take the amino acid sequence of a protein and predict that protein's shape with remarkable accuracy, is beginning to reshape biological research. Now Wired's Tom Simonite details how Colby Ford, a computational genomics researcher at the University of North Carolina, used AlphaFold and another free protein-structure prediction algorithm from scientists at the University of Washington, RoseTTAFold, to predict the structure of Omicron's mutated spike protein. Using the A.I. software, Ford was able to race ahead of scientists who were using traditional experimental methods, such as powerful electron microscopes, to determine the protein's actual structure. From the predictions, Ford was able to say, several weeks before experimental biologists largely confirmed them, that existing antibodies, created either through natural infection or through vaccination, would still likely work against Omicron.


As Simonite writes, A.I.-based predictions are unlikely to completely replace the need for experimentally verified results. But the A.I. algorithms have already had a transformative effect on how scientists research protein structures, helping to guide the experiments they perform. What's more, as Ford's Omicron predictions show, the A.I. software could be invaluable for informing policy and healthcare decisions in the weeks—or sometimes months or years—before experimental data is available.
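
For readers who want to poke at predicted protein structures themselves, here is a minimal sketch that pulls a precomputed prediction from the public AlphaFold Protein Structure Database REST API. The endpoint path and the presence of the SARS-CoV-2 spike entry (UniProt accession P0DTC2) are assumptions based on the database's published documentation, and note that this retrieves a stored prediction; Ford's work involved running the prediction tools on the mutated Omicron sequence itself.

# Hedged sketch: fetch metadata for a precomputed AlphaFold prediction from the
# public AlphaFold Protein Structure Database. The endpoint and the availability
# of accession P0DTC2 (SARS-CoV-2 spike protein) are assumptions.
import json
import urllib.request

ACCESSION = "P0DTC2"
URL = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"

with urllib.request.urlopen(URL) as resp:
    entries = json.load(resp)

# Print whatever metadata the API returns for the first entry, which should
# include download links for the predicted structure files.
for key, value in entries[0].items():
    print(f"{key}: {value}")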


FORTUNE ON A.I.


Tesla fans start to doubt Elon Musk after latest price hike for Full Self-Driving tech—by Christiaan Hetzner


Sanofi agrees to partnership with A.I.-based drug discovery company Exscientia worth up to $5.2 billion—by Jeremy Kahn


France cracks down on dark patterns, fining Google and Facebook nearly $240 million for manipulative cookie tricks—by David Meyer


Commentary: A.I. could make your company more productive—but not if it makes your people less happy—by Francois Candelon, Su Min Ha, and Colleen McDonald




BRAIN FOOD


A surprising victory for "symbolic A.I." and what it may say about the future development of the field. At the big academic A.I. research conference NeurIPS in December, scientists announced the results of a competition, sponsored in part by Facebook parent Meta, to find the A.I. software that could do best at a classic video game called NetHack. Originally released in 1987, NetHack is a dungeon exploration and treasure-hunting game that is notoriously difficult for both humans and computers to master. The big news is that the A.I. system that won the competition didn't use a many-layered neural network. Neural networks, which are loosely based on the human brain, have been responsible for almost all of the stunning advances in A.I. over the past decade. The winning software also did not use reinforcement learning, which is when an A.I. system learns from experience how to maximize a reward. This is the kind of learning technique that has allowed A.I. software to conquer game after game, from Go to poker to Dota 2. Instead, it used an older approach, called "symbolic A.I.," in which an agent is armed with hard-coded strategies derived from human knowledge about how to play. (This news broke too late to make my NeurIPS 2021 roundup.)
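
To give a flavor of what "symbolic A.I." means here, the sketch below shows a toy rule-based agent for a generic dungeon crawler: its behavior comes entirely from hand-coded strategies encoding human knowledge, with no neural network and no reward-driven training. It is purely illustrative, not the code of the winning NetHack entry.

# Toy "symbolic" agent: an ordered list of hard-coded strategies, applied to a
# simplified game state. No learning happens anywhere in this code.
def choose_action(state):
    """Walk the hand-written rules in priority order and return an action."""
    if state["hp"] < 0.3 * state["max_hp"]:
        return "quaff_healing_potion" if state["has_potion"] else "retreat_upstairs"
    if state["adjacent_monster"]:
        return "attack_weakest_adjacent_monster"
    if state["item_visible"]:
        return "move_toward_item"
    return "explore_nearest_unvisited_square"

# Example: low health triggers the healing rule before anything else.
example_state = {
    "hp": 4, "max_hp": 20, "has_potion": True,
    "adjacent_monster": True, "item_visible": False,
}
print(choose_action(example_state))   # -> quaff_healing_potion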


The results created quite a stir among the A.I. research community. Ed Grefenstette, an A.I. researcher at Meta, tweeted: "This surprising result should serve as a moment of reckoning for [reinforcement learning] research. Reward may be enough in theory (if only) but an astounding amount of domain knowledge can, and probably must, be exploited in order to tractably solve complex problems."


This drew a quick reply from Grefenstette's Meta A.I. research colleague Yann LeCun, one of the pioneers of deep learning, who tweeted, "Clearly, reward is not enough, and domain knowledge is necessary. The important question is whether domain knowledge can be learned. My answer is a clear yes. And my vote is on some form of non-task-specific self-supervised learning to learn world models."

