Facebook wants better A.I. tools. But superintelligent systems? Not so much.

January 20, 2020, 9:30 AM UTC
Yann LeCun, Facebook's chief A.I. scientist, speaks during a Bloomberg Television interview at Bloomberg's Sooner Than You Think technology conference in Paris on May 23, 2018.
Marlene Awaad—Bloomberg/Getty Images

This article is part of a Fortune Special Report on Artificial Intelligence.

In early January, Facebook released software that turns speech into text more accurately than previous systems and does it in real time, opening up the possibility of better captioning of live video.

The system, which uses a type of A.I. architecture that had not previously been tried for automatic speech recognition, is a good example of the sort of advances Facebook’s A.I. research lab regularly churns out: ones that push forward the state of the art and also have clear implications for Facebook’s business.

Live captioning could be a useful feature for Facebook and Instagram posts. More important, it could help Facebook police that content for hate speech, bullying, and disinformation, something the social network is under increasingly intense pressure to prove it can do well.

It seems like a no-brainer that this kind of research would be beneficial to Facebook. So it’s surprising to hear Mike Schroepfer, Facebook’s chief technology officer, tell me the social network was initially reluctant to create an A.I. research lab.

For a long time, the company eschewed the idea of research not tied directly to a product, Schroepfer says. “It was a big change for the company,” he tells me of the decision in 2013 to create Facebook AI Research (FAIR).

Yann LeCun, a pioneer in the kind of artificial intelligence known as deep learning, whom Facebook CEO Mark Zuckerberg and Schroepfer recruited to establish FAIR, set the lab up with the explicit goal of creating human-like intelligence.

But Jerome Pesenti, who currently heads both Facebook’s research and applied A.I. efforts, hates the term “artificial general intelligence,” or AGI. That’s the industry term for that kind of human-like, or even superhuman, intelligence. AGI is the explicit goal of many other advanced A.I. research organizations, such as OpenAI, which last year partnered with Microsoft, and DeepMind, which is owned by Google parent company Alphabet.

“I don’t believe in AGI,” Pesenti says. “I think it is a bad term.”

He says it is wrong to think of human intelligence as a single, general-purpose system, and he dislikes the way AGI has been caught up in debates about concepts like the Singularity, a kind of New Age notion about the significance of the moment when machine intelligence surpasses that of humans. Instead, Pesenti says, he prefers to talk about learning goals, such as software that can transfer skills from one task to another or can learn from less data.

Even though he pushed for FAIR’s creation, Schroepfer says, he still evaluates the research lab on the extent to which it impacts Facebook’s products—it is just that he is more patient than he would be with a product team. FAIR can operate on a longer timescale. “It is clear that they have delivered a whole bunch of things that are in production,” he says of FAIR. “So it is fairly easy to justify the impact they’ve had on the company to date.”

Pesenti points to four technologies in particular that FAIR has developed that have made a big difference to Facebook commercially: PyTorch, a popular deep learning framework that Facebook created and then open-sourced, and which it now uses to build most of its own machine learning applications; a computer vision system that allows for easy detection and classification of objects in images; automatic language translation; and RoBERTa, a natural-language model that lets Facebook perform automatic content moderation for hate speech and bullying. RoBERTa has opened up the possibility of using automatic moderation even for languages, such as Burmese, for which large amounts of digital content are not available to train a language-specific system.
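To give a flavor of what working in that framework looks like, here is a minimal, hypothetical PyTorch sketch of a text classifier of the general kind that models like RoBERTa power at vastly larger scale. The model, labels, and token ids below are illustrative assumptions, not Facebook's actual moderation code.

```python
# A minimal sketch, assuming a toy vocabulary and two hypothetical labels
# ("ok" vs. "flag for review"). Not Facebook's production system.
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, num_labels=2):
        super().__init__()
        # EmbeddingBag averages the token embeddings of each post into one vector.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        # A linear layer maps that vector to one score per label.
        self.classifier = nn.Linear(embed_dim, num_labels)

    def forward(self, token_ids, offsets):
        pooled = self.embed(token_ids, offsets)
        return self.classifier(pooled)

# Hypothetical usage: two short posts, already tokenized into integer ids.
model = TinyTextClassifier()
token_ids = torch.tensor([1, 5, 42, 7, 99, 3])  # tokens of both posts, concatenated
offsets = torch.tensor([0, 3])                  # where each post starts in token_ids
logits = model(token_ids, offsets)
print(logits.shape)  # torch.Size([2, 2]): one pair of label scores per post
```

A production moderation system would instead fine-tune a large pretrained transformer on labeled posts, but the basic shape is the same: text goes in, and a score per policy label comes out.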

It helps that as FAIR was getting established, Facebook began encountering a series of increasingly existential crises around content moderation—from hate speech to cyberbullying to political disinformation. Because Facebook’s social network is so large—with more than two billion users—the only economically feasible way to tackle the problem is through machine learning, as Zuckerberg told Congress in 2018.

The problem was that reliable A.I. techniques for automatically screening content didn’t exist when Facebook first began trying in earnest to address these issues in the wake of the 2016 U.S. presidential election. It has been up to FAIR to help figure out those methods. “It has created urgency and it has created a much clearer path to impact for certain kinds of technologies,” says Schroepfer.

More from Fortune’s special report on A.I.:

—Inside big tech’s quest for human-level A.I.
—A.I. breakthroughs in natural-language processing are big for business
—A.I. in China: TikTok is just the beginning
—A.I. is transforming HR departments. Is that a good thing?
—Medicine by machine: Is A.I. the cure for the world’s ailing drug industry?
Subscribe to Eye on A.I., Fortune’s newsletter covering artificial intelligence and business.
