Inside Facebook’s Biggest Artificial Intelligence Project Ever

Inside The F8 Facebook Developers Conference
Michael Short — Bloomberg via Getty Images

The next time you load Facebook, whether on its website or in one of its mobile applications, consider the computing muscle it takes to serve up the personal updates, news stories, and family photos you see on your screen.

Now multiply that by one billion users. Now do it every single day.

To run Facebook isn’t just to operate the social network of a company that landed at No. 242 on last year’s Fortune 500. It’s also to run the racks and racks of computing infrastructure necessary to serve it up—the processors and memory and software necessary to know what you want to see, where you want to see it, when you want to see it.

Facebook serves one-fifth of the world’s population and nearly half of the roughly 3.2 billion people estimated to be Internet users at the end of last year. So it’s not an unreasonable question to ask: Are there enough humans on the planet to power such an enormous network?

The answer is no, at least not affordably. Which is why Facebook turned to artificial intelligence.

Five years ago, when Facebook introduced the Open Compute hardware initiative, it did so because the cost of serving up your News Feed, even for a website that at the time had roughly 740 million users, was quite literally the cost of goods sold. Building its own computing infrastructure to serve up those posts faster and more cheaply had a decided effect on the bottom line.

Facebook says it has saved more than $2 billion from its investments in Open Compute. But five years is an eternity on the Internet, and now every big tech company is out to conquer a different problem. Serving up content cheaply can be done, but figuring out what kind of content to serve among billions of posts is still a challenge. So, just as Facebook set out to rebuild the hardware industry half a decade ago with the Open Compute project, it has more recently created an internal platform to harness artificial intelligence so it can deliver exactly the content you want to see. And it wants to build this “machine learning” platform to scale. (“Machine learning” is a form of artificial intelligence that allows computers to learn how to operate without being pre-programmed.)

“We’re trying to build more than 1.5 billion AI agents—one for every person who uses Facebook or any of its products,” says Joaquin Candela, the head of the newly created Applied Machine Learning group. “So how the hell do you do that?”

You look at previous successes for inspiration. Facebook’s infrastructure team served as the model for its newer machine learning group, Candela says.

“We tend to take things like storage, networking, and compute for granted,” he says. “When the video team builds live video, people don’t realize the magnitude of this thing. It’s insane. The infrastructure team is just there, shipping magic—making the impossible possible. We need to do the same with AI. We need to make AI be so completely part of our engineering fabric that you take it for granted.”

Facebook formed the Applied Machine Learning team last September. The group runs a company-wide internal platform for machine learning called FBLearner Flow. The platform is the artificial-intelligence equivalent of the Open Compute project with one key difference: It’s not something that will be offered to the world as an open source project. Without the data that Facebook has on tap, the company says, its platform is essentially useless.

FBLearner Flow combines several machine-learning models to process several billion data points, drawn from the activity of the site’s 1.5 billion users, and forms predictions about thousands of things: which user is in a photograph, which message is likely to be spam. The algorithms created from FBLearner Flow’s models help define what content appears in your News Feed and what advertisements you see.
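The kind of multi-signal prediction described above can be sketched in miniature. The code below is an invented illustration, not anything from FBLearner Flow's actual API: two crude spam "signals" stand in for trained models, and their outputs are averaged into a single prediction.

```python
# Hypothetical sketch: combining the outputs of several simple "models"
# into one prediction, loosely in the spirit of a multi-model pipeline.
# All names and signals here are invented for illustration.

def keyword_score(message):
    """Crude spam signal: fraction of words on a spammy-word list."""
    spammy = {"free", "winner", "click", "prize"}
    words = message.lower().split()
    return sum(w in spammy for w in words) / max(len(words), 1)

def length_score(message):
    """Another weak signal: very short, shouty messages look spammier."""
    return 1.0 if len(message) < 20 and message.isupper() else 0.0

def predict_spam(message, threshold=0.3):
    """Average the individual signals and compare to a threshold."""
    score = (keyword_score(message) + length_score(message)) / 2
    return score >= threshold

print(predict_spam("CLICK FREE PRIZE"))           # True
print(predict_spam("See you at dinner tonight"))  # False
```

A production system would replace these hand-written signals with trained models, but the shape is the same: many weak predictors feeding one decision.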

It would be easy to jump to the conclusion that Facebook’s use of artificial intelligence will help eliminate some of the company’s 13,000 employees. The reality couldn’t be more different, says chief technology officer Mike Schroepfer. AI is helping Facebook augment the capabilities of its human engineers. “We’re able to do things that we were not able to do before,” he says.

Joaquin Quiñonero Candela is director of Applied Machine Learning at Facebook.

Schroepfer says Facebook regularly sees opportunities that it doesn’t yet have the ability to conquer—at least with humans, anyway. Consider its recently launched feature that provides captions on photos for visually impaired people. It’s neither affordable nor scalable for Facebook to hire people to manually tag the contents of every photo uploaded to the network, nor is it reasonable to expect users to do it themselves. But the information is helpful both to Facebook and to visually impaired users. Using the computer vision models on the FBLearner Flow platform, a computer can automatically comb through billions of photos uploaded to Facebook and apply tags with a reasonable degree of accuracy.

“It’s just enabling new applications, and particularly to solve problems at scale,” Schroepfer says. Facebook has used this machine-based approach to translate News Feed posts, to police inappropriate content on the site before people see it, and to create M, Facebook’s attempt at building a personal assistant using a combination of man and machine.

Facebook isn’t the only large Internet company experimenting with artificial intelligence. Google parent company Alphabet, Amazon, Baidu, and Microsoft are investing heavily in related technologies. As we entrust more of our lives to the digital realm, it has become readily apparent that the humans who build the sites on which we depend (for daily information, social interactions, multimedia archiving, etc.) can’t keep up. Enter the machines.

As of last month, about 750 Facebook engineers and 40 different product teams were using the FBLearner Flow platform. By the end of June, the company hopes that 1,000 engineers will use it. Facebook ultimately aims to build machine learning tools that are easy enough to use for non-engineers, though that’s a far-off goal.

Machine learning has quickly become the hottest building block of artificial intelligence, that decades-old fixture of science-fiction movies that has experienced a recent resurgence as new computing technologies emerged. As computing systems grow larger and more complex, it has become apparent that hard-coding rules for how a computer should interpret data is unsustainable. It’s much easier to follow the biblical injunction to teach a man to fish—or in this case, to teach a computer how to interpret its own data.

Computer scientists use a variety of tools to teach computers to learn. Most of today’s efforts are focused on “supervised learning,” where researchers build a machine learning algorithm based on existing data sets that they use to train the computer. For example, to teach a computer how to recognize faces, you’d train it on databases of different faces so the computer learns how to tell people apart. The holy grail of machine learning is “unsupervised learning,” where the computer gets only unlabeled data and builds its own model to categorize it. In other words, instead of being given labeled images of faces to work from, the computer gets raw images and must cluster similar data together to work out on its own that the images in question show human faces.
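A toy sketch can make the contrast concrete. The code below uses invented one-dimensional data rather than real face images: the supervised learner is told the labels and builds a centroid per class, while the unsupervised routine (a tiny 1-D k-means) is handed the same points with no labels and must discover the groupings itself.

```python
# Supervised: labels are known, so learn one centroid per class and
# classify new points by the nearest centroid.
def train_supervised(points, labels):
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def classify(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Unsupervised: no labels at all; a tiny 1-D k-means clusters similar
# points together, inventing its own groupings.
def kmeans_1d(points, k=2, iters=10):
    centers = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(centers[i] - p))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
model = train_supervised(data, ["a", "a", "a", "b", "b", "b"])
print(classify(model, 1.1))      # 'a'
print(sorted(kmeans_1d(data)))   # two centers, near 1.0 and 9.07
```

Real face recognition uses deep neural networks over millions of images, but the supervised/unsupervised distinction is exactly this: whether the labels are given or must be inferred.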

Unsupervised learning is how humans learn, and Facebook has been very vocal about its efforts to teach computers “common sense.” Much of those efforts though are handled in the Facebook Artificial Intelligence Research group. FAIR was created at the end of 2013 as a place for basic research. It exists separately from the Applied Machine Learning team, though some of the research it conducts finds its way into FBLearner Flow.

Mike Schroepfer, Facebook’s CTO.

As Candela explains it, you can think of the AML team as the commercialization arm of FAIR. It’s how deep science gets filtered into a product that serves more than a billion users. Except when it doesn’t. Not all of FAIR’s research will make it into a product, Schroepfer says, though he cautions that both FAIR (which employs about 50 researchers) and AML (which employs about 100) have paid for themselves.

For example, Facebook now uses machine learning to translate two billion News Feed items per day and has ended its reliance on Microsoft’s Bing Translate service in favor of its own translation models. Facebook also has used the AML team’s platform to apply computer vision models to satellite images to create population density maps and ultimately determine where it needs to deliver broadband in the developing world. And its video-captioning efforts have proven to increase engagement, as measured in shares or likes, by 15% and boost viewing time by 40%.

On Wednesday during its F8 developer conference, Facebook plans to show off the platform in a demonstration that uses its computer vision skills to let users search for photos based on the content of those photos. For example, typing the word “pizza” into the photo search tool would bring up all of the images that the computer vision algorithm has tagged accordingly.
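Once a vision model has attached tags to photos, searching by content reduces to a classic inverted index from tag to photo IDs. The sketch below uses invented photo IDs and tags and is not Facebook's implementation:

```python
# Hypothetical sketch of content-based photo search: given photos
# already tagged by a vision model, build an inverted index so a
# query like "pizza" maps straight to matching photo IDs.
from collections import defaultdict

def build_index(photo_tags):
    """Map each tag to the set of photo IDs carrying that tag."""
    index = defaultdict(set)
    for photo_id, tags in photo_tags.items():
        for tag in tags:
            index[tag].add(photo_id)
    return index

photo_tags = {
    "img_001": {"pizza", "table"},
    "img_002": {"beach", "sunset"},
    "img_003": {"pizza", "friends"},
}
index = build_index(photo_tags)
print(sorted(index["pizza"]))  # ['img_001', 'img_003']
```

The expensive step is the tagging itself; once tags exist, lookup is a constant-time dictionary access.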

These are some of the recent victories for Facebook’s AML team, but the company has been building machine learning algorithms for a decade: It first attempted to use machine learning technology on the News Feed in 2006.

“News Feed was the first time we tried to do the hard work for you,” Schroepfer says. That attempt was “rudimentary,” according to Schroepfer, but even then, Facebook couldn’t hire enough editors to populate the News Feeds of the millions of users it served.

The company’s use of machine learning has since grown more advanced. But it was only with the launch of its photo-sharing service Moments last June that Facebook really began openly talking about how its deep research into machine learning was influencing new products. Moments used Facebook’s image recognition models to let users create private photo albums with a select group, such as the people in a photo.

At the product’s launch, Facebook said that its image recognition models could recognize human faces with 98% accuracy, even if they weren’t directly facing the camera. It also said it could identify a person in one picture out of 800 million in less than five seconds.

People freaked out. What Facebook intended as a way to easily share photos in a semi-private manner rubbed many users the wrong way. It forced them to face the unsettling fact that Facebook could identify them out of more than a billion users and do it at a freakishly fast pace. Facebook couldn’t even launch the feature in Europe because of regulations related to privacy and facial recognition technology.

The privacy concerns show the dark underbelly to Facebook’s altruism with regard to machine learning technology. The data-driven capabilities allow Facebook to make its product easier to use. But they also allow the company to keep people using its platform, which in turn allows it to sell more, and more effective, advertisements against them.

To do this, Facebook runs tens of trillions of queries per day to make about six million predictions per second. Facebook trains the algorithms that power its News Feed within hours, using trillions of data points. The company updates its learning models every 15 minutes to two hours so that it can react quickly to current events.
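As a sanity check on those figures, six million predictions per second sustained around the clock works out to roughly half a trillion predictions per day; the much larger query count is plausible because a single prediction can fan out into many backend queries. A quick back-of-envelope calculation:

```python
# Back-of-envelope arithmetic on the figures above: six million
# predictions per second, sustained for a full day.
predictions_per_second = 6_000_000
seconds_per_day = 24 * 60 * 60  # 86,400
per_day = predictions_per_second * seconds_per_day
print(f"{per_day:,} predictions per day")  # 518,400,000,000 (~0.5 trillion)
```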

When a computer can parse that much information and make judgments, it’s a disconcerting reminder that every single aspect of our digital lives is being atomized, sliced, and diced in ways that show advertisers, researchers, and even governments a picture of our private thoughts and actions. Just as troubling: The notion that machine learning algorithms may not get things right.

And none of this accounts for the fact that many people don’t even know that machine learning methods are altering their experience of a product. A person may not see a post in his or her News Feed simply because an algorithm filtered it out. In 2014, an MIT study discovered that 62.5% of participants in the study were not aware that Facebook filtered their News Feed.

“The best AI algorithms can generalize, and they can predict what you want, but they are never perfect,” Candela says. It’s one reason why Schroepfer believes that Facebook remains far from turning everything over to artificial intelligence technologies.

“I think you still have people in the decision loop,” Schroepfer says. “We are building things for other people, and it’s hard for me to believe that, even with our advanced technology, machines can figure out what other people want.”

Schroepfer says all of this work is meant to build a social network that can better anticipate what a user wants to see or experience. If you have a bad day, he wants Facebook to show you humorous cat videos. If you haven’t talked to your mother in a week, he wants Facebook to recognize that and actively serve you an update about her life.

“The problem with Facebook right now is, you’re not telling us enough about what you want,” Schroepfer says. “We’re trying to guess at it. Part of the problem is we don’t know what to ask, and we’re not sure what to do with it when you tell us. Because our systems aren’t yet really set up to optimize for that.”

The Applied Machine Learning team is a chance to establish those systems. FAIR, meanwhile, is an opportunity to build a better understanding of how to make computers learn.


Facebook’s decision to break out its artificial intelligence research in this way is somewhat unusual among its peers.

For example, Microsoft, which is home to a large artificial intelligence research group that is part of its Microsoft Research unit, does not turn over its efforts to a commercialization team that then turns them into a product for internal consumption. Instead, a researcher might work directly with a person on the product team to build a tool or new service using machine learning.

Externally, Microsoft is trying to build a platform of services for machine learning and offer them to customers through Azure, its cloud computing platform, says Peter Lee, the head of Microsoft Research.

Still, Lee is in agreement with Facebook’s Schroepfer that machine learning and AI are enabling companies to build new products that were just too time-consuming or resource-heavy in years past.

Candela, who came to Facebook from Microsoft, says he intentionally tried to create a different structure within Facebook because he felt that good ideas couldn’t move quickly across the organization when he was at Microsoft. Each innovation or new artificial intelligence algorithm was locked into one team. Facebook is trying to resist that, he says.

But Andrew Moore, dean of computer science at Carnegie Mellon University, is skeptical that an artificial intelligence platform like FBLearner Flow can be used broadly across an organization. Most machine learning models can’t be generalized, he says.

“For machine learning, there’s one trap, and I don’t think I know of a large company that hasn’t fallen into this trap,” he says. “It seems like a very useful thing to build a platform to support [machine learning algorithms] but you discover that each application that uses machine learning needs a different application to use it. So there is sometimes a disconnect between the creators of a machine learning platform and those trying to build a product.”

For now, Facebook is happy with its efforts to date, and they seem to be paying off in its new products. There are still plenty of things the company must get right as it hands more decisions to algorithms, but the overarching project has changed the way the company measures its success.

For example, Facebook’s launch of a “real names policy,” which required people to use their actual names on the site, upset transgender users (who may not identify with their given name), users of Native American descent (whose names don’t neatly fit into Western formats), and victims of abuse (who sought additional privacy). But Facebook’s algorithms couldn’t easily parse these names.

Today, Facebook segments its data differently to ensure that smaller populations don’t get lost in the averaging process, Schroepfer says. It also conducts qualitative reviews of new products with focus groups and direct user feedback. All of this has helped prevent “rocky” product launches, he says. “Now it’s pretty rare for us to launch something where we didn’t understand how [the change] was better for people.”

It’s an early step on what amounts to a very long road. Artificial intelligence technologies are inarguably making computer processing more efficient and allowing us to build systems at a scale never before seen. They are helping Facebook expand the reach and capabilities of its social network without eroding the profits it generates. With a little luck, they’ll help us better learn how to live with machines, too.
