Meet the A.I. that helped Facebook remove billions of fake accounts

Facebook has lifted the curtain on a key technology that has enabled it to address one of its toughest challenges: eliminating fake accounts used for everything from spam ad campaigns to the spread of false information.

The Internet media giant revealed details on Wednesday of how it designed an artificial intelligence system and trained it to be accurate enough to automatically detect accounts that violate its policies.

Policing its vast social network has become an increasingly existential problem for the company as it faces the growing threat of regulation worldwide. The public and lawmakers have been dismayed by the role the social network has played in everything from Russian interference in the 2016 U.S. presidential election to Myanmar’s genocide against the Rohingya Muslim population. Government officials and users have also become alarmed about hate speech, bullying, phishing, and financial fraud perpetrated on the platform.

Five years ago, Facebook relied largely on users to flag offending accounts to human reviewers. But the volume of problematic accounts Facebook has to deal with is massive: in the third quarter of 2019, the last period for which the company has released numbers, Facebook blocked some 1.7 billion offending accounts. And that doesn’t even include accounts the company prevents from ever being created in the first place, said Bochra Gharbaoui, a data science manager on Facebook’s Community Integrity team. Facebook estimates that, at any given time, about 5% of its active accounts are fraudulent.

Relying on human reviewers has created other problems too. Facebook has used contract workers to review suspect content and behavior, but these workers are often low-paid and suffer mental health issues due to their constant exposure to disturbing posts, images, and videos.

Mark Zuckerberg, Facebook’s founder and chief executive, told U.S. lawmakers in 2018 that A.I. would help the company deal with the flood of problematic content. But it is only recently that the company’s researchers and engineers have started to make progress on fulfilling Zuckerberg’s pledge.

Thanks to A.I.-enabled tools, 99.7% of the fake accounts Facebook blocked in the third quarter of 2019 were caught before other users flagged them to a human review team, the company said.

Facebook has a difficult needle to thread when it blocks accounts: it wants to catch and stop every policy violation, including every fake account, without inadvertently blocking legitimate users. But if its criteria for detecting violations and taking action are too loose, other users will be victimized, and the company could find itself at the center of another public relations debacle.

Both false positives and false negatives need to be minimized, Gharbaoui said. “This is a very hard tradeoff,” she said.

The problem is also difficult because scam artists, fraudsters and, yes, some governments, are always trying to figure out ways around Facebook’s defenses, explained Brad Shuttleworth, a Facebook product manager for community integrity.

The machine learning technique Facebook created, which it calls “deep entity classification,” or DEC for short, could be adapted by other companies that need to moderate conversations and content, such as rival social networks, messaging apps or video game companies, said Daniel Bernhardt, engineering manager in Facebook’s Community Integrity group in London, who worked on the system. The company is publishing the general architecture of DEC and details about how it was trained, but it is not making the trained model itself available to other companies.

DEC relies on several clever bits of thinking and engineering. The first was Facebook’s recognition that trying to train an algorithm by having it review standard account features—such as the IP address used to create the account, the age of the account, the number of likes a page has, or how many other users the account was connected to—would result in a screening model that was either too easy for someone with malicious intentions to game, or that would produce too many false positives.

Facebook’s solution was to look at each account, not in isolation, but in the context of all the other accounts and pages it was linked to, extended out to two degrees of separation. And then, instead of using direct features of that individual account, such as likes or friends, it fed the system aggregate metrics, such as the median number of Facebook friends across all those first- and second-order connections. (These metrics, by themselves, don’t indicate whether an account is legitimate. They are simply a way to vastly increase the number of metrics the model is analyzing so it can build a much more detailed statistical picture of the account.) This data, which Facebook calls “deep features,” is inherently more difficult for a malicious actor to tweak and results in far fewer false positives and false negatives.
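To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such aggregate “deep features” might be computed from an account’s two-degree neighborhood. The toy graph, the field names (friend_count, account_age_days, links), and the particular statistics are illustrative assumptions, not Facebook’s actual schema or feature set.

```python
# A toy sketch of the "deep features" idea: instead of scoring an account on
# its own raw attributes, aggregate the attributes of everything it touches,
# out to two degrees of separation. All names and values are illustrative.
from statistics import mean, median

# Toy graph: each account lists a few raw attributes and its direct connections.
accounts = {
    "a1": {"friend_count": 12,  "account_age_days": 30,   "links": ["a2", "a3"]},
    "a2": {"friend_count": 340, "account_age_days": 2100, "links": ["a1", "a3"]},
    "a3": {"friend_count": 8,   "account_age_days": 5,    "links": ["a1", "a2", "a4"]},
    "a4": {"friend_count": 15,  "account_age_days": 12,   "links": ["a3"]},
}

def two_hop_neighbors(account_id):
    """Collect every account within two degrees of separation, excluding the account itself."""
    first = set(accounts[account_id]["links"])
    second = {n for f in first for n in accounts[f]["links"]}
    return (first | second) - {account_id}

def deep_features(account_id):
    """Return aggregate statistics over the neighborhood rather than raw per-account values."""
    hood = [accounts[n] for n in two_hop_neighbors(account_id)]
    return {
        "median_friend_count": median(a["friend_count"] for a in hood),
        "mean_account_age":    mean(a["account_age_days"] for a in hood),
        "neighborhood_size":   len(hood),
    }

print(deep_features("a1"))
```

Because these numbers are properties of an account’s whole neighborhood rather than of the account itself, a bad actor cannot easily adjust them without controlling a large slice of the surrounding graph.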

Despite its vast size and the thousands of human reviewers it employs to screen its content, Facebook said it is prohibitively time-consuming and expensive to create a high-quality, human-labelled dataset large enough to train a machine-learning algorithm to detect each type of abuse (such as fake accounts, spammers, financial scams or compromised accounts) with the kind of 99%-plus accuracy that Facebook needs.

So Facebook’s second clever bit of engineering was to figure out how to take a small, high-quality human-labelled dataset, which would normally be too small to train a highly-accurate deep learning algorithm, and enhance it by also using a much larger, computer-labelled, but less accurate, dataset. It does this by dividing the system into two separate modules.

In the first module, Facebook takes the set of deep features for each account and runs them through a multi-layer neural network, a kind of machine learning software loosely based on the human brain. In this case, the algorithm must learn what pattern of deep features correlates with what kind of account: is it a normal account or spam account or phishing account, etc.? And it learns to do this by referring to a large set of training samples, consisting of 5 million examples of fake accounts, that have themselves been rather crudely labelled by separate pieces of existing software.

Facebook then takes that statistical pattern for each account type and feeds it into the second module, where a different kind of machine-learning algorithm, called a gradient-boosted decision tree, scores each account for the same categories—spam, fake account, phishing, bullying, etc.—but based on a much smaller set of high-quality, human-labelled training data. (In the case of fake accounts, about 100,000 human-labelled examples.) The results of this scoring then determine whether and what action Facebook will take against the account.
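What follows is a minimal end-to-end sketch, in Python with scikit-learn, of the two-module training scheme described above. It rests on assumptions: synthetic data stands in for the deep features, the model sizes are arbitrary, and the “statistical pattern” passed from the neural network to the decision tree is read here as the network’s per-class scores, since Facebook has not published the exact hand-off.

```python
# A minimal sketch of the two-module scheme, under the assumptions stated above.
# scikit-learn stands in for Facebook's production stack; data is synthetic noise.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
N_FEATURES = 20          # stand-in for an account's deep-feature vector

# --- Module 1: neural network trained on a large, crudely machine-labelled set ---
N_WEAK = 50_000          # stand-in for the ~5 million machine-labelled examples
X_weak = rng.normal(size=(N_WEAK, N_FEATURES))
# Noisy labels, as if produced by existing rule-based detectors
# (illustrative classes: 0 = normal, 1 = fake, 2 = spam, 3 = phishing).
y_weak = (X_weak[:, 0] > 0).astype(int) + 2 * (X_weak[:, 1] > 1).astype(int)

first_module = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=100)
first_module.fit(X_weak, y_weak)

# --- Module 2: gradient-boosted decision tree trained on a small human-labelled set ---
N_HUMAN = 2_000          # stand-in for the ~100,000 human-labelled examples
X_human = rng.normal(size=(N_HUMAN, N_FEATURES))
y_human = (X_human[:, 0] > 0).astype(int)       # 1 = fake account, 0 = legitimate

# The "statistical pattern" handed from module 1 to module 2 is read here as
# the neural network's per-class scores; that reading is an assumption.
second_module = GradientBoostingClassifier()
second_module.fit(first_module.predict_proba(X_human), y_human)

# Scoring a new account: deep features -> neural-network scores -> final verdict.
new_account = rng.normal(size=(1, N_FEATURES))
verdict = second_module.predict(first_module.predict_proba(new_account))
print("flag as fake" if verdict[0] == 1 else "leave alone")
```

The appeal of the split is that the data-hungry neural network can learn from plentiful but noisy labels, while the final decision rests on a model trained only on carefully vetted examples.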

This results in a system that is more than 97% accurate in classifying accounts, far better than other methods could achieve.

The system is not designed to spot political disinformation campaigns, Shuttleworth said. Instead, Facebook has a separate “information operations” team working to combat that problem—including, in some cases, the use of differently-constructed machine learning algorithms.

Facebook is not the only company working with artificial intelligence that has found benefits from splitting a problem into two separate modules that feed one another. DeepMind, the A.I. research company owned by Google-parent Alphabet, used a similar two-step approach when it developed a system to spot more than 50 sight-threatening eye conditions from eye scans. One module, which does computer vision, identifies features in the scans, while the second module makes a diagnosis based on these features. The system has the added advantage of being far more interpretable than a single black-box model.
