Facebook makes strides using A.I. to automatically find hate speech and COVID-19 misinformation

Facebook says it’s getting better at automatically detecting and removing hate speech and misinformation about the coronavirus pandemic.

The company has pioneered a number of artificial intelligence techniques to help it police content across its social networks, Facebook said Tuesday in a series of blog posts.

The details about the technology Facebook is using came on the same day the company released its latest quarterly update on its efforts to combat hate speech, child pornography, fake accounts, political misinformation, terrorist propaganda, and other violations of its community standards. The report showed the company has been combating a big surge in hate speech and COVID-19-related misinformation since the start of the year.

Among the new A.I. tools Facebook highlighted on Tuesday are systems that better understand the meaning of language and the context in which it is used, as well as nascent systems that combine image and language processing to detect harmful memes.

In addition to helping combat misinformation related to COVID-19, Facebook has turned to new A.I. algorithms to police its new policy banning ads that seek to exploit the pandemic for profit by selling face masks, hand sanitizer, and similar items.

The company put warning labels on 50 million posts in April for possible misinformation around COVID-19, it said in a blog post. It also said that since the beginning of March it has removed 2.5 million pieces of content that violated rules about selling personal protective equipment or coronavirus test kits.

Facebook said that thanks to the new techniques, 88.8% of the hate speech the social network took down in the past quarter was detected automatically before someone saw and flagged the offensive material for review by the company’s human reviewers. This is up from about 80% in the previous quarter.

But the company said that the total amount of hate speech it’s finding continues to rise—9.6 million pieces of content were removed in the first three months of 2020, 3.9 million more than in the previous three months.

Mike Schroepfer, Facebook’s chief technology officer, said the increase was due to the company getting better at finding hateful content, not a surge in hate speech itself. “I think this is clearly attributable to technological advances,” he said on a call with reporters ahead of the release of the report.

In particular, Facebook has built on advances in very large language models developed only in the past three years. These models work by building a statistical picture of how each word in posted content relates to the words that come before and after it. Facebook has developed a system called XLM-R, trained on two terabytes of data, or about the equivalent of all the words in half a million 300-page books. It learns the statistical map of all of those words across multiple languages at once. The idea is that conceptual commonalities in hate speech across languages mean the statistical maps of hateful posts will look similar in every language, even if the words themselves are completely different.
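Facebook's production pipeline isn't public, but the pretrained XLM-R model itself has been released. Below is a minimal sketch of the cross-lingual idea using the open-source Hugging Face checkpoint; the mean-pooling step and example sentences are illustrative assumptions, not Facebook's classifier.

```python
# A minimal sketch: embedding text in different languages with the open
# XLM-R checkpoint ("xlm-roberta-base" on Hugging Face). The idea is that
# sentences with similar meaning map to nearby vectors regardless of
# language. Illustrative only, not Facebook's production system.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def embed(sentences):
    # Tokenize a batch of sentences (any mix of languages) and mean-pool
    # the final hidden states into one fixed-size vector per sentence.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)            # mean-pooled embeddings

# Sentences in different languages that express the same idea should land
# closer together than unrelated sentences.
vecs = embed(["I love this community.", "Me encanta esta comunidad.", "The weather is bad."])
sim = torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0)
print(f"cross-lingual similarity: {sim.item():.2f}")
```

Because sentences that mean roughly the same thing in different languages should produce nearby vectors, a classifier trained on labels in one language can, in principle, transfer to others.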

Facebook is at pains to show it is making good on CEO Mark Zuckerberg's repeated promises that machine learning and A.I. will enable the company to combat the spread of hate speech, terrorist propaganda, and political misinformation across its platforms, problems that have put Facebook in the crosshairs of regulators globally and turned many one-time fans against the company in the past four years.

“We are not naive,” Schroepfer said. “A.I. is not the solution to every single problem and we believe that humans will be in the loop for the foreseeable future.”

Much of the tech Facebook highlighted is designed to make the job of its human content moderators and associated fact-checking organizations easier and less repetitive.

That is especially important at a time when social distancing measures instituted by the company as well as by various countries have meant that the centers where many of its human content moderators work have had to close, and the reviewers, many of whom are contractors, have been sent home. In some cases, Schroepfer said, the company has found ways for these people to continue their work from home, although that has not been possible in all cases.

“We want people making the final decisions, especially when the situation is nuanced,” Schroepfer said. “But we want to give people we work with every day power tools.” For instance, he said, if a human reviewer decided that a whole class of images constituted misinformation, Facebook should be able to automatically apply that label to similar content across both Facebook and Facebook-owned Instagram without the human reviewers having to find and manually remove all of it.

One way people try to evade Facebook's content blacklists is by making small modifications to blocked content—altering some pixels in an image or using a photo filter, for instance—and then trying to upload it again in the hope that it sneaks past Facebook's algorithms. To battle these tactics, the company has developed a new A.I. system, called SimSearchNet, trained to find pieces of nearly identical content.
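SimSearchNet itself hasn't been released, but the general idea of catching near-identical re-uploads can be illustrated with a simple perceptual hash, which barely changes when a few pixels do. A minimal sketch, assuming hypothetical file names and an arbitrary match threshold; this is a generic technique, not Facebook's system.

```python
# A toy near-duplicate check via a perceptual "average hash": downscale to
# grayscale, threshold each pixel against the mean, and compare bit patterns.
# Small edits (a filter, a few changed pixels) usually flip only a few bits,
# so a low Hamming distance flags a likely re-upload of blocked content.
from PIL import Image

def average_hash(path, hash_size=8):
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]    # 64-bit fingerprint

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

known_bad = average_hash("blocked_meme.jpg")        # hypothetical file names
candidate = average_hash("new_upload.jpg")
if hamming(known_bad, candidate) <= 5:               # assumed match threshold
    print("Likely near-duplicate of blocked content; route to review.")
```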

Another computer vision system the company has deployed to enforce its new COVID-19 ad policy works by identifying the objects present in an image, not simply forming a statistical map of all of the pixels it contains. This way the algorithm should be able to determine that the image has a face mask in it, even if that face mask is rotated at a funny angle or shown against a background designed to make it harder for machine learning software to recognize it, Schroepfer said.
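Facebook hasn't published that detector, but the difference between object-level detection and whole-image classification can be sketched with an off-the-shelf model from the open-source torchvision library. The stock model's COCO label set doesn't include face masks, so a real enforcement system would need fine-tuning on labeled examples; the file name and confidence cutoff below are assumptions.

```python
# Rough sketch of object-level detection with a pretrained torchvision
# Faster R-CNN. Its built-in COCO classes do not include "face mask",
# so this only illustrates the mechanism; a policy-enforcement model
# would have to be fine-tuned on mask and sanitizer examples.
import torch
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from PIL import Image

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

img = Image.open("ad_image.jpg").convert("RGB")      # hypothetical ad creative
tensor = transforms.ToTensor()(img)

with torch.no_grad():
    detections = model([tensor])[0]                  # dict of boxes, labels, scores

for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                                  # assumed confidence cutoff
        print(f"object class {label.item()} detected with score {score:.2f}")
```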

Finally, the company said it was also working on “multimodal” machine learning systems—ones that can simultaneously analyze text and imagery, and in the future, possibly video and sound too—to combat the spread of hateful memes.
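Facebook's own baselines for this work live in its open-source MMF framework. The core idea, encoding the image and the overlaid text separately and fusing the two before a shared classifier, can be sketched as a toy "late fusion" model; the dimensions and encoders below are illustrative assumptions, not Facebook's systems.

```python
# Toy "late fusion" multimodal classifier: encode the meme's image and its
# overlaid text separately, concatenate the two embeddings, and classify.
# Dimensions and encoders are illustrative assumptions, not Facebook's
# Hateful Memes baselines.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MemeFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, fused_dim=256):
        super().__init__()
        vision = resnet18(pretrained=True)
        vision.fc = nn.Identity()                    # expose 512-d image features
        self.vision = vision
        self.fuse = nn.Sequential(
            nn.Linear(512 + text_dim, fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, 2),                 # hateful vs. benign
        )

    def forward(self, image, text_embedding):
        img_feat = self.vision(image)                # (batch, 512)
        combined = torch.cat([img_feat, text_embedding], dim=1)
        return self.fuse(combined)                   # logits over 2 classes

# Usage with dummy tensors: one 224x224 image and one 768-d text embedding
# (for example, from a language model like the XLM-R sketch above).
model = MemeFusionClassifier()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 768))
print(logits.shape)   # torch.Size([1, 2])
```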

To that end, the company has created a new dataset of 10,000 memes that were determined to be part of hate speech campaigns, and it is making the dataset freely available for researchers to use to build A.I. systems capable of detecting such memes. The company is also creating a competition with a $100,000 prize pool to find the best hateful-meme detection software, with the condition that researchers must commit to open-sourcing their algorithms in order to enter the contest.

As a benchmark, Facebook's A.I. researchers created several systems of their own and trained them on this dataset. But the company's results so far indicate how difficult the challenge is: Facebook's best hateful-meme detector, which was pretrained on a very large dataset of both text and images simultaneously, was only 63% accurate. Human reviewers, by contrast, were about 85% accurate and missed fewer than 20% of the memes they should have caught.
