
Facebook Removed Nearly 750,000 Instagram Posts Showing Child Exploitation

November 13, 2019, 9:32 PM UTC

Facebook removed nearly 750,000 posts on Instagram containing images of nude and sexually exploited children during the third quarter, highlighting how the online photo service has become a popular destination for pedophiles.

The company, which released the data on Wednesday, said the number of such posts removed from Instagram was up nearly 50% from the 512,000 removed the previous quarter. It attributed part of the sharp rise to a technology glitch earlier this year that impaired its ability to detect new posts containing content already deemed harmful.

“It’s worth remembering that this is a tiny fraction of the content on Facebook and Instagram, and we remove much of it before anyone ever sees it,” Facebook CEO Mark Zuckerberg said during a conference call with reporters on Wednesday. But “when people are sharing a billion things a day, even a tiny fraction is too much.”

The data came from Facebook’s latest quarterly report highlighting its efforts to police its services. For the first time, the company provided information about content violations on Instagram, the photo-centric service that has more than a billion users.

Facebook played down how widely viewed banned content is on its service. The company said that for every 10,000 views on Instagram, only about four were of posts containing images of sexually exploited children.

In recent years, Facebook has been trying to clean up its service, which is rife with hate speech, pornographic images, and violence. The company said it spends billions of dollars on safety and has more than 35,000 employees, both contract and full-time, working on the problem.

Inappropriate content related to children is still vexing the service. On its core app, Facebook removed 11.6 million posts during the latest quarter that contained child nudity or exploitation—more than any quarter during the past year and up nearly 70% from the previous quarter. 

Still, Facebook said it’s getting better at using technology to proactively find posts that contain inappropriate content related to children. The company is hoping that technology will be a huge help in the future for finding objectionable posts of all kinds, considering the huge volume.

During the third quarter, Facebook identified almost 95% of posts on Instagram showing child exploitation before users reported them, up slightly from 93% during the previous quarter.

On Facebook, the company said it’s doing a better job of being proactive. It removed 99.5% of those posts without users having to flag them.

Meanwhile, Facebook also said it removed more than 7 million pieces of content between July and September that violated its hate speech policy. That’s more than double the 2.9 million it deleted during the same period last year.

Facebook said it has become better at identifying hate speech, now proactively catching hateful posts 80% of the time. A year ago, it was only able to identify 53% of posts without users flagging them.

The company also removed 1.7 billion fake accounts in the third quarter; such accounts are often used to post misinformation, spam, or harmful content. During the first three quarters of this year, the company removed 5.4 billion fake accounts, more than double the total number of users on its service.

Facebook is still struggling to proactively police bullying and harassment. It said its systems were only able to detect 16.1% of the 3.2 million posts removed in that category, down slightly from 17.9% during the previous quarter.

Additionally, Facebook reported posts removed in a new category—suicide and self-injury. It said it removed 2.5 million posts on Facebook during the third quarter, up from 2 million the previous quarter. Of those, 97% were identified by technology. Meanwhile, Instagram proactively removed almost 80% of 845,000 posts containing content related to suicide and self-injury. 

During the call on Wednesday, Zuckerberg also pressed other companies to provide more transparency about problematic content on their services, going so far as to say that regulation should require companies to do so. He advised people against assuming that Facebook has a bigger problem than other similar services.

“I don’t think that that’s actually what this says at all,” Zuckerberg said about the latest numbers. “What this says is we’re working harder to identify this and take action on it.”
