LinkedIn has released new data that shows how often—or rather how rarely—it has removed harmful content, fake accounts, and abuse from its service.
The biggest problems on the social media site for professional networking are spammy content and fake accounts, according to the report. Within the first six months of the year, the company removed more than 60.4 million posts it deemed spam or scams, along with more than 21 million fake accounts. LinkedIn says it automatically identified 99.5% of the spam posts and 98% of the fake accounts.
But when it comes to harassment, adult content, violent or graphic posts, hate speech, and child exploitation, LinkedIn has intervened far less. The company reported that across all of those categories combined, it removed only about 32,000 posts. Harassment was the leading violation, accounting for more than 16,600 posts, followed by adult content at more than 11,000. The service removed only 22 posts related to child exploitation.
For comparison, Facebook removed 4 billion spam posts over the first six months of 2019. Twitter, meanwhile, took action on more than 584,000 accounts for hateful conduct alone.
“It would make sense that our numbers are different from some other platforms,” says Rob Hallman, LinkedIn’s vice president of legal, emphasizing the professional nature of LinkedIn. “We hope it continues to be a platform for getting and giving [professional] advice.”
This is the first time LinkedIn, which is owned by Microsoft, has provided information on the number of violations of its community standards. It comes as social media companies grapple with policing their sites, which have become rife with hate speech, pornographic images, and spam.
LinkedIn uses a combination of artificial intelligence and human reviewers to police its content. But users also play a big role in reporting the harmful content, LinkedIn says. Given that people come to the site to make connections that could benefit their careers, they’re far more likely to spot worrisome content or fake profiles and report them, according to the company.
Even so, the company says it still has more work to do.
“Any number greater than zero is too high,” says Madhu Gupta, LinkedIn’s director of product management, trust, and security. “When you’re in a professional context with members focused on looking for a job, the sensitivity of small numbers is really important.”
LinkedIn’s data on community violations doesn’t detail how many total reports were made across categories or how many violations, beyond spam and fake accounts, were caught proactively by its artificial intelligence. It’s also unclear whether the low number of reported violations reflects a professional environment that is simply less conducive to problematic content, or whether it says anything about how effectively the service polices what’s posted.
Either way, LinkedIn says it’s listening to the community and trying to provide more insight to the public about what’s happening on its service. It’s also working on developing tools to make it easier for people to report bad behavior.
“There is some bad stuff that happens and we want to be more transparent,” says Blake Lawit, LinkedIn senior vice president and legal counsel. “Members can look at this and evaluate us. If we’re going to remain trustworthy, this is just a part of it.”