LinkedIn saw a massive influx of user posts and violations this year

December 24, 2020, 1:00 AM UTC

LinkedIn faced an unprecedented challenge this year as users increased the number of posts they made by 50%, representing a record rise in content on the service. But the influx also led to more problematic posts, prompting LinkedIn to tighten its rules and expand its content moderation team.

As events like the coronavirus pandemic, Black Lives Matter protests, and the 2020 U.S. presidential election led to increasing tensions both online and off, more posts on LinkedIn strayed from professional conversations into conspiracy theories, misinformation, and hate speech.

“We really needed to standardize and make clear what it meant to be constructive and respectful on LinkedIn,” said Liz Li, LinkedIn’s director of product management. 

This year, LinkedIn made a slew of policy changes, including prohibiting coronavirus-related misinformation in spring (the policy also extends to misleading information about the coronavirus vaccine). Following a rise in posts related to QAnon, a conspiracy theory tied to the far right, the service began cracking down in summer, removing QAnon posts that contained misinformation and disabling popular hashtags related to it. Then, in fall, LinkedIn clarified a number of policies, adding language such as “unwanted advances” to its sexual harassment policy, forbidding the use of racial and religious slurs, and banning excessively gruesome or shocking content.

The actions taken by Microsoft-owned LinkedIn come as all social media companies grapple with a rise in divisive, hateful, and misleading posts. Twitter and Facebook also have been rapidly changing their policies, labeling or removing everything from white nationalism to Holocaust denial and false claims of victory during the U.S. presidential election. For the most part, LinkedIn previously avoided many of the tougher challenges related to content moderation given that members tended to use the service only for professional networking and job hunting. But this year, that began to change.

“We started to see the content and the conversations on LinkedIn really sort of transform,” Li said. “A ton of it is great—it’s professional, it’s respectful. But at the same time, we’ve also seen an increase in members reporting that there’s stuff that they either don’t want to see or even stuff that would violate our policies.”

LinkedIn said it made a “significant investment” to expand the number of content moderators it employs, though it wouldn’t specify how many of its 16,000 employees review posts for the service. For comparison, Facebook employs more than 15,000 content moderators around the world. LinkedIn has also been working to strengthen its technology to proactively detect and remove problematic content before anyone sees it. This year, the company started asking members to specify what content they do and don’t want to see in their feeds and to provide reasons why.

As a result, LinkedIn is removing more harmful content than it ever has before. For example, from March to August, LinkedIn said it removed more than 20,000 pieces of content for being hateful, harassing, inflammatory, or extremely violent. For comparison, the service removed about 38,000 posts for the same violations over the entirety of last year.

Li said LinkedIn employees working on the issue have come together and collaborated more than ever this year, which was especially surprising given that staff are still working from home. She said the teams are emerging from the year “stronger operationally” and that, overall, more effort is being directed at tackling content moderation issues.

“A lot more people’s attention and bandwidth is focused on this area,” Li said.
