Despite Twitter’s ongoing promises that it’s working to make its social network a safer place, the company punished just under 6% of the accounts users flagged for abusive or hateful content or violent threats during the second half of 2018.
But despite the rising number of reports, Twitter took action on fewer accounts. In the second half of 2018, Twitter took action against 5.6% of the flagged accounts, or, in hard numbers, more than 612,000 accounts. That’s down more than 27,000 from the nearly 640,000 accounts punished in the previous six months.
In response to a request for comment about the matter, a Twitter spokesperson said, “Not all reported accounts break our rules.”
But the pressure is on. The public is increasingly pushing for Twitter to improve its methods for finding and removing harmful content, bullying, and threats on its service. Twitter CEO Jack Dorsey is well aware of the problems, repeatedly calling safety a top priority.
Twitter says it takes several different actions against policy violators, including limiting a user’s tweet visibility, freezing the user’s account until a tweet is removed, and hiding a tweet. The company can also permanently suspend an account.
But Twitter has struggled to keep up with the volume of bad content posted on its network. Violence, sexual harassment, and conspiracy theories have run rampant on Twitter over the past few years.
The company has also been accused of cherry-picking the accounts it punishes, allowing some big names to continue violating its content policies. For example, Twitter let Infowars’ publisher Alex Jones tweet conspiracy theories for years before banning him in late 2018.
And reports of rule-breaking tweets continue to rise, according to the transparency report. Abuse, hateful conduct, and sensitive media — a category that includes graphic violence and pornographic content — were the top three reported violations throughout all of 2018. Twitter also tracks reports of child sexual exploitation, the release of private information, and violent threats.
Reports of abuse, private information, hateful conduct, and violent threats all rose during the second half of the year. The number of accounts flagged for child sexual exploitation and sensitive information declined.
Moving forward, Twitter said it’s working to proactively identify harmful content instead of relying on user reports, according to an update published last month. Whether the new approach works will likely be reflected in the numbers in the next transparency report, which is expected to be released later this year.