After removing millions of malicious apps and questionable accounts, Twitter is now cracking down on trolls in live video broadcasts.
Twitter said Friday that it will begin “more aggressive enforcement” of its community guidelines, which prohibit comments promoting “abuse of other people or the disruption of another person’s broadcasting experience.”
Beginning August 10, Twitter says it will “review and suspend” repeatedly abusive accounts.
Since May, users have been able to report comments as “Abuse,” “Spam,” or “Other Reason.” When a comment is reported, Twitter randomly selects other viewers to weigh in, labeling it “Abuse or Spam,” “Looks OK,” or “Not Sure.” A majority vote for abuse or spam temporarily disables the offending user’s chat; if that user continues to post abusive content, their chat is disabled for the rest of the broadcast.
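Twitter has not published the logic behind this system, but the flow it describes amounts to a simple majority-vote escalation. The sketch below is one way it could be modeled in Python; the function and option names are hypothetical, not Twitter’s actual implementation.

```python
from collections import Counter

# Hypothetical labels mirroring the options shown to reporters and voters.
REPORT_REASONS = {"Abuse", "Spam", "Other Reason"}
VOTE_OPTIONS = {"Abuse or Spam", "Looks OK", "Not Sure"}


def moderate_comment(votes, prior_strikes):
    """Decide what happens to a reported comment.

    votes         -- vote strings from randomly selected viewers
    prior_strikes -- times this user was already muted in the broadcast
    """
    tally = Counter(v for v in votes if v in VOTE_OPTIONS)
    # A majority flagging the comment as "Abuse or Spam" triggers a penalty.
    if tally["Abuse or Spam"] > len(votes) / 2:
        if prior_strikes == 0:
            return "disable_chat_temporarily"
        # Continued abuse disables chat for the rest of the broadcast.
        return "disable_chat_for_broadcast"
    return "no_action"


# Example: three of five sampled viewers call the comment abusive.
print(moderate_comment(
    ["Abuse or Spam", "Abuse or Spam", "Abuse or Spam", "Looks OK", "Not Sure"],
    prior_strikes=0,
))  # -> disable_chat_temporarily
```

Under the new policy, account-level suspension would be a further step beyond these in-broadcast penalties.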
With Twitter’s new enforcement, the accounts of repeat offenders can be permanently suspended.
Twitter suspended nearly 70 million accounts between May and June in an attempt to purge fake users and removed more than 143,000 malicious apps between April and June.