Twitter is removing the much-coveted verified status of users who post racist and hateful material, the company said Wednesday.
The decision to remove verification, displayed on user profiles as a blue badge with a check mark to signify accounts of "public interest," comes amid intense criticism of the company's policing of posts on its service. Over the past few days, critics had slammed Twitter for verifying the account of Jason Kessler, who helped organize a high-profile white nationalist rally in Charlottesville, during which three people were killed amid widespread violence.
Twitter (TWTR) said it is now reviewing an unspecified number of verified accounts and will remove the verified status if it finds that users have violated its new guidelines. Those guidelines prohibit, among other things, tweets that promote hate or violence against people based on their race, sexual orientation, or religion.
Several users, including Kessler and white supremacist Richard Spencer, have already complained on Twitter about losing their verified status.
Twitter is taking the step as an intermediate measure that falls short of deleting or suspending the accounts of users who violate its rules. The company did not say why it chose to remove verified status rather than suspend the accounts outright.
In July 2016, Twitter opened its verification process to public applications, allowing people the company had not previously recognized to gain the status. Doing so, however, created the perception that Twitter endorses the behavior of its verified users, the company said.
“We gave verified accounts visual prominence on the service which deepened this perception,” Twitter said. “This perception became worse when we opened up verification for public submissions and verified people who we in no way endorse.”
Twitter said it is “working on a new authentication and verification program,” but it didn’t say when the process would debut. Until then, Twitter will not verify users.