Researchers at Yahoo (yes, for the moment, it’s still Yahoo) have unveiled an algorithm that uses machine learning and natural language processing to detect online abuse and hate speech. Abusive behavior online has been in the limelight lately, both because it’s so inherently vile, and because it could alienate users of platforms like Twitter (TWTR) and Yahoo (YHOO), arguably threatening their bottom line, or even the entire digital economy.
Most such platforms use a combination of user reporting, keyword filtering, and monitoring by legions of trained humans to detect and block trolls and harassers. But filters are easy to work around through creative spelling (the example “kill yrslef a$$hole” pops up early in the researchers’ report).
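To see why, consider a minimal sketch of a blacklist filter. The word list and matching rule below are invented for illustration, not drawn from any platform's real system:

```python
# A naive keyword blacklist (hypothetical word list, not any platform's real one).
BLACKLIST = {"kill yourself", "asshole"}

def naive_filter(comment: str) -> bool:
    """Flag a comment only if a blacklisted phrase appears verbatim."""
    text = comment.lower()
    return any(phrase in text for phrase in BLACKLIST)

print(naive_filter("kill yourself, asshole"))  # True: the exact phrase is caught
print(naive_filter("kill yrslef a$$hole"))     # False: obfuscated spelling slips through
```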
Slurs and insults also shift rapidly, making blacklists ineffective, while some more subtle abuse can be expressed without any single objectionable word. All of that, plus the likelihood of false positives from sarcastic or satirical posts, makes the problem a thorny one for artificial intelligence.
The Yahoo researchers set their AI to scan a set of comments already flagged as abusive, looking for common traits. The comment dataset came from Yahoo Finance and News, which you wouldn’t think of as exactly the dank basement of the internet—but it turns out a whopping 7% of comments on Finance and 16.4% on News were deemed abusive by human screeners.
The program trained on those comments by scanning for short sequences of characters (character n-grams, in the jargon), which helped it catch non-standard spellings of offensive words. The model also tracked linguistic features like comment length, use of capital letters, and punctuation style. It could even parse so-called “dependencies,” the grammatical links between words, to find complex phrases that added up to abuse.
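The character-level idea is easy to approximate with off-the-shelf tools. The sketch below uses scikit-learn rather than the researchers' actual pipeline, and the training comments and labels are placeholders; the real model was trained on millions of human-labeled comments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data standing in for human-labeled comments.
comments = [
    "have a nice day",
    "kill yrslef a$$hole",
    "great article, thanks",
    "u r worthless trash",
]
labels = [0, 1, 0, 1]  # 1 = abusive, 0 = clean

# Character n-grams (3-5 characters) catch non-standard spellings like
# "yrslef," because fragments such as "a$$" or "slef" still show up as features.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(comments, labels)
print(model.predict(["kill yrslef"]))  # likely [1], despite the misspelling
```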
The program was then tested by comparing its judgments to the majority opinion of human screeners. The researchers found that, at its best, their model beat prior models by a substantial margin, matching the human majority in as many as 90% of its classifications.
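Scoring a model this way is simple in principle: take the majority vote of the screeners as ground truth and measure how often the model agrees. The votes and predictions below are invented to show the arithmetic:

```python
# Each comment is labeled by several human screeners; the majority vote
# becomes the ground truth the model is scored against. (Invented data.)
screener_votes = [
    [1, 1, 0],  # majority: abusive
    [0, 0, 0],  # majority: clean
    [1, 0, 1],  # majority: abusive
    [0, 1, 0],  # majority: clean
]
model_predictions = [1, 0, 1, 1]

majority = [1 if sum(v) > len(v) / 2 else 0 for v in screener_votes]
agreement = sum(p == m for p, m in zip(model_predictions, majority)) / len(majority)
print(f"Agreement with human majority: {agreement:.0%}")  # 75% in this toy case
```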
What’s most interesting about the results is that the model performed best when its training data was refreshed over time, an indication of just how fluid online abuse is. In fact, while larger datasets produced better results, even a much smaller but more recent comment set yielded fairly accurate classifications, which could be an important finding from an efficiency perspective.
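One way to act on that finding is to retrain on a sliding window of the freshest labeled comments rather than on an ever-growing archive. This is a rough sketch, not the researchers' method; the window size and retraining schedule are assumptions:

```python
from collections import deque

# Assumption: keep only the most recent labeled comments so the model
# tracks newly coined slurs and slang. The window size is invented.
WINDOW_SIZE = 100_000
recent = deque(maxlen=WINDOW_SIZE)  # oldest examples fall off automatically

def record(comment: str, is_abusive: int) -> None:
    """Store each freshly screened comment as it arrives."""
    recent.append((comment, is_abusive))

def retrain(model):
    """Refit the classifier on the recent window, e.g. on a nightly schedule."""
    texts, labels = zip(*recent)
    model.fit(texts, labels)
    return model
```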
The researchers have said they will soon make their datasets available through Yahoo’s Webscope program. That data, however, is explicitly available only to non-commercial researchers—which means this work may wind up being a part of Yahoo that’s actually worth something to its new owners.