The new tool will be available for free and will help companies and organizations that monitor child sexual abuse material review 700 percent more material than they currently can. At present, tools like Microsoft’s PhotoDNA can help flag material on internet platforms, but only if it has already been marked as abusive. Finding new offenders requires human moderators to sort through images by hand, which is both emotionally taxing and inefficient.
Google’s tool will still require human review for confirmation, but it will present the reviewer with the material most likely to be abusive, rather than requiring them to sort through every item. Fred Langford, deputy CEO of the Internet Watch Foundation (IWF), one of the largest organizations dedicated to stopping the spread of child sexual abuse material online, told The Verge that the organization’s monitors would be trying out the tool, but he cautioned against trusting all the “fantastical claims made about AI,” saying the tool should be trusted only with the most “clear cut” cases.
According to the Committee for Children, child sexual abuse and assault affect one in four girls and one in 20 boys in the U.S., and the rate of abuse has plateaued after declining in the 1990s. The British National Crime Agency recently reported that referrals of child sexual abuse material were 700 percent higher in 2017 than in 2012. That doesn’t necessarily mean child sexual abuse is on the rise; it could instead mean that the reporting of abusive images is becoming more efficient. Still, the U.K. government is calling on tech platforms to “do more” to prevent child sexual abuse.