Microsoft has made significant improvements to its facial recognition technology to address bias identified by researchers, the company announced in a blog post on Tuesday.
A study carried out by MIT researchers earlier this year found that Microsoft’s technology had an error rate of 20.8% when identifying women with darker skin tones. The same study found that gender-classification technologies offered by IBM and the China-based AI giant Megvii also stumbled when identifying darker-skinned women compared with lighter-skinned men.
Presumably, the data set the software company used when developing the facial recognition software was not diverse enough, containing far fewer photos of women with darker skin tones than of lighter-skinned men, which led to the bias in the technology. Microsoft’s Face API team made three changes to its development process to work toward fixing the issue.
In addition to incorporating more diverse data sets in its development process, the company’s blog post notes, the Face API team “launched new data collection efforts to further improve the training data by focusing specifically on skin tone, gender and age, and improved the classifier to produce higher precision results.”
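For readers wondering how such bias is detected in the first place, the audit the MIT researchers performed boils down to computing a classifier’s error rate separately for each skin-tone and gender subgroup rather than reporting a single aggregate accuracy figure. The sketch below illustrates that idea in Python; the record fields and the `classify_gender` function are hypothetical stand-ins for illustration, not any vendor’s actual API.

```python
# A minimal sketch (not Microsoft's actual pipeline) of a disaggregated
# accuracy audit: compute a gender classifier's error rate separately
# for each (skin tone, gender) subgroup instead of one overall number.
# The field names and `classify_gender` are hypothetical stand-ins.

from collections import defaultdict

def error_rates_by_subgroup(samples, classify_gender):
    """Return the error rate for each (skin_tone, gender) subgroup.

    `samples` is an iterable of records with `image`, `skin_tone`
    (e.g. "lighter"/"darker"), and `gender` ground-truth labels.
    `classify_gender` is the model under audit: image -> predicted label.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for s in samples:
        group = (s["skin_tone"], s["gender"])
        totals[group] += 1
        if classify_gender(s["image"]) != s["gender"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}
```

The point of breaking results out this way is that an impressive aggregate accuracy can mask a 20.8% error rate on darker-skinned women if that subgroup makes up only a small share of the test set.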
According to Hanna Wallach, a senior researcher in Microsoft’s New York research lab: “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”
Microsoft’s latest update to its gender-classification technology echoes a similar update IBM made to its own system earlier in the year. IBM also re-emphasized on Wednesday that it plans to release a public, annotated data set intended to help researchers improve the accuracy of facial-recognition technology. It should be noted that despite these efforts, both Microsoft’s and IBM’s gender-classification systems still identify lighter-skinned men more accurately than darker-skinned women, underscoring the challenges corporations and researchers face in weeding out bias in their AI classifiers.
Megvii has not publicly commented on whether it intends to improve the accuracy of its gender-classification technology in response to the MIT study.
While Microsoft’s effort to fix its facial recognition technology is a noble one, activists and researchers remain concerned that the company, along with other tech giants, could provide its AI technology to government agencies for policing and surveillance. As Microsoft’s research director Eric Horvitz recently told Fortune, the company is still considering the ethical issues that might arise when face-recognition software is used in “sensitive areas like criminal justice and policing.”
The company has yet to take a firm stand on the issue, and employees recently wrote an open letter to the CEO demanding that the company end its $19.4 million contract with ICE “for processing data and artificial intelligence capabilities,” the New York Times reported.
“As the people who build the technologies that Microsoft profits from, we refuse to be complicit,” employees wrote in the letter. “We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”