IBM has pulled out of the facial recognition game—the boldest move yet by a Big Tech firm to repudiate the discriminatory use of the technology.
In a letter to members of Congress, the company announced late Monday that it would no longer offer general-purpose facial recognition or analysis software.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency,” wrote CEO Arvind Krishna.
IBM is not the only company in the space to see a need for caution in deploying facial recognition technology, particularly in contexts where its use might infringe on people's rights.
In March, following controversy over a deployment on Israel’s border with the West Bank, Microsoft said it would no longer take minority stakes in companies selling facial recognition systems. Its venture arm said Microsoft would instead focus on “commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.” Microsoft chief legal officer Brad Smith has repeatedly called for greater regulation of the technology, and the company has, on at least one occasion, refused to sell its facial recognition system to a U.S. law enforcement agency.
Krishna’s letter addressed lawmakers including the Democrats who on Monday introduced a bill to reform policing rules in order to combat misconduct and racial discrimination. The legislation was unveiled two weeks after the death of George Floyd, a Black man killed by a white police officer in Minneapolis.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna wrote. “Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of A.I. systems have a shared responsibility to ensure that A.I. is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
Krishna also backed elements of the new bill, such as the creation of a federal registry of police misconduct, and measures to increase police accountability, including what he called “modern data analytics techniques.”
Bias is a hot topic in the A.I. community, particularly regarding facial recognition. Researchers often find that such systems are more likely to misidentify people with darker skin, and activists say this fosters discrimination. This issue, along with privacy fears, led the University of California at Los Angeles to drop its campus-wide facial recognition plans a few months ago.
And as the activist and research organization Algorithmic Justice League noted last week, facial recognition’s use by police is a particularly live issue at this juncture—thanks to deployments at protests over police brutality.
“The use of facial recognition technology for surveillance…gives the police a powerful tool that amplifies the targeting of Black lives,” wrote the group’s Joy Buolamwini, Aaina Agarwal, Nicole Hughes, and Sasha Costanza-Chock. “Not only are Black lives more subject to unwarranted, rights-violating surveillance, they are also more subject to false identification, giving the government new tools to target and misidentify individuals in connection with protest-related incidents.”
IBM tried to combat the misidentification problem last year by releasing a data set containing a million diverse faces, intended to better train facial recognition systems. Its own systems have certainly come in for criticism over the years. Buolamwini, an MIT researcher, found in 2018 that IBM Watson’s facial recognition system had an error rate of 0.3% when identifying lighter-skinned male faces, but as high as 34.7% for darker-skinned female faces.
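The kind of bias testing Buolamwini’s research performed, and that Krishna’s letter says vendors should audit and report, typically boils down to disaggregating a system’s error rate by demographic subgroup rather than reporting a single overall accuracy figure. The sketch below is only an illustration of that idea, not IBM’s or the study’s actual methodology; the field names (`group`, `predicted`, `actual`) and the toy records are hypothetical.

```python
# Minimal sketch of disaggregated bias testing: compute a classifier's error
# rate separately for each demographic subgroup. Field names and sample data
# are hypothetical illustrations, not any vendor's real API or results.
from collections import defaultdict

def error_rates_by_group(records):
    """Return {group: error rate} for a list of prediction records."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Toy example: a large gap between subgroups (like the 0.3% vs. 34.7%
    # finding cited above) would show up immediately in a report like this.
    sample = [
        {"group": "lighter-skinned male", "predicted": "male", "actual": "male"},
        {"group": "lighter-skinned male", "predicted": "male", "actual": "male"},
        {"group": "darker-skinned female", "predicted": "male", "actual": "female"},
        {"group": "darker-skinned female", "predicted": "female", "actual": "female"},
    ]
    for group, rate in error_rates_by_group(sample).items():
        print(f"{group}: {rate:.1%} error rate")
```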