By Danielle Abril
Updated: January 22, 2019 2:08 PM ET | Originally published: January 16, 2019

A coalition of more than 85 activist groups sent letters to Microsoft, Amazon, and Google pressuring them not to sell their facial recognition technology to the government for surveillance.

In its letters sent Jan. 15, the coalition—which comprises groups including the American Civil Liberties Union, National Lawyers Guild chapters, and Freedom of the Press Foundation—warned the companies that a decision to supply government with the technology “threatens the safety of community members and will also undermine public trust.” That’s because facial recognition software gives the government the power to target immigrants, religious minorities, and people of color, thereby exacerbating a historical bias, according to the coalition.

“Companies can’t continue to pretend that the ‘break-then-fix’ approach works,” Nicole Ozer, technology and civil liberties director for the ACLU of California, said in a statement. “ … We are at a crossroads with face surveillance, and the choices made by these companies now will determine whether the next generation will have to fear being tracked by the government for attending a protest, going to their place of worship, or simply living their lives.”

The letter adds pressure to technology companies, whose attitudes toward selling the technology range from cautious to eager. In addition to being used for surveillance, facial recognition software can serve more benign purposes, like identifying celebrities in photos and videos so that media companies can more easily catalogue them.

Google, at least, appears to be treading lightly. In December, the company committed not to sell its facial recognition software until the technology’s potential dangers are addressed.

That came a couple of months after several Google employees quit in protest of the company’s $10 billion bid for a contract with the Pentagon over cloud data center services. Ultimately, Google withdrew its bid for the project, called the Joint Enterprise Defense Infrastructure cloud, citing uncertainty that the project aligned with its principles for the use of artificial intelligence. Google bans the use of its AI software in weapons and services that violate international norms for surveillance and human rights.

Similarly, Microsoft employees pressured executives to pull the company's bid for the JEDI project. At the time, however, Microsoft said it had no intention of dropping out.

In a December blog post, Brad Smith, president of Microsoft, acknowledged the risks associated with facial recognition technology and the company's obligation to address those concerns internally. He also pushed for government regulation of the technology's use.

And on the other end of the spectrum, Amazon continues to sell its facial recognition software to government agencies. Last year, CEO Jeff Bezos told CNN that society would eventually take care of any bad uses of the technology.

Earlier this month, the FBI reportedly began piloting Amazon software called Rekognition to help sift through surveillance footage collected during an investigation. Amazon also reportedly met with U.S. Immigration and Customs Enforcement to discuss Rekognition.

The ACLU is particularly concerned about Amazon and its technology. In a test the organization conducted last year, Rekognition falsely matched 28 members of Congress with images in a database of people arrested by police, and members of color were disproportionately misidentified.

Amazon said that the ACLU's test was conducted using the software's default "confidence threshold," which sets the minimum level of confidence the software requires before reporting a face match. Results improved when the setting was adjusted to a higher confidence level, the company said.

“When we set the confidence threshold at 99 percent (as we recommend in our documentation), our misidentification rate dropped to zero despite the fact that we are comparing against a larger corpus of faces (30x larger than the ACLU test),” Matt Wood, Amazon Web Services’ general manager of deep learning and AI, wrote in a blog post. “This illustrates how important it is for those using the technology for public safety issues to pick appropriate confidence levels, so they have few (if any) false positives.”
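The mechanism at issue is simple to illustrate. The sketch below is not Amazon's code and does not call Rekognition; it uses invented similarity scores to show how raising a confidence threshold from a permissive default to the 99 percent level Amazon recommends prunes weak matches, which is where false positives tend to live.

```python
# Illustrative sketch only: how a confidence threshold filters
# face-match results. All names and similarity scores below are
# hypothetical, invented for this example.

def filter_matches(matches, threshold):
    """Keep only candidate matches whose similarity score (0-100)
    meets or exceeds the threshold."""
    return [m for m in matches if m["similarity"] >= threshold]

# Hypothetical output of a face-comparison call: each candidate
# match carries a similarity score.
candidates = [
    {"name": "person_a", "similarity": 99.4},
    {"name": "person_b", "similarity": 87.2},  # plausible false positive
    {"name": "person_c", "similarity": 81.5},  # plausible false positive
]

# At a permissive 80% threshold, all three candidates are reported.
print(len(filter_matches(candidates, 80)))  # prints 3

# At a 99% threshold, only the strongest match survives.
print(len(filter_matches(candidates, 99)))  # prints 1
```

The trade-off the ACLU and Amazon are arguing over is visible here: a higher threshold suppresses false positives but also risks missing true matches, and nothing in the software forces an agency to use the stricter setting.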

Update: This story was updated to include a response from Amazon and to clarify potential uses of facial recognition technology.
