In early June, civil liberties groups voiced alarm at a leaked memo showing the U.S. Justice Department had authorized the Drug Enforcement Administration to conduct covert surveillance and other investigations of those protesting the police killing of George Floyd.
The document, obtained by BuzzFeed News, doesn’t say exactly how the DEA would conduct this surveillance, but technology publication CNET speculated that the agency might deploy facial recognition technology. For its part, the DEA, along with other federal agencies, has refused to answer questions about whether it uses facial recognition and, if so, how.
Last October, the American Civil Liberties Union sued the Department of Justice to force it to reveal more. The case’s progress through the courts has been delayed as a result of the coronavirus pandemic, according to a May 15 court filing.
The potential use of facial recognition technology to identify those involved in the George Floyd protests is a sensitive topic because studies have demonstrated that most such systems are less accurate at identifying dark-skinned people. One reason is that camera technology isn’t good at light-balancing for different skin tones, especially when several tones appear in a single frame. But, more significantly, Black people are underrepresented in the data sets used to train facial recognition algorithms.
IBM, which had been spearheading efforts to create more inclusive facial data sets to train facial recognition algorithms, said on June 8 that, in the wake of the George Floyd protests, it would stop marketing, developing, and researching facial recognition technology and would actively oppose its use for mass surveillance or racial profiling.
Amazon said Wednesday that it was instituting a one-year moratorium on selling its facial recognition software, Rekognition, to U.S. police departments.
The following day, Microsoft president Brad Smith said the company would also cease selling facial recognition technology to U.S. police departments “until we have a national law in place, grounded in human rights, that will govern this technology.”
On top of this, the coronavirus pandemic has introduced an additional factor with the potential to derail companies selling facial recognition systems: the widespread use of face masks. Tech companies have been scrambling to retrain their algorithms to cope with people wearing masks. But whether such software can ever achieve the same accuracy as with uncovered faces is a topic of debate among researchers—and a cause for alarm among civil liberties groups already worried about the technology.
Most facial recognition software works by analyzing data points on the face and the distances between them. Cover many of those points with a mask, and suddenly the algorithm doesn’t have enough data to make an accurate identification.
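To make the mechanism concrete, here is a minimal sketch in Python of the geometric idea: representing a face by the pairwise distances between landmark points and comparing those signatures. All coordinates below are invented for illustration, and production systems use learned neural-network embeddings rather than raw distances, so this is a toy model of the principle, not any vendor’s method.

```python
import numpy as np

def feature_vector(landmarks: np.ndarray) -> np.ndarray:
    """Flatten all pairwise distances between landmark points into one vector."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(landmarks), k=1)  # each pair counted once
    return dists[upper]

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Lower scores mean the two faces' distance signatures agree more closely."""
    return float(np.linalg.norm(feature_vector(a) - feature_vector(b)))

# Invented (x, y) landmarks: two eyes, nose tip, two mouth corners, chin.
enrolled = np.array(
    [[30, 40], [70, 40], [50, 60], [38, 80], [62, 80], [50, 95]], dtype=float
)
# The same person seen again, with a little measurement noise.
probe = enrolled + np.random.default_rng(0).normal(0.0, 1.0, enrolled.shape)

print("full-face score:", dissimilarity(enrolled, probe))

# A mask hides the nose, mouth, and chin, leaving only the two eye landmarks
# and hence a single pairwise distance to compare: far too little to go on.
eyes_only = slice(0, 2)
print("masked score:", dissimilarity(enrolled[eyes_only], probe[eyes_only]))
```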
That’s what happened with Apple’s Face ID system for the iPhone, for instance. Users found that when wearing a mask, Face ID no longer worked. “Face ID is designed to work with your eyes, nose, and mouth visible,” Apple said in a statement.
In late May, Apple updated its operating system, enabling users to quickly bypass Face ID and enter a numeric passcode instead to access the phone. But it hasn’t introduced a way to use Face ID while wearing a mask.
Some companies that sell facial recognition technology say they’ve been able to get their software to work with mask wearers. Amazon had said, prior to its decision to stop offering the software to law enforcement, that its Rekognition algorithm could cope with the added complication. China’s SenseTime—which has become one of the world’s most valuable A.I. companies through selling facial recognition software but which has also been controversial for its work with the Chinese government—says it too has tweaked its software to recognize masked people. Several other companies, including NtechLab, a Russian facial recognition firm, and Corsight, an Israeli facial recognition company, also say their technology still works if people wear face masks.
Stuart Greenfield, spokesman for Facewatch, a U.K. company that sells facial recognition software that businesses use to identify known shoplifters or pickpockets, says the firm retrained its system to focus primarily on the area around the eyes and the distance between them. He says the software now achieves about the same accuracy—93.5%—that it does with an unmasked face. The only difference, he says, is that the software takes a few milliseconds longer to make an identification.
But others, like Los Angeles company TrueFace, whose software is used by such customers as the U.S. Air Force, say they are still trying to gather enough images of masked people to update their algorithms.
Experts warn that how well the technology actually works depends on how it’s used. Greenfield says one reason Facewatch was able to create a system that is not stumped by masks is that it’s intended to be used only with security cameras installed indoors at head height, where they can capture a close-up, full-face, head-on view of a person in relatively stable lighting conditions. It’s not designed for picking an individual out of a large crowd in a public space—and Facewatch does not sell its technology to police departments for that reason.
Ben Ziomek, the cofounder and chief technology officer at Actuate AI, which makes software for security cameras that detects whether someone is holding a firearm, says the problem with using facial recognition systems in busy public spaces is twofold. First, the cameras must capture faces at a high enough resolution, at least 300 to 500 pixels per foot, for an algorithm to work with. Below that resolution, he says, most algorithms struggle to make highly accurate identifications. He thinks that with the right training data and images of this quality, algorithms will be able to cope with people wearing masks.
But the second issue, he says, has to do with how many people the system is being asked to screen. Even with a 99.9% accuracy rate, a facial recognition system can still misidentify a lot of people in a busy public place, because a 0.1% error rate adds up quickly at scale. In New York’s Grand Central Terminal, for instance, which before COVID-19 had an average of 750,000 people passing through it each day, a 99.9% accurate algorithm would still produce 750 false positives daily: 750 people possibly drawing police scrutiny unnecessarily.
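Both of Ziomek’s points reduce to back-of-the-envelope arithmetic, sketched below in Python. The camera figures (sensor_width_px, scene_width_ft) are hypothetical stand-ins; the 300 pixels-per-foot floor and the 750,000 daily visitors are the numbers quoted above, so this is illustrative math, not a model of any real deployment.

```python
# Issue 1: is the camera sharp enough? Pixels per foot is roughly the
# sensor's horizontal resolution divided by the width of the scene it covers.
sensor_width_px = 1920           # hypothetical 1080p camera
scene_width_ft = 30.0            # hypothetical width of the monitored area
pixels_per_foot = sensor_width_px / scene_width_ft
verdict = "meets" if pixels_per_foot >= 300 else "falls below"
print(f"{pixels_per_foot:.0f} px/ft {verdict} the 300 px/ft floor")

# Issue 2: even 99.9% accuracy misfires at scale.
daily_visitors = 750_000         # pre-COVID Grand Central figure
error_rate = 1 - 0.999           # i.e., 0.1% of identifications go wrong
print(f"{daily_visitors * error_rate:.0f} expected false positives per day")
```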
In the U.K., the London Metropolitan Police in January began using facial recognition in several busy areas of the city to spot criminal suspects. But the system was criticized by privacy advocates, such as the group Big Brother Watch, for throwing off too many false positives—and that was before anyone was wearing face masks. In fact, in one test of the system, the police arrested a man who they alleged was deliberately wearing a mask in order to avoid being identified by the camera system.
“We now face the situation that, for many months ahead, the wearing of masks, scarves, and face coverings will be commonplace in public places in London. Even the staunchest advocates of facial recognition technology must surely accept that this is not the right time to be rolling out the use of this technology,” London Assembly member Caroline Pidgeon says. She is one of two London Assembly members who have formally asked the Met Police to stop using facial recognition. In response, the police department has said it is considering “pausing” its use of the technology.
The software the Met Police is using is made by Japan’s NEC, whose technology is also used by several U.S. law enforcement agencies, including Customs and Border Protection.
Benji Hutchinson, a vice president with NEC’s U.S. division, has told reporters that NEC’s software is trained on people wearing face masks, since they were often worn by people in Asia during flu season, but that “doesn’t mean it’s all perfect.”
Several groups of researchers have made data sets of people wearing masks freely available on the code repository GitHub for people to use to train facial recognition algorithms. But how these facial data sets are obtained is also a controversial issue.
For instance, in May, the ACLU also filed a lawsuit against Clearview AI, a controversial facial recognition startup that has marketed its software to U.S. police departments, accusing the company of violating Illinois’s stringent biometric data privacy law. The suit alleges Clearview broke the law by holding people’s biometric data without permission. Clearview has acquired many of the 3 billion facial images it claims to have by scraping Facebook, Pinterest, and other public social media sites, sometimes in violation of those sites’ terms of service.
Beyond how the training data is obtained, there are so far no standards or benchmarks for judging how well facial recognition systems work on people wearing masks.
The U.S. National Institute of Standards and Technology, which provides the tests against which most companies benchmark their facial recognition algorithms, has said that it plans to create a test for judging how well algorithms perform on masked faces. But because its projects have been delayed by the pandemic, the new test may not be ready anytime soon.
Clare Garvie, a researcher at the Georgetown University Law School’s Center on Privacy and Technology who has investigated law enforcement’s use of facial recognition, says face-mask wearing compounds her concerns about the software.
The uniqueness of facial data has never been scientifically established, she says. In addition, there’s little independently verifiable information about how accurate these systems are overall, let alone how accurate they are in any specific identification of an individual, she notes. Police departments have been reluctant to reveal this kind of technical information to defense lawyers. “There’s never been an evidentiary reliability hearing on the technology,” she says, adding that every time defense lawyers have asked for technical information on the software as part of the discovery process, prosecutors have opted to either drop or settle the case.
Most law enforcement agencies do have rules that a match from a facial recognition algorithm is alone insufficient to establish probable cause for an arrest, Garvie notes. It can be an “investigative lead,” but officers are supposed to have other evidence to establish probable cause before searching someone or making an arrest. And yet, she adds, in practice, there have been many cases in which officers have not followed these guidelines and have used the facial recognition software’s match as the sole basis for a search or arrest.
Garvie is also no fan of the use of facial recognition by stores and other businesses to prevent theft or shoplifting. She says there’s too little transparency and accountability over whose face winds up in a “watch list” data set and how that data is collected and used.
But she’s not optimistic that the widespread wearing of face masks owing to the pandemic will do anything to slow the adoption of facial recognition software. In fact, some companies selling the technology have, since the pandemic began, started marketing their software as a way to help governments or companies do contact tracing for infected individuals. “The surveillance company model is one of crisis profiteering,” she says. “And this is certainly a crisis.”