
Privacy, bias and safety: On facial recognition, Berlin and London choose different paths

February 2, 2020, 10:00 AM UTC
Südkreuz train station in Berlin, the site of prominent automated facial recognition trials in recent years.
Thielker/ullstein bild via Getty Images

At lunchtime on a very chilly January Wednesday, most commuters at Berlin’s Südkreuz station are more interested in catching their trains than considering the security cameras that record their movements. But many have strong opinions on the facial recognition technology that could revolutionize such surveillance by comparing everyone’s faces with those of known criminal suspects.

“I think automatic facial recognition is unnecessary and should be rejected. There are too many failures—too often a person is identified even though they aren’t a criminal at all,” complains Dietmar Lutze-Stallmann, 71. Janina Kirn, 35, takes the opposite view. “It can hinder major crimes. We live in times when terrorism is a major issue,” she says. “I think it makes society safer.”

It is no surprise to find these varied opinions at Südkreuz, the site of two prominent trials of automated facial recognition technology in recent years. The pilots were run by Germany’s Federal Police, which has responsibility for counter-terrorism and for security at the nation’s transport hubs, in partnership with national train firm Deutsche Bahn.

Although the police deemed the trials a success, the German government unexpectedly pressed pause on the technology’s rollout last Friday, removing the authorization for it from a legislative package reforming police powers.

“We think facial recognition is a very useful tool for the work of the police,” a spokesperson for the interior ministry tells Fortune. “We just wanted to make clear that there is acceptance among the citizens for this technology. We want to make sure the debates are carried out now, before we pass the law.”

By coincidence, London’s Metropolitan Police announced on the same day that it would be rolling out the technology, despite a raging debate in the U.K. about its implications for people’s rights.

“As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London,” said Met Assistant Commissioner Nick Ephgrave in a statement at the time. “Independent research has shown that the public support us in this regard.”

Privacy and bias

The issues around facial recognition technology are the same in Berlin as they are in London, or in San Francisco, where its use by police was banned last May. Or, for that matter, in Moscow and Beijing, where deployments are rolling out at speed despite local concerns.

A sign to make people aware of surveillance measures at Berlin’s Südkreuz train station.
Thielker/ullstein bild via Getty Images

Privacy is the most obvious issue, as the technology can hugely increase the potential for indiscriminate and largely invisible mass surveillance. But technologists and civil liberties advocates alike also warn of the racial and gender bias that can result from the systems’ A.I. being trained predominantly on images of light-skinned men, leaving women and people of color more vulnerable to incorrect identification.

Notably, Microsoft last year rejected a Californian law enforcement agency’s request to use its facial recognition technology because of the bias issue. It also turned down an unidentified country’s request to use the tech in its capital’s camera network, on the grounds that the country would probably have used it to suppress human rights.

In both Germany and the U.K., activists fought hard against the rollout of facial recognition by the police—obviously with very different results. London-based Privacy International responded to the Metropolitan Police announcement by warning of “a radical threat to our freedoms which threatens to undermine democracy under the cloak of defending it.” But Berlin’s Digitale Freiheit (Digital Freedom) is in a cautiously celebratory mood after the German government took a less bullish tack.

“We’re really happy with them pausing, though we want automated facial recognition to be forbidden for German authorities in public spaces, so we’re not done yet,” says Viktor Schlüter, a Digitale Freiheit activist. “We really hope it is not a trick to wait until public pressure goes down.”

Schlüter calls for “an open evaluation of the technology to see what risks it has and how well it performs”—he is deeply skeptical of the accuracy claims made by authorities that want to deploy facial recognition, and wants to see external, independent audits of the trials.

London’s Metropolitan Police claims its internal studies “have shown that in relation to ethnicities there’s no difference in how the algorithm behaves.” The police force also maintains that its facial recognition systems only generate false positives—in which scanned faces are incorrectly identified as those on a watch list—0.1% of the time. However, Sky News last year reported on an independent evaluation of the system that showed a stunning 81% inaccuracy rate.
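One plausible way to reconcile figures that far apart is that they typically measure against different denominators: a force may quote false alerts as a share of every face scanned, while an independent reviewer may quote the share of alerts that turned out to be wrong. The Python sketch below uses entirely made-up numbers, not data from the Met’s trials or from the evaluation Sky News reported on, to show how both framings can describe the same deployment.

```python
# Illustrative sketch only: all numbers below are hypothetical, not figures
# from the Met's trials or from any independent evaluation.

faces_scanned = 100_000   # hypothetical: total faces checked against the watch list
alerts_raised = 100       # hypothetical: faces flagged as possible matches
correct_matches = 19      # hypothetical: alerts later confirmed as genuine

false_alerts = alerts_raised - correct_matches  # 81 wrong alerts

# Framing 1: false alerts as a share of everyone scanned (looks tiny)
false_positive_rate = false_alerts / faces_scanned

# Framing 2: false alerts as a share of alerts raised (looks alarming)
alert_error_rate = false_alerts / alerts_raised

print(f"False alerts per face scanned: {false_positive_rate:.2%}")  # 0.08%
print(f"Share of alerts that were wrong: {alert_error_rate:.0%}")   # 81%
```

Which framing is more informative depends on what question is being asked: how often an innocent passer-by triggers an alert, or how much weight an officer should give any single alert.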

“Our testing showed the facial recognition systems had a minimal percentage of false positives,” says the German interior ministry spokesperson of the system trialed at Südkreuz. “From a technical point of view facial recognition systems can be implemented.”

Legality

Another major question is that of the systems’ legality under existing privacy law—particularly in Germany, where the history of surveillance by communist and Nazi regimes spurred strong restrictions in the country’s constitution, known as the Grundgesetz (Basic Law).

“The issue is whether to allow this kind of recognition or not, and what it means for the Grundgesetz,” says Sara, 34, a commuter at Südkreuz who—like almost everyone there who criticizes the technology when questioned by Fortune—refuses to fully identify herself. “It’s a question of constitutional rights when it comes to the police listening to people or watching them.”

Indeed, the German government’s sudden change of heart on the subject came days after Federal Data Protection Commissioner Ulrich Kelber said there was no legal basis for allowing automated facial recognition in public spaces. If it becomes impossible to move around without being observed, Kelber warned, people might refrain from participating in demonstrations even though they are legally allowed to do so—and that, as the Federal Constitutional Court has consistently maintained, is “a clear red line for governmental interventions,” he said.

The U.K.’s privacy regulator, the Information Commissioner’s Office (ICO), set out its own concerns in October. It said the police could legally deploy automated facial recognition, but would have to do so very carefully to stay on the right side of Europe’s and the U.K.’s tough privacy laws—the technology can be used only when necessary, proportionate, and of “demonstrable benefit to the public.”

“We have received assurances from the [Metropolitan Police Service] that it is considering the impact of this technology and is taking steps to reduce intrusion and comply with the requirements of data protection legislation,” the ICO said in a statement when the Met announced its rollout.

The London police force told Fortune that its system was designed to automatically delete the images of people whose faces did not match those on its watch list. “Where a match is made but no prosecution follows and there is no legitimate policing purpose for retention, biometric data will only be retained for up to 31 days,” it added.

“I don’t think [the German police] will try to identify and get into every citizen’s private life,” says Dan Sirbu, 27, on one of Südkreuz’s chilly Ringbahn platforms. “I don’t think it’s a bad idea as long as it’s very well protected by law and also very well protected against cyberattacks…Let them [debate the issue] of course—every voice must be heard—but everybody must be reasonable and talk with logic, not just conspiracy theories.”
