Europe’s privacy regulators call for a ban on facial recognition in publicly accessible spaces

Europe’s privacy regulators have called for a full ban on facial recognition systems monitoring people in all publicly accessible spaces—even shops and stadiums.

A couple of months ago, the European Commission—the EU’s executive body—proposed a regulation that would place strict safeguards on the use of artificial intelligence, with likely implications for global A.I. rules. The expansive rulebook covers a lot of ground, but facial recognition, a touchy subject in much of privacy-conscious Europe, is a key focus.

The Commission’s proposal classifies so-called real-time remote biometric identification systems—including facial recognition systems—as high-risk, meaning people need to be given clear information about how they’re being watched, and there needs to be good security and oversight for the data. Law enforcement would also be generally banned from using such systems, with some exceptions.

That’s not nearly strict enough, said the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) on Monday. (The EDPB comprises all the EU’s national data-protection authorities; the EDPS is a watchdog that monitors the Commission and other EU institutions, and would also be on the new European Artificial Intelligence Board that the regulation would create.)

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” said EDPB chair Andrea Jelinek and European Data Protection Supervisor Wojciech Wiewiórowski in a joint statement.

“Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach.”

“Necessary starting point”

The EDPB and EDPS don’t get to write or modify laws. However, they are the EU’s top advisers on privacy issues, and their unequivocal intervention will send a very strong message to the bodies that will spend the next year or two thrashing out the A.I. regulation’s final form—namely the Commission, the European Parliament and the Council of the EU, which represents national governments.

That message: Under EU law, A.I. threatens EU citizens’ fundamental rights and needs to be tightly and broadly reined in as soon as possible.

“A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for A.I.,” said Jelinek and Wiewiórowski.

It’s not just facial recognition, either—the privacy regulators called for “a general ban on any use of A.I. for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals, in any context.”

They also recommended banning A.I. systems that use biometrics to “categorize individuals into clusters based on ethnicity, gender, political or sexual orientation”—this would break EU anti-discrimination law. This has also been a hot topic in the U.S. recently; companies such as Microsoft, Amazon and IBM have stopped selling facial-recognition systems to law enforcement there, due to fears over racial profiling.

Under the EU watchdogs’ recommendations, companies hawking A.I. systems that infer emotions would also find their wares banned from publicly accessible spaces in the EU, except in certain contexts such as health care. The Commission’s proposal would allow emotion-inferring A.I. systems, but with strict controls.

The Commission and the privacy regulators appear to have the same view of social-scoring systems such as the ones being used in China: burn them with fire.

“The proposed regulation should also prohibit any type of use of A.I. for social scoring, as it is against the EU fundamental values and can lead to discrimination,” the EDPS and EDPB said.

“True ban”

Privacy campaigners, such as those at the European Digital Rights (EDRi) advocacy network, have previously criticized the proposed A.I. regulation as leaving the door open for discriminatory surveillance.

“Today’s opinion is clear proof, from the EU’s top data protection regulators, that facial recognition in publicly accessible spaces is a grave and disproportionate intrusion into people’s rights and freedoms,” EDRi policy adviser Ella Jakubowska said.

The regulation would create a single set of rules for the whole EU. According to Jakubowska, this provides an opportunity for a “true ban” on facial recognition in publicly accessible places, where “the many exceptions and loopholes in the current regulatory framework have allowed intrusive and invasive biometric mass surveillance to proliferate in almost every EU country, by law enforcement, other public authorities and private companies.”

“The Council of the EU and the European Parliament must listen to this incontrovertible advice as they develop their positions on the EU’s Artificial Intelligence Act,” she said.

“We very much welcome the EDPB and EDPS having issued a clear statement on the proposed A.I. Act, stressing the need to clarify certain central aspects of the proposal and to better define the red lines it is supposed to draw,” said AlgorithmWatch, a Germany-based campaigning group. “We share the two institutions’ view that some specific uses of A.I. systems are inherently incompatible with fundamental rights and that this includes the use of biometric recognition systems in public or publicly accessible spaces, which can enable forms of mass surveillance that can never be conducted in compliance with fundamental rights.”

Anna Cavazzini is the chair of the European Parliament’s internal market and consumer protection committee, IMCO, which is likely to play a major role in Parliament’s handling of the bill. The German Green politician welcomed the call to ban live facial recognition in public spaces.

“Civil society has long fought for a clear ban on facial recognition in public spaces which the Commission’s proposal unfortunately does not include,” Cavazzini said. “It is essential that legislation on A.I. is in line with European provisions on data protection and our fundamental rights and that it guarantees effective enforcement within the EU.”

“Red tape”

Guido Lobrano, the Europe chief at the U.S. Information Technology Industry Council (ITI)—whose members include the likes of Amazon, Google and Facebook—said his organization shares the regulators’ “goal of striving to ensure that individuals’ fundamental rights are respected.”

“We hope to have robust conversations with EDPB and EDPS moving forward on the potential uses and potential harms of A.I. and how these can be best addressed while allowing innovation to continue to flourish in the EU,” Lobrano said.

The Computer and Communications Industry Association (CCIA), which also lobbies in Europe on behalf of Big Tech, did not offer comment on the watchdogs’ recommendations.

A spokesperson instead pointed to an April statement which reacted to the Commission’s A.I.-regulation proposal by calling for the avoidance of “unnecessary red tape for developers and users.”

Update: This article was updated on June 22 to include Lobrano’s statement.
