
European privacy activists launch international assault on Clearview AI’s facial recognition service

May 27, 2021, 9:33 AM UTC

Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.

A coalition of European digital rights groups has hit the controversial facial-recognition firm Clearview AI with a series of privacy complaints.

Clearview AI has scraped social networks and other online sources to compile a database of more than 3 billion facial images. It says its customers, which include thousands of law-enforcement agencies around the world, can search this repository for matches against their own images of criminal suspects in order to identify them.

Following a series of New York Times revelations last year that exposed what Clearview AI was doing and showed how the app was being used by private individuals to spy on others, American civil liberties advocates sued Clearview AI in California and Illinois, alleging privacy violations and the chilling of free speech. British and Australian regulators are also probing its practices, as is the Italian privacy authority. In February, Canada told Clearview AI to stay off its turf.

Now, the company—which claims it does not operate in the EU—faces a fresh legal barrage.

GDPR violations

On Thursday, European rights activists filed official complaints with data-protection authorities in France, Austria, Italy, Greece, and the U.K., alleging violations of the EU’s tough General Data Protection Regulation (GDPR), which came into force three years ago and carries potentially enormous fines.

The activist groups include Noyb—the organization fronted by Facebook nemesis Max Schrems—as well as the U.K.-based Privacy International, Greece’s Homo Digitalis, and Italy’s Hermes Center for Transparency and Digital Human Rights.

Specifically, they say Clearview AI is breaking the GDPR’s rules around lawfulness and transparency, purpose limitation—the principle that companies should only use personal data for the purposes they state up-front—and the need to appoint a representative in the EU when operating there.

In the EU, “personal data” is a broad category that includes any piece of information that can be linked to an identifiable person; as has been established by the EU’s highest court, photos of people qualify. The groups argue that Clearview AI’s processing of these images, in order to identify specific people, turns the images into the kind of sensitive personal data that’s off-limits to companies without the individual’s explicit consent.

“European data protection laws are very clear when it comes to the purposes companies can use our data for,” said Ioannis Kouvakas, a Privacy International legal officer, in a statement. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”

In a statement, Clearview AI said the data access requests at the heart of the complaints “only contain publicly available information, just like thousands of others we have processed.”

“Clearview AI has helped thousands of law enforcement agencies across America save children from sexual predators, protect the elderly from financial criminals, and keep communities safe,” it added.

Coordination hope

By attacking Clearview AI across multiple EU countries, the groups hope to trigger a unified response from the watchdogs with whom they lodged the complaints.

Clearview AI has no headquarters in the EU, so no single data-protection authority would naturally take the lead in this situation. There have been earlier orders against the company (the Hamburg data-protection authority, for example, told Clearview AI to delete one person’s data), but nothing at a pan-European level.

In cross-border cases such as this, European data-protection authorities tend to coordinate their activities through a structure, established under the GDPR, called the European Data Protection Board.

The board already said last year that “the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime.”

It is unclear how many law-enforcement agencies in the EU use Clearview AI, though Sweden’s privacy watchdog found earlier this year that Swedish cops’ use of the app was illegal.

Clearview AI claimed in its Thursday statement that it “has never had any contracts with any EU customer and is not currently available to EU customers.”

Update: This article was updated on May 27 to include Clearview AI’s statement.
