The U.S. facial recognition firm Clearview AI, which serves law enforcement agencies, has been banned from scraping images from websites in Australia and ordered to delete the data it has already collected there.
It’s the latest in a series of regulatory blows to the controversial firm—whose services are already banned in Canada. But the order from Australia’s privacy regulator is also yet another reminder of how contentious facial recognition is in general, coming as it does just after Facebook said it would stop using such technology on its social network.
Clearview AI collects pictures of people from social media platforms and other public sources, then makes the biometric information available to paying customers, who use it to find other pictures of the subjects online. That practice is illegal in Australia, the country’s Information Commissioner and Privacy Commissioner, Angelene Falk, announced Wednesday.
“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said in a statement.
“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database. By its nature, this biometric identity information cannot be reissued or canceled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification. These practices fall well short of Australians’ expectations for the protection of their personal information.”
The watchdog said what New York–based Clearview AI was doing was already illegal under Australian privacy law, on multiple counts. But the case nonetheless showed the law—which is under review—should be strengthened to ban the scraping of personal information from online platforms. “It also raises questions about whether online platforms are doing enough to prevent and detect scraping of personal information,” Falk added.
Clearview AI said in a statement that the Australian regulator “has not correctly understood how Clearview AI conducts its business,” despite the company volunteering “considerable information.”
“Clearview AI intends to seek review of the Commissioner’s decision by the [Australian] Administrative Appeals Tribunal. Not only has the Commissioner’s decision missed the mark on the manner of Clearview AI’s operation, the Commissioner lacks jurisdiction,” the firm said. “To be clear, Clearview AI has not violated any law nor has it interfered with the privacy of Australians. Clearview AI does not do business in Australia, and does not have any Australian users.”
The Australian regulator worked with the U.K.’s Information Commissioner’s Office (ICO) on its investigation, but only in gathering evidence—they have different laws to judge that evidence against, and the ICO has yet to release its own determination. Both countries are also probing their own law enforcement agencies’ use of Clearview AI’s services.
Canada’s privacy regulator decided in February that “what Clearview does is mass surveillance, and it is illegal,” though, unlike its Australian counterpart, it did not have the power to order the deletion of people’s image data. The company had actually pulled out of the Canadian market in the summer of 2020, while that investigation was ongoing. However, in June, the Office of the Privacy Commissioner said Canada’s federal police force had broken the law by using Clearview AI’s databases while they were available.
Clearview AI is also under attack in privacy-first Europe, a region where it denies operating.
Back in January, the privacy watchdog in Hamburg ordered the company to delete what it had on one specific individual, who had complained his data protection rights had been violated. This was effectively a ruling over the service’s illegality everywhere in the EU, although it didn’t include an EU-wide ban. Several months later, privacy activists unleashed a flood of complaints against Clearview AI across the EU, alleging violations of the bloc’s General Data Protection Regulation (GDPR).
It is now a decade since the same Hamburg regulator (Johannes Caspar, who retired in June this year) told Facebook that its facial recognition technology—generally used to suggest name tags for people in pictures—was illegal. After the Irish data protection authority also expressed concerns about the feature, Facebook dropped it across Europe in 2012, before reviving it in 2018 with promises of legal compliance.
However, on Tuesday Facebook announced the imminent closure of its facial recognition system across the world, saying people who had opted into the feature would no longer be automatically recognized in photos, and more than a billion people’s facial recognition templates would be deleted over the coming weeks.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” wrote the firm’s A.I. chief, Jerome Pesenti, in a blog post. He said the technology could still be useful for identity verification and anti-fraud measures, and Meta—the new name for Facebook the company, as opposed to the specific service—would “continue working on these technologies and engaging outside experts.”
However, Pesenti said, “there are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”
He continued, “Every new technology brings with it potential for both benefit and concern, and we want to find the right balance. In the case of facial recognition, its long-term role in society needs to be debated in the open, and among those who will be most impacted by it.”
Facebook’s move, which comes during a period of intense scrutiny over the negative societal effects of Meta’s services, was enthusiastically greeted by civil liberties activists. “This is a tremendously significant recognition that this technology is inherently dangerous,” said Nathan Wessler of the American Civil Liberties Union, while the Electronic Frontier Foundation—the world’s highest-profile digital-rights group—warned that “companies will continue to feel the pressure of activists and concerned users so long as they employ invasive biometric technologies like face recognition.”
Europe is currently in the process of debating a new A.I. Act that will likely ban law enforcement’s use of remote biometric identification technologies, such as facial recognition, in public places. The EU’s data protection supervisor, Wojciech Wiewiórowski, recently told Politico that an outright ban is more appropriate than strict safeguards. He compared facial recognition with the cloning of humans, saying he was “not sure if we are really as a society ready for that.”
Update: This article was updated on Nov. 3 to include Clearview AI’s statement on the Australian decision.