After Clearview, more bad actors in A.I. facial recognition might show up

March 23, 2021, 2:34 PM UTC
Facial recognition services like Clearview AI and PimEyes are raising concerns about the future of privacy protections.
John M Lund Photography via Getty Images

Hi, readers. Senior tech writer Aaron Pressman here, filling in for Jeremy. (To my Data Sheet subscribers: hi again!)

Because the controversial Silicon Valley facial recognition startup Clearview A.I. is based in the United States and backed by U.S. investors, it’s subject not only to U.S. law but also to the kind of public pressure that major media such as the New York Times can bring to bear. But what if a copycat tried to set up a similar service that anyone could pay to use, including stalkers, identity thieves, and whoever else? And what if they set it up far beyond the reach of U.S. law and major media?

A couple of quick Google searches turn up several possible candidates, though most don’t quite fit the bill. Berify is aimed at helping artists and other creatives find pirated images, not faces. Social Catfish, as the name implies, is targeted at online daters worried about the real identity of the person they just swiped right on.

And then there’s PimEyes, a “mysterious new site” mentioned by the Times recently in its deep dive on Clearview. At first glance, PimEyes appears to be much like Clearview, promising to use its A.I.-fueled algos to match an uploaded photo, for a fee, with images from all over the web and social media. And it’s open to anyone. I uploaded a picture snapped from my Mac’s webcam and it quickly returned a dozen accurate matches of me from across the Internet—and a bunch that were not me.
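
For a sense of the mechanics, services like this generally convert every face into a numerical embedding and treat matching as a nearest-neighbor search over those vectors. Here is a minimal sketch of that technique using the open-source face_recognition library as a stand-in; PimEyes hasn’t published its actual pipeline, and the file names here are hypothetical.

```python
# Toy sketch of embedding-based face matching, the general technique behind
# reverse face search. NOT PimEyes's actual pipeline: the open-source
# face_recognition library stands in, and the image files are hypothetical.
import face_recognition

# Encode the query face (say, a webcam snapshot) as a 128-dimensional vector.
query_image = face_recognition.load_image_file("webcam_snapshot.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Compare against faces found in candidate images gathered from the web.
for path in ["web_photo_1.jpg", "web_photo_2.jpg", "web_photo_3.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:  # skip images where no face was detected
        continue
    # Smaller distance in embedding space means more similar faces;
    # 0.6 is the library's default match tolerance.
    distance = face_recognition.face_distance([encodings[0]], query_encoding)[0]
    verdict = "likely match" if distance < 0.6 else "no match"
    print(f"{path}: distance={distance:.3f} -> {verdict}")
```

At web scale, that per-image loop would be replaced by an approximate nearest-neighbor index over billions of stored embeddings.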

The web site says PimEyes is administered by Face Recognition Solutions Ltd., with this address: House of Francis, Room 303, Ile Du Port, Mahe, Seychelles. You know, Seychelles, the island chain off the coast of East Africa that is located more than 8,000 miles from the United States.

I corresponded a bit with the proprietors of the service, who said they bought the technology from a Polish startup in 2019 but declined to identify themselves further. Their goal, they say, is to throw open the gates of A.I. facial recognition:

“We truly believe that it is necessary to democratize facial recognition technology. Every person has the right to find themselves on the Internet, protect their privacy and defend themselves against scammers, identity thieves, or illegal usage of their image. Face recognition technology should not only be reserved for corporations or governments.”

The company notes that, per its terms of service, the tool is designed only to help customers find their own images out on the Internet, not to let people search for unknown persons. Search results include only the site where an image was found, not any personal data about the person pictured.

It does seem from the company’s pricing page that you might have to pay $80 a month to have your images blocked from the company’s search system. But my unidentified company rep pointed to a form on the web site where anyone can request that an image be deleted for free.

So perhaps for now the Pandora’s Box of A.I. facial recognition remains closed. For how much longer, nobody knows.

Aaron Pressman
@ampressman
aaron.pressman@fortune.com

A.I. IN THE NEWS

A popular Japanese online motorcycle enthusiast was not who she appeared to be. In posted pictures, Twitter user @azusagakuyuki looked like a young woman riding a fancy Yamaha bike. But she was actually a 50-year-old man using FaceApp and other software to disguise his appearance. A reflection in the motorcycle's side mirror and an unusually hairy arm gave the man away.

The European Commission's proposed rules for "high risk" A.I. programs may have a big hole in their standards. Politico reports that the rules fail to address the potential for racial, gender and other types of bias that have commonly cropped up in A.I. systems in the United States. “We shouldn't see the issues of the potential harmful impact on racialized communities through tech as a U.S. issue,” Sarah Chander of digital rights group EDRi tells the publication. “It's going to be wherever you find manifest structural discrimination and racial inequality.”

Stanford University's Institute for Human-Centered Artificial Intelligence released its annual data dump of A.I. developments. The pandemic failed to slow investment in A.I., as most reporting companies kept their dollars flowing steadily or increased their spending on A.I. projects. Job postings in the U.S., however, decreased 8% from 2019, to 300,999 jobs. Journal publications and (virtual) conference attendance were up.

So-called soft-bodied robots, those made from flexible components that can reshape the machine for various tasks, may get a boost from a newly developed deep-learning technique. Because the robots can take on an almost infinite number of shapes, it can be hard to program them to adopt the most efficient shape for the job they're assigned. MIT researchers used a neural network to plan where sensors should be placed to help the robots adopt the best shape for a particular task.
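
To make the pattern concrete, here is a toy PyTorch sketch of the general idea: a small network scores candidate sensor locations, and the top-scoring sites are kept. This illustrates the pattern only, not the MIT team's actual architecture; the scorer below is untrained and the candidate points are random, where in practice the network would be trained against a shape-sensing objective.

```python
# Toy sketch: score candidate sensor locations on a soft robot's surface with
# a small neural network and keep the best ones. Illustrative only; not the
# MIT team's architecture, and the training step is omitted.
import torch
import torch.nn as nn

# 200 candidate placements, each an (x, y, z) point on the robot's surface
# (random stand-in data).
candidates = torch.rand(200, 3)

# A tiny MLP scoring how informative a sensor at each location would be.
# In practice it would be trained end-to-end against a reconstruction loss.
scorer = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

with torch.no_grad():
    scores = scorer(candidates).squeeze(-1)  # one score per candidate

# Keep the 10 highest-scoring locations as sensor sites.
sensor_sites = candidates[scores.topk(k=10).indices]
print(sensor_sites.shape)  # torch.Size([10, 3])
```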

EYE ON A.I. RESEARCH

With so many companies making so many promises to improve their environmental impact, who can keep track? A.I. to the rescue. A trio of European researchers has developed a deep neural language-based system called ClimateBert to read through thousands of corporate disclosures and assess how serious the companies are about taking action. The program found that, unfortunately, most of the voluntarily included language was boilerplate copied or adapted from the Task Force on Climate-related Financial Disclosures, or TCFD. The study concludes sterner regulation is required to force companies to report true climate risks to their businesses:

“In analyzing the disclosures of TCFD-supporting firms, ClimateBert comes to the sobering conclusion that the firms’ TCFD support is mostly cheap talk and that firms cherry-pick to report primarily non-material climate risk information. From our analysis, we conclude that the only way out of this dilemma is to turn voluntary reporting into regulatory disclosures.”
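
The general shape of such a system is easy to sketch: a transformer fine-tuned to label disclosure passages as boilerplate or substantive risk reporting. The snippet below uses Hugging Face's pipeline API with a placeholder model name; the researchers' actual checkpoint, labels, and training setup are not reproduced here.

```python
# Sketch of transformer-based disclosure classification. The model name is a
# hypothetical placeholder, not the researchers' released checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/climate-disclosure-classifier",  # hypothetical
)

passages = [
    # Generic TCFD boilerplate versus a concrete, quantified climate risk.
    "We support the recommendations of the TCFD.",
    "A two-degree scenario would impair $1.2 billion of our assets by 2030.",
]
for text in passages:
    result = classifier(text)[0]
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```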

FORTUNE ON A.I.

Israeli startup raises $18.5 million to train A.I. with fake data By Jeremy Kahn

Why Russia is cracking down on social media By Daria Solovieva

U.S. military to test whether jetpacks are ready for the battlefield By Jackie Snow

Online sign-ups made the U.S. vaccine rollout less fair. Here’s how to fix them By David Z. Morris

How Intel’s new CEO can revive the chipmaker’s fortunes By Aaron Pressman

Spark Capital steps away from its investment in Dispo By Lucinda Shen

BRAIN FOOD

My wife and I made a double comic-book-movie blunder last weekend. With all the hype around Zack Snyder's recut of the 2017 flop Justice League, we decided to watch both the new and original versions to compare and contrast. We slogged through the Whedon movie, barely making it to the end. A little silly, a lot incoherent. But the dark and humorless Snyder cut? We only made it about halfway. I can't even say which version was worse, but at least Whedon's was quick.

That had me thinking there must be a better way to go about comic-book movies. Researchers at Dalian University of Technology in China and City University of Hong Kong have an idea that reverses the flow from comic books to movies: Their A.I. system tries to turn movies into comic books. First a "key frame extraction" system grabs the most important images from a movie and turns them into comic-book panels. Then the system translates speech from the movie into text bubbles for the comic. The researchers picked some non-action movies and a TV show for their initial video-clip conversion tests: Titanic, The Message, Friends, and Up in the Air. A panel of viewers found the resulting comics better than those produced by other automated systems, though that seems like a pretty low bar.
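
If you're curious what "key frame extraction" can look like in practice, it can be approximated with simple frame differencing: score each frame by how much it changes from the previous one, then keep the biggest jumps as candidate panels. The OpenCV sketch below is a crude stand-in for whatever the researchers actually built, and movie.mp4 is a hypothetical input file.

```python
# Crude key-frame extraction by frame differencing: a stand-in illustration,
# not the researchers' method. "movie.mp4" is a hypothetical input.
import cv2
import numpy as np

# Pass 1: score every frame by how much it differs from the previous one.
cap = cv2.VideoCapture("movie.mp4")
prev_gray, scores, idx = None, [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Mean absolute pixel difference as a rough scene-change score.
        scores.append((idx, float(np.mean(cv2.absdiff(gray, prev_gray)))))
    prev_gray = gray
    idx += 1
cap.release()

# Pass 2: re-read the video and save the 12 highest-scoring frames as panels.
keep = {i for i, _ in sorted(scores, key=lambda s: s[1], reverse=True)[:12]}
cap = cv2.VideoCapture("movie.mp4")
idx, rank = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx in keep:
        cv2.imwrite(f"panel_{rank:02d}.jpg", frame)
        rank += 1
    idx += 1
cap.release()
```

Maybe I'll stick with reading human-drawn comic books and watching non-comic-book movies.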

