The fight against nonconsensual sexual deepfakes is slowly getting somewhere

Taylor Swift performs onstage during "Taylor Swift | The Eras Tour" at Veltins Arena on July 17, 2024 in Gelsenkirchen, Germany. Andreas Rentz—TAS24/Getty Images for TAS Rights Management

People worry a lot about the various flavors of havoc that AI might wreak in the future, but one of the technology’s here-and-now problems is that of deepfakes, in particular those that depict actual women and girls in sexualized ways without their consent.

Public concern over the issue exploded earlier this year when people used generative AI tools to create explicit content featuring Taylor Swift. This kind of thing wasn’t really new—faked pornographic images have been a problem ever since Photoshop became a thing—but AI apps make it particularly easy to do, and Swift’s unusually high profile ensured widespread awareness.

Today, Meta’s independent (but Meta-funded) Oversight Board announced its findings on two deepfake cases that it took on in April. One was about an image on Facebook depicting a female American public figure in the nude, being groped—it is not clear whether this figure was Swift, as the Board didn’t name the people concerned. The other image, on Instagram, showed a female Indian public figure in the nude, depicted from behind.

The Board, which is sometimes referred to as Meta’s “Supreme Court,” gets the final say when it comes to specific cases such as these. It said today that Meta had done the right thing by taking down the picture of the American figure, despite the protestations of the poster. However, Meta had initially ignored users’ complaints about the image of the Indian figure, only taking it down when the Board took on the case. Here, the Board decided Meta had been in the wrong.

Beyond specific cases, the Board can only make non-binding recommendations to Meta about its future content moderation. So, Meta will now have to make up its own mind about the Board’s deepfake recommendations, which mostly center on the company’s policy banning “derogatory sexualized photoshop.” The Board reckons the word “derogatory” should be replaced by “non-consensual,” and the word “photoshop” by a term that reflects the changing nature of the tech people are using to make these images. The Board also wants Meta to start treating the fact that content is AI-generated or AI-manipulated as a signal for lack of consent in its “Adult Sexual Exploitation” policy.

Meta said it would review the recommendations.

Of course, a social media company can’t do much to those creating or sharing such content beyond banning them. So it’s also notable that, yesterday, the U.S. Senate unanimously passed a bill that would let people portrayed in sexually explicit deepfakes sue the creators.

The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act would give victims the ability to claim up to $150,000 in damages, plus an extra $100,000 if the deepfake was connected to “actual or attempted sexual assault, stalking, or harassment.” It has a companion bill waiting in the House, and Senate Majority Leader Chuck Schumer (D-N.Y.) urged his counterparts there to take it up. “By passing this bill, we are telling victims of explicit nonconsensual deepfakes that we hear them and we are taking action,” he said. But the House doesn’t have much time, as the August recess is just over a week away.

Over in the U.K., the sharing of such content was already criminalized in last year’s Online Safety Act. The former Conservative government was set to make creation a crime too, but all new bills were dropped when the Conservatives called the election that led to their ouster at the start of this month. Nonetheless, the new Labour government has also said it wants to criminalize the making of nonconsensual sexualized deepfakes—Deputy Prime Minister Angela Rayner has herself been a victim. Labour’s advisers have also recommended banning apps that are dedicated to making such imagery.

It’s unclear how easy these laws will be to enforce, especially as the proliferation of open-source AI models makes it easy for people to set up “nudification” tools without the guardrails that AI providers like OpenAI put in place. But the threat of severe consequences, and action by social media companies, will at the very least make it clearer to people that this kind of behavior is socially unacceptable.

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

Tech stock slide. Tech stocks continued to slide this morning after a brutal Wednesday that saw the Nasdaq Composite drop 3.6%—its worst daily performance in nearly two years. Given the outsized weight of tech in the general markets, yesterday’s slump also dragged the S&P 500 down 2.3%. Tesla’s weak earnings were a clear culprit, as Fortune’s Shawn Tully explains here, but there’s also a strengthening narrative that AI hype has overvalued stocks, and the companies driving that hype have yet to demonstrate that their massive investments are paying off. Stay tuned for Microsoft and Meta’s results next week.

IT catastrophe cost. Last week’s CrowdStrike IT catastrophe cost Fortune 500 companies an estimated $5.4 billion, according to insurance firm Parametrix, the Guardian reports. That figure does not include the cost of the incident to Microsoft, whose Windows platform played host to the widespread crashes caused by the cybersecurity firm’s badly vetted update. Reuters reports that Air France-KLM expects to have lost $11 million because of the incident. CrowdStrike has reportedly offered its partners a $10 Uber Eats gift card by way of apology.

Apple Maps on the web. Apple has finally launched a public beta of a web version of Apple Maps, a mere dozen years after the company’s Google Maps competitor landed on the iPhone. As TechCrunch notes, this means developers can finally link out to Apple Maps on the web, which should broaden the service’s appeal.

ON OUR FEED

“Are we really doing this again?”

Meta CEO Mark Zuckerberg reacts to Elon Musk saying he would “fight Zuckerberg any place, any time, any rules.” Musk challenged the far fitter Zuckerberg to a cage fight last year, but it never happened.

IN CASE YOU MISSED IT

Exclusive: Carta’s COO to leave the company after a year, by Jessica Mathews

NBA scores $76 billion deal with Amazon, Comcast, and Disney, by Bloomberg

Tesla and Alphabet earnings disappoint, triggering sharpest market decline in 2 years, by the Associated Press

IOC approves creation of Olympic Esports Games, by Chris Morris

What CIOs and CTOs plan to do differently after CrowdStrike’s massive tech outage, by John Kell

Michael Dell performed a ‘hard reset’ of his company so it could survive massive industry shifts and thrive again. Here’s how it’s done, by Rebecca Homkes

BEFORE YOU GO

4K in space. If you’re going to take people to the Moon again, you’re going to want to broadcast the evidence in sweet ultra-high definition—and NASA’s researchers have now demonstrated that they can do just that. As The Verge reports, the researchers used infrared laser communications to stream 4K footage from a plane to the International Space Station and back. NASA hopes to put people back on the Moon in 2028.

This is the web version of Fortune Tech, a daily newsletter breaking down the biggest players and stories shaping the future. Sign up to get it delivered free to your inbox.