Disinformation isn’t just an AI matter, as Meta’s Oversight Board just reminded everyone

President Joe Biden at a campaign rally at Pearson Community Center in Las Vegas on Feb. 4, 2024.
Saul Loeb—AFP/Getty Images

The rapid proliferation of generative AI tools has rightly raised concerns about deepfakes in this extraordinary election year, but Meta’s Oversight Board just reminded everyone that the phenomenon of visual disinformation does not begin and end with AI.

The independent panel of experts, which Facebook set up in 2020 as a kind of supreme court for content moderation, has the final say on whether certain controversial calls made by Meta's moderators are in line with the company's stated policies. This morning, it upheld Facebook's decision to leave up a deliberately misleading video that claims President Joe Biden is a pedophile.

The board said the decision didn’t contravene Meta’s policy about manipulated media—but mainly because the policy is “incoherent.”

The video in question was, by looping a particular moment of genuine footage, edited to suggest Biden was leeringly touching his (adult) granddaughter’s chest. In fact, the clip was one of him placing an “I Voted” sticker above her chest after she asked him to do so. It was captioned to call him a “sick pedophile” and his voters “mentally unwell”; versions without this caption had already gone viral several months before, at the start of 2023.

Is this manipulated media? Obviously. But Meta's policy only forbids the posting of manipulated media that was AI-generated and that shows someone saying something they didn't. Neither of those conditions applies here, so no foul. The policy carries a further condition that this seven-second video also failed to meet: the looping was so crude and obvious that the average person wouldn't have been misled into thinking the footage was genuine. Nonetheless, the board seized its opportunity to lay into Meta's narrow focus on AI-generated disinformation.

“Meta should extend the policy to cover audio as well as to content that shows people doing things they did not do,” the panel said. “The board is also unconvinced of the logic of making these rules dependent on the technical measures used to create content. Experts the board consulted, and public comments, broadly agreed on the fact that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content.”

That wasn’t all. The board pointed out that Meta has two versions of the policy (stand-alone and as part of its misinformation community standard) that aren’t exactly aligned, and also said Meta should clarify what sort of harms its policy is trying to prevent. How soon? “Quickly, given the record number of elections in 2024,” the board said.

A Meta spokesman opted for quick-ish instead. “We are reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days in accordance with the bylaws,” he said.

All this might give the impression that the Oversight Board wants to see Meta remove more content, but that's not actually the case. Instead, in cases where no other policy has been violated, the board would prefer to see the content left up with labels "indicating the content is significantly altered and could mislead." This, it argued, would "mitigate against the risk of over-removals," while also avoiding "accusations of cover-up and bias."

Although Meta isn't obliged to follow the board's recommendations beyond its ruling on this specific case, let's hope the company is listening; if not, what's the point of the Oversight Board? More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

U.K. AI regulation. The U.K. has dropped its plans to release a voluntary code covering the use of copyrighted material in the training of AI models. The Financial Times reports that tech and creative industry executives couldn’t agree on what the code should look like. Meanwhile, the U.K. opposition Labour Party, which is widely expected to take power later this year, says it will replace a voluntary testing agreement—in which AI firms can release data from road tests of their systems—with a statutory code forcing transparency.

Huawei chip dilemma. Huawei uses one plant to make its Ascend AI chips and its Kirin chips, which power its hit Mate 60 smartphones. According to Reuters, low yield rates at the facility (a reference to the proportion of produced chips that are usable) forced the company to choose which type of chip to prioritize. AI won.

Oh, Snap. It’s Snap’s turn to announce layoffs. You know the drill: The cuts will “best position our business to execute on our highest priorities, and to ensure we have the capacity to invest incrementally to support our growth over time.” Around 10% of Snap’s workforce is affected, so, as Bloomberg estimates, that’s around 540 people.

SIGNIFICANT FIGURES

$5.2 billion

—The sum for which Yandex is selling its remaining Russian operations. Yandex (once known as "Russia's Google") has long had a Dutch parent, Yandex NV, which is now selling the Russian unit to a consortium led by the unit's local management team. Finding a buyer was tricky, given the need to avoid dealing with sanctioned individuals. Yandex NV will now rebrand.

IN CASE YOU MISSED IT

The money and drugs that tie Elon Musk to some Tesla directors, by the Wall Street Journal

‘At what point do you decide Tesla is bigger than Musk?’ The time may be right for Elon Musk to step down as CEO, suggest experts, by Christiaan Hetzner

Samsung’s billionaire chairman can lead the company without the threat of jail time after a court acquits him of stock manipulation charges, by Bloomberg

Could AI create a one-person unicorn? Sam Altman thinks so—and Silicon Valley sees the technology ‘waiting for us,’ by Paolo Confino

AI models are coming to fashion to promote diversity—but some industry insiders are concerned it will end up ‘parodying it,’ by Prarthana Prakash

Spotify signs reported $250M Joe Rogan deal two years after CEO denounced podcast host’s racist language but added, ‘I do not believe that silencing Joe is the answer,’ by the Associated Press

BEFORE YOU GO

Bitcoin trial. The Australian computer scientist Craig Wright has long claimed to be Bitcoin's inventor, "Satoshi Nakamoto," but now he's being sued by a nonprofit called the Crypto Open Patent Alliance (COPA) in London's High Court. COPA's members, which include the likes of Coinbase and Block, are tired of Wright accusing them of infringing on his intellectual property rights.

The trial began today; it will go on for five weeks. In his opening statement, COPA lawyer Jonathan Hough attacked Wright’s claim as “a brazen lie and elaborate false narrative supported by forgery on an industrial scale,” PA Media reports (do read for the details, which involve ChatGPT and an argument over which word processor was used to write the cryptocurrency’s seminal 2008 white paper). Wright will start to lay out his side of the story tomorrow.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.