
Why Mark Zuckerberg’s Not Apologizing For That Fake Nancy Pelosi Video—Data Sheet

June 27, 2019, 2:28 PM UTC

This is the web version of Data Sheet, Fortune’s daily newsletter on the top tech news. To get it delivered daily to your in-box, sign up here.

Aaron in for Adam today. You may have read over the past day that Facebook CEO Mark Zuckerberg apologized for the fake Nancy Pelosi video that went viral on the social network in May and maybe even that he said it was a mistake not to remove it immediately, as Google’s YouTube did. No such luck. If you watch Zuck’s lengthy answers to questions from Harvard Prof. Cass Sunstein at yesterday’s Aspen Ideas Festival event (on the video posted here), you’ll see that’s not quite what he said.

To recall, the clip relied on age-old video editing techniques, like slowing down passages, to make Pelosi appear to slur her words as if she were drunk. Shared on Facebook to undermine the reputation of the Speaker of the House, it was viewed more than 2 million times. Pelosi blasted Facebook for not taking down the video, saying the company was “lying to the public.”

At Wednesday’s event, Sunstein started off asking a basic question: “Why oughtn’t the policy be as of, say, tomorrow that if reasonable observers could not know that it’s fake, that it will be taken down and disclosure isn’t enough?”

But to Zuckerberg, who oversees the work of some 30,000 content and safety reviewers, there was too much grey area in the case of the doctored Pelosi video. “We don’t think it should be against the rules to say something that happens to be false to your friends,” he began. “People get things wrong. I don’t think people would want us to be censoring that and saying that it is against the rules on this service to write something that is factually inaccurate.” Instead, Facebook’s duty is only to prevent false information from spreading quickly, he argued. If content reviewers mark something as false, “we prevent it from getting any significant amount of distribution…and we also mark it as false in the service, so anyone who sees that content sees that the content is marked as false and we show related content that is more accurate.” The mistake in the case of the Pelosi video was that Facebook’s fact checkers were too slow to tag the clip as false. “That was an execution mistake,” Zuckerberg conceded.

Techie that he is, Zuckerberg had a different answer for so-called deep fakes. Those are the uncannily realistic fake videos made using artificial intelligence programs (like this one of former President Barack Obama). In those cases, maybe Facebook would take down the clips immediately, Zuckerberg said. “I definitely think there’s a good case that deep fakes are different from traditional misinformation,” he said. The company is currently reviewing its policy.

The divergent answers lead to a world that’s easier for Facebook to police, but not so great for society. All I need to do is gin up my bogus video using old-fashioned techniques and it stays up, but if I use A.I. it gets taken down? I think the Russians might be able to figure that one out. There ought to be a better way, perhaps a rapid escalation process from frontline content reviewers to a more discerning panel of experts. Earlier in the Aspen appearance, Zuckerberg called for more government regulation of the Internet, saying there were some decisions that “I don’t think people would want private companies to be making by themselves.” In the case of fake videos, that may just be the case.

Aaron Pressman


Inclusion. The organizers of San Francisco Pride decided not to ban Google from the annual Pride Parade on Sunday, despite receiving a letter from almost 100 Google employees concerned about how the company handled hate speech. Google came under fire this month for refusing to remove homophobic videos on YouTube targeting a journalist. Instead, YouTube banned hate speech and demonetized the channel, run by pundit Steven Crowder. In its next controversy, Google was sued on Wednesday over a program that used medical patient data shared by the University of Chicago Medical Center.

Under a microscope. After already taking a hit from U.S.-China trade tensions, chipmaker Broadcom got more bad news on Wednesday, with the start of an antitrust investigation by the European Union. Regulators are looking at whether Broadcom's dominance of chips for TV set-top boxes and broadband modems harmed competition. Elsewhere in Europe, the EU's High-Level Expert Group on A.I. recommended banning the use of the technology for scoring citizens and limiting its use for mass surveillance.

Just playing around. The three leading video game console makers, Sony, Microsoft, and Nintendo, sent a letter to the Trump administration asking that their products be exempted from tariffs imposed on goods made in China. Consumers would have to pay $840 million more for their gear under the tariff plan, the letter explained.

Jumping ship. As it becomes more likely that Apple will dump Intel chips from some or all of its computers, news arrives that the company has hired one of the stars of microprocessor design. Mike Filippo joined Apple last month, leaving ARM Ltd. He previously designed chips at Intel and Advanced Micro Devices.

Moving in the wrong direction. The problem of municipalities being attacked by hackers with crypto-ransomware is growing. After Baltimore and Atlanta suffered massive losses to their internal computer systems, the attacks are spreading to smaller cities that are paying ransoms to get their data back unharmed. Lake City, Florida, this week agreed to pay 42 bitcoins, worth about $500,000, to get its data unlocked.


The world is increasingly being encircled by an array of highly capable yet inexpensive satellites, as we've discussed many times previously. Christopher Beam in MIT's Tech Review examines what this might mean for privacy. Eventually, there could be an "eye in the sky" hovering over almost everyone, at all times.

Every year, commercially available satellite images are becoming sharper and taken more frequently. In 2008, there were 150 Earth observation satellites in orbit; by now there are 768. Satellite companies don’t offer 24-hour real-time surveillance, but if the hype is to be believed, they’re getting close. Privacy advocates warn that innovation in satellite imagery is outpacing the US government’s (to say nothing of the rest of the world’s) ability to regulate the technology. Unless we impose stricter limits now, they say, one day everyone from ad companies to suspicious spouses to terrorist organizations will have access to tools previously reserved for government spy agencies. Which would mean that at any given moment, anyone could be watching anyone else.


The Internet Is Different Depending Where You Live. But It Doesn’t Have to Stay That Way By Glenn Fogel

Amazon's New Store for Beauty Professionals: How Much of a Threat? By Kate Dwyer

Federal Cybersecurity Failures Include a 48-Year-Old System Few People Knew How to Use By Alyssa Newcomb

Most States Still Enforce Noncompete Agreements—And It's Stifling Innovation By Ellen Rubin

'Silex' Malware Renders Internet-of-Things Devices Useless. Here's How to Prevent It By Xavier Harding

EBay Is Crashing Amazon Prime Day 2019 By Don Reisinger

How Verizon-Owned Wireless Service Visible Is Getting More Appealing By Aaron Pressman


Remember the choose-your-own-adventure books? As the plot moved ahead, the reader had to choose a course for the protagonist. The book would then direct the reader to different subsequent pages depending on the choice. Now someone has ingeniously revived the medium on Twitter. And it's about being Beyoncé's assistant for the day. Too fun. I only made it three questions deep before getting fired. How far can you go?

This edition of Data Sheet was curated by Aaron Pressman. Find past issues, and sign up for other Fortune newsletters.