We’re starting to see why Brexit was a disaster for Big Tech

April 25, 2023, 4:20 PM UTC
Pro-Brexit demonstrators outside the Houses of Parliament, Nov. 23, 2016.
Jack Taylor—Getty Images

Brexit has been a disaster for the U.K., with most Brits now recognizing the country’s European divorce as the wrong decision. But 2016’s seismic referendum result is also playing out very badly for Big Tech.

Before Brexit took effect three years ago, companies like Microsoft and Google had plenty of run-ins with the EU’s enforcers, especially regarding antitrust violations that ended up being very costly. But at least they could take comfort in the fact that one particular violation would earn only one European mega-fine. That is no longer the case.

This morning, the British government unveiled the Digital Markets, Competition and Consumers (DMCC) Bill, which would give even more power to the U.K.’s antitrust watchdog as it tackles Big Tech. 

The Competition and Markets Authority has been very active in the field in recent years; examples of its actions include the unwinding of Meta’s Giphy purchase and the freezing of Microsoft’s Activision Blizzard takeover. But it still lacks serious fining powers. If the DMCC Bill passes as proposed, the agency will be able to levy fines as high as 10% of a company’s global annual revenue.

The EU’s new Digital Markets Act, which will come into effect next week, also enables fines as high as 10% of global turnover—so particularly egregious violations could earn a Big Tech firm a massive double whammy in Europe alone. Again, that’s twice the pain that would have been possible had the U.K. decided to stay in the EU. And the same potentially goes for violations of the General Data Protection Regulation, which is these days not only an EU law but separately also a British law, carried over from pre-Brexit days, with the same 4%-of-global-turnover maximum fine.

Meanwhile, the EU today gave companies such as Google, Meta, and Twitter the official designation of “VLOP,” which may sound like some sort of extraterrestrial-object classification but actually means “very large online platform” (very large online search engines get a parallel designation). That means the companies will be subject to the strictest rules in the EU’s incoming Digital Services Act (DSA), regarding things like content moderation and algorithmic transparency. 

The DSA’s fines run to 6% of global turnover. Here, the U.K. is playing worse cop to the EU, with its massively controversial Online Safety Bill—similar in some ways to the DSA, with the added feature of forcing tech firms to undermine strong encryption in people’s chats—turning the dial up to 10%.

The EU and U.K. are not simply replicating each other’s approach to tech regulation—Big Tech’s cheerleaders have recently been praising the U.K.’s relatively light-touch approach to A.I., for one thing—but they’re not terribly far apart. And for the large tech companies in their sights, that means double trouble.

P.S. Mega-fines are one thing, but changing companies’ behavior makes a bigger impact across the globe. The CMA’s recent Meta and Microsoft actions prove that point, but for the latest example, witness the EU’s antitrust authorities reportedly forcing Microsoft to unbundle Teams from its Office suite, in response to a Slack complaint.


David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman. 


Twitter’s threat to the financial system. Professors from five different universities have released a study confirming that social media contributed to the run on Silicon Valley Bank, and they say other banks face similar risks. The researchers dissected roughly 5.4 million tweets about publicly traded banking stocks from the start of the year through March 14, finding that the risk of a bank run “increases markedly” when firms are repeatedly mentioned during “periods of intense Twitter conversation.” Posts from startup founders and VCs can have a major impact: negative-sentiment tweets from those accounts had a “significant negative effect” on bank stock returns within five to 15 minutes of posting.

Bob Lee and Nima Momeni’s ties. Leading up to tech executive Nima Momeni’s arraignment today, Fortune spoke with more than a dozen people close to Cash App creator Bob Lee and Momeni to piece together how their two lives intersected. Friends and coworkers described Lee as an extrovert, often engaging with San Francisco’s “maker” community of hardware enthusiasts, where he met Momeni, who is now charged with Lee’s fatal stabbing. If found guilty, Momeni faces 26 years to life. His lawyer Paula Canny said that as of late last week she was still waiting for documentation of the prosecutors’ charges, and that Momeni will likely wait to enter a plea until his team has had time to review the prosecution’s evidence.

The tech that triggered Germany’s probe into Chinese telecoms equipment. In March, Germany’s interior ministry announced it was checking security-relevant components from two Chinese telecoms suppliers. One particular piece of tech, an energy management component from Huawei, prompted the ministry’s investigation, Politico reports, citing two anonymous lawmakers who were briefed by security officials. The interior ministry is asking network operators to share a list of all Chinese “security-relevant” components in a probe expected to finish in the coming months. But it’s unlikely that operators would “rip and replace” components provided by Chinese suppliers even if they’re deemed too risky, as doing so could trigger legal disputes.


“I got off that phone call and thought, I can’t solve this problem. I will spend the rest of my time at this company trying to bail out a ship that might sink more slowly because I’m there, bailing it out. But I don’t want to spend the rest of my life bailing out a sinking ship.”

Former head of trust and safety at Twitter Yoel Roth, talking about a conversation in which he asked Elon Musk to slow down the rollout of Twitter Blue so that there would be time to hire and train more content moderators. Musk thought that should take just a day, This American Life reports.


Why BuzzFeed’s founder reportedly turned down Bob Iger’s offer of $650 million—10 years before killing his news division and laying off 15% of staff, by Eleanor Pringle

‘Feel free’: Musician Grimes is okay with others using A.I. to create songs in her voice and will split any royalties with them, by Prarthana Prakash

New Coinbase court challenge adds to mounting legal battle: ‘We’re absolutely convinced the SEC is violating the law,’ by Leo Schwartz

South Korean president’s first meeting in the U.S. wasn’t with Joe Biden. Here’s why it was all about Netflix instead, by Nicholas Gordon

‘Crypto is dead in America’ thanks to regulators, says investor Chamath Palihapitiya, who once thought Bitcoin could hit $200,000, by Nicholas Gordon


Nvidia’s A.I. safety toolkit. Nvidia released an open-source toolkit to make A.I.-powered apps more “accurate, appropriate, on topic, and secure.” It’s called NeMo Guardrails, and it includes code, examples, and documentation to “add safety” to text-generating A.I. The toolkit is built to work with many generative language models to try to stop models from veering off-topic or using harmful language.

It was released today, and TechCrunch reports that Nvidia has worked on it for “many years,” realizing about a year ago that it was a good fit for models like ChatGPT. “A.I. model safety tools are critical to deploying models for enterprise use cases,” said Jonathan Cohen, VP of applied research at Nvidia.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.
