When Elon Musk last month announced he was buying the “digital town square” that is Twitter, senior European Union official Thierry Breton was quick to admonish the self-described “free-speech absolutist.” “Elon, there are rules,” Breton said. “It’s not your rules which will apply here.”
The rules Breton was referring to are embedded in a recent EU law called the Digital Services Act (DSA), which Breton, the internal market commissioner, helped design. Soon, the law will govern how Twitter must remove content from its platform—and if the U.S. firm doesn’t play ball, Breton warned, it could face mega-fines and perhaps even expulsion from Europe.
That set the stage for a big showdown, making it all the more surprising to see Musk prostrate himself before the commissioner in a video Breton tweeted on Monday.
“I really think I agree with everything you say,” gushed the Tesla/SpaceX/Boring CEO, who said the DSA was “exactly aligned with my thinking.” For good measure, he responded to Breton’s tweet with: “Great meeting! We are very much on the same page.”
But are they really? European experts aren’t so sure.
“It is very difficult to square Musk’s supposed free-speech absolutism with the DSA model, which is likely to cause serious collateral damage to online expression and access to information,” said Jacob Mchangama, founder and executive director of the Danish legal think tank Justitia.
“I’m very skeptical because [Musk] probably does not exactly know what’s in the DSA,” said Eliška Pírková, the global freedom of expression lead at the Brussels-based digital-rights NGO Access Now.
In fact, very few people know what exactly is in the new law, a blockbuster piece of legislation that aims to keep Europeans safer online from 2024 onward. Its finalized text has not yet been published. The European Commission, which proposed the DSA in late 2020, only agreed to its contents with the European Parliament and the EU’s member states a few days before Musk’s Twitter takeover was announced. The text now needs to be closely checked and translated into the official languages of the 27 EU countries before it can be set in stone.
But the DSA’s broad terms are now well known, and much of it establishes bloc-wide rules for content moderation on online platforms, the subject that so exercises Musk and will take up a lot of his time if and when his Twitter purchase goes through.
Under the DSA, online platforms won’t just have to take down illegal content when someone flags it to them, but will also have to reveal how their content moderation systems work, while also notifying law enforcement if they suspect serious crimes. Serious failures to comply could cost Big Tech firms up to 6% of global annual turnover in fines.
“Very Large Online Platforms” (a formal classification within the DSA) such as Twitter will also need to regularly assess how their services might contribute to online ills like the spread of disinformation—and mitigate those risks. Platforms’ moderation systems will be subject to independent audits, and they will also have to give researchers access to their internal moderation data.
These transparency and risk-assessment rules won’t necessarily clash with Musk’s stated aims for Twitter; the tycoon has said he wants to make the company’s algorithms “open source to increase trust.”
Musk has also repeatedly said he would be happy to take down any content that’s illegal, which again fits with the EU’s DSA—as opposed to the U.K.’s Online Safety Bill, which would also force platforms to purge material that is legal but “harmful,” such as bullying comments.
However, according to Mchangama, there are two ways the DSA’s approach to illegal content may threaten the free-speech aims that Musk holds so dear. The first is a longstanding issue with the incentives that moderation laws present: It’s easier for companies to be heavy-handed and over-block people’s uploaded comments and media than it is to under-block them and risk getting fined.
Several years ago, Germany introduced a law called the Network Enforcement Act, or NetzDG, which forced social media platforms to quickly pull down hate speech and fake news, or face up to €50 million in fines. The result? “The big platforms tended to expand their definitions of prohibited speech according to their own terms, and then use increased automated moderation to remove it,” said Mchangama.
“I fear that by obliging platforms to remove illegal content, there will be significant collateral damage to perfectly legal speech, and perhaps even speech that is not particularly in a gray area.”
Then there’s the question of what content is illegal, and where.
Unlike Germany’s NetzDG and the U.K.’s Online Safety Bill, the DSA doesn’t include a definitive list of categories of illegal material. Instead, it regulates how online platforms comply with the content laws of each EU country. And there’s a lot of variation there—for example, France criminalizes Holocaust denial but Denmark does not, and Lithuania bans the display of Soviet symbols while Germany does not.
“Illegal content differs very widely among the member states,” said Mchangama. And if an authoritarian leader were to win power in an EU country, he added, the DSA could give the leader an automatic “censorship machine” for purging online dissent deemed illegal.
According to Pírková, the DSA includes a crucial safeguard against this kind of weaponization: When it comes to the Very Large Online Platforms, the Commission itself will enforce the DSA, rather than leaving the job to national media regulators that could lose their independence under an authoritarian regime.
“[The DSA] introduces a more centralized model of enforcement in the hands of the European Commission,” Pírková said, adding that this was partly intended to mitigate potential problems with EU countries that are already backsliding on democratic standards, such as Hungary and Poland. (The other big reason is the EU’s recent experience with Ireland’s weak enforcement of the General Data Protection Regulation.)
But Mchangama sees danger in that approach.
“In general, centralized control, whether it’s private or public, tends to undermine free speech and access to information,” he said. “Whenever there’s a crisis, it becomes very easy for [the entity that has] centralized control to convince itself this is an existential danger.”
Mchangama is particularly worried about a “crisis response mechanism” that was added to the DSA late in negotiations to respond to the wave of disinformation that accompanied Russia’s invasion of Ukraine.
This mechanism will essentially allow the commission to declare an online state of emergency during major public security or health threats, forcing Very Large Online Platforms to deploy “proportionate and effective measures” against the threat. Little more is known about the tool at this point. “I think the underlying idea is very dangerous,” said Mchangama.
Pírková is also wary of the emergency mechanism. “We are always very cautious of hastily drafted last-minute measures like this,” she said. The Commission complied with Access Now’s demand for a sunset clause—the mechanism may only be activated for three months at most—but, said Pírková: “We still have a number of questions around the crisis response mechanism and how this truly works in practice.”
This all amounts to a wickedly complex legal landscape for Twitter’s future owner, though that is to be expected—while the U.S. has a relatively absolutist view on free speech, much of the world takes a more nuanced and varied view, and laws increasingly reflect that fact. However Musk’s free-speech stance develops, he will have little choice but to play along with those who set the rules around the world.
“Companies take their obligations very seriously, and just because the context is different in the U.S., that doesn’t mean they’re not going to comply with their obligations in the EU,” said Shóna O’Donovan, an associate in the technology regulation team at law firm Covington.