
Wall Street’s ‘Dr. Doom’ says Elon Musk is a ‘lunatic’ when it comes to social media regulation

November 6, 2022, 12:30 PM UTC
Nouriel Roubini worries about Elon Musk, a self-described “free-speech absolutist,” running Twitter.
Simon Dawson—Bloomberg/Getty Images

Nouriel Roubini believes Elon Musk’s decision to buy Twitter was “a very bad idea,” and he considers the Tesla CEO “a bit of a lunatic” when it comes to regulating—or not regulating—speech on the social network.

The New York University economist, known as “Dr. Doom” for his pessimistic economic views—he predicted the 2008 housing bust and subsequent crisis—sees problems with Musk, a self-described “free-speech absolutist,” controlling a major social media platform.

“I think, honestly, the guy is a bit of a lunatic when it comes to these views of what is the proper, I would say, regulation of these platforms,” Roubini said in an interview with Fortune. “There are certain things that are inappropriate, and the current Twitter does its best, not perfectly, to try to limit that stuff.”

Musk, in a note to advertisers on Oct. 27—the day his $44 billion takeover deal was finalized—said he bought Twitter to “try to help humanity, whom I love,” and that he wants to turn the platform into a “common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence.”

But Twitter experienced a surge in racial slurs and anti-Semitic remarks immediately after Musk took over, drawing complaints from NBA star LeBron James and other big names.

Roubini noted that “lots of anti-Semites, racists, white supremacists, and neo-Nazis are spreading misinformation that is fed by Russia, China, Iran, North Korea to try to destroy our liberal democracy. And now [Musk is] saying, ‘I want all these people to be allowed to have a platform and spread this misinformation.’”

Musk, in response to a surge in N-word usage on Twitter after his takeover, pointed to an employee’s explanation that “nearly all of these accounts are inauthentic. We’ve taken action to ban the users involved in this trolling campaign—and are going to continue working to address this in the days to come to make Twitter safe and welcoming for everyone.”

He also rushed to reassure advertisers, in his Oct. 27 note, that Twitter would not become a “free-for-all hellscape,” adding the company would form a content moderation council “with widely diverse viewpoints” and that “no major content decisions or account reinstatements will happen before that council convenes.” 

But some major companies are already leaving the platform. General Motors, Pfizer, Volkswagen, and others have paused their advertising on the social network.

While insisting that Twitter won’t become a place “where anything can be said with no consequences,” Musk has indicated that users will have more leeway to say what they want, however offensive or misleading, than they had before. And he heavily criticized the company’s former leaders—some of whom he immediately fired—for being too strict and suppressive. 

“By ‘free speech,’ I simply mean that which matches the law,” he tweeted in April. “I am against censorship that goes far beyond the law.”

Roubini said that Musk would, in principle, allow “any form [of] ‘free speech.’” But, he said, “there’s a limit even to free speech. You’re not allowed to go dressed as a Ku Klux Klan member in front of a Jewish or an African American family with guns threatening them. That’s not acceptable. You want to demonstrate, you’re allowed to demonstrate, but not to physically threaten other individuals in our society. So free speech is not unlimited.”

Shortly after taking over the platform, Musk dropped hints about how he might address the risk of Twitter becoming too toxic for some users. He suggested that, just as moviegoers use maturity ratings to decide which films to watch, Twitter users could pick their own levels of content moderation. 

Central to the issue is Section 230 of the Communications Decency Act (passed in 1996), which shields internet platforms from liability for content posted by third parties. 

Roubini takes issue with the legislation and believes it needs to be modified.

“We will have to rethink that particular section in a way that doesn’t eliminate some of the protection,” he said, “but doesn’t give protection to the extent that we have today, which essentially allows any type of…disgusting, criminal, violent, threatening messages to be broadcast. That’s not acceptable in any civil society, period.”

Twitter did not respond to Fortune’s request for comment.
