Social media platforms like Twitter and Facebook have been skating by on an obscure federal law for years. 2 experts see a huge change coming

By Robert Kozinets, Jon Pfeiffer and The Conversation
January 4, 2023, 1:17 PM ET
Will legislation hinder Elon Musk's plans for Twitter? Suzanne Cordeiro—AFP/Getty Images

One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to defend the right to free speech. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media.

The legislation, Section 230 of the Communications Decency Act, gives social media platforms some truly astounding protections under American law. Section 230 has also been called the most important 26 words in tech: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But the more that platforms like Twitter test the limits of their protection, the more American politicians on both sides of the aisle have been motivated to modify or repeal Section 230. As a social media professor and a social media lawyer with long histories in this field, we think change to Section 230 is coming—and we believe it is long overdue.

Born of porn

Section 230 had its origins in the attempt to regulate online porn. One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls—which contains not just porn but also misinformation and hate speech—the absolutist stance that they have total protection and total legal “immunity” is untenable.

A lot of good has come from Section 230. But the history of social media also makes it clear that it is far from perfect at balancing corporate profit with civic responsibility.

We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.

Verification triggers

We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, they open up a space for meaningful conversation and dialogue. They have a right to share such concerns, and others have a right to counter them.

What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms try to detect misinformation, and many label, moderate or remove some of it. But many monetize it as well through algorithms that promote popular—and often extreme or controversial—content. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.

Twitter began selling verification check marks for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both vouching for it and monetizing that connection. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility—and into the world of tort, defamation and privacy rights laws.

Transparent caps

Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money the platform makes off of content, like a given tweet. The result is that both what is disallowed and how content is valued remain opaque.

One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that this definition isn’t easy, that it’s dynamic, and that researchers and companies are already struggling with it.

But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”

Twitter court

Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.” Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.

Though Twitter’s content moderation appears to be suffering from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like Twitter want to be more transparent, we believe that should also extend to their own inner operations and deliberations.

We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Rather than going to actual court for cases of defamation or privacy violation, Twitter court would suffice under many conditions. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.

How would it work—and would it work?

Since 2018, platforms have had limited Section 230 protection in cases of sex trafficking. A recent academic proposal suggests extending these limitations to incitement to violence, hate speech and disinformation. House Republicans have also suggested a number of Section 230 carve-outs, including those for content relating to terrorism, child exploitation or cyberbullying.

Our three ideas of verification triggers, transparent liability caps and Twitter court may be an easy place to start the reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The increased clarity of verification triggers and transparent liability caps would help set meaningful standards balancing public benefit with corporate responsibility in a way that self-regulation has not been able to achieve. Twitter court would provide a real option for people to arbitrate rather than to simply watch misinformation and hate speech bloom and platforms profit from it.

Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for monetization of content, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.

Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and Journalism, and Jon Pfeiffer, Adjunct Professor of Law, Pepperdine University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



© 2025 Fortune Media IP Limited. All Rights Reserved.