
To protect American innovation, we must let websites keep moderating their own content

September 17, 2021, 10:00 AM UTC
Clear content moderation rules help users feel safe online.
Eric Lafforgue/Art in All of Us—Getty Images

By pushing antitrust policies that will blunt America’s competitive edge, our lawmakers are also mistakenly making the internet less safe for us all.

This summer, the House Judiciary Committee’s Subcommittee on Antitrust marked up and pushed through a flurry of bills designed to overhaul American competition policy and, more specifically, the consumer welfare standard. Our lawmakers were trying to thwart alleged gatekeeping by some of the most popular American companies, but in doing so, they forgot about the impact this would have on our digital free speech and online safety.

They also don’t seem to realize that they’re fighting against themselves. When these very lawmakers talk about Section 230—a law that, combined with the First Amendment, helps promote local news and views across the country—they want more regulation to protect Americans from harmful speech online. Yet when they turn around to talk about antitrust regulation, they don’t want any regulation of content at all. That leads to a confusing and burdensome regulatory regime that will hurt businesses and the public’s free speech online.

In fact, one of the bills in the House Judiciary antitrust package will make content moderation decisions potential antitrust violations. This bill, the American Choice and Innovation Online Act, could force websites to choose among three options: host neo-Nazi leaders alongside activists fighting for equal rights and social change, take down harmful content but risk an antitrust violation for not treating bad actors the same as typical users, or eliminate user-generated content altogether.

The bill will make it harder for Americans to feel safe on the websites and apps they use every day. Not only is this an overreach of antitrust regulation, but it is also a drastic content moderation decision that will turn our internet into either a cesspool we won’t want to wade through or a glorified media network where we, the public, can’t influence the conversation.

Similar to the common carriage proposals gaining traction, this bill takes the principle of nondiscrimination and uses it to prevent companies from disadvantaging their competitors. However, the bill’s principle of nondiscrimination also makes it impossible for online services of all kinds to restrict user-generated content. It renders useless the community guidelines written to protect vulnerable people.

Nondiscrimination from an antitrust perspective sounds simple enough: don’t discriminate between your own products and services and those of your competitors. However, banning discrimination from a content moderation perspective ruins the precarious balance between online safety and free speech. It means that the average social media user is no different from, and cannot be treated differently than, bad actors whose content threatens public health, national security, or the safety of vulnerable people.

As desirable as it may be, a compromise that balances content moderation and antitrust concerns doesn’t seem possible. The fact that Congress and academia have not developed a workable “middle ground” suggests it really is a trade-off: platforms can curate content as they see fit, or Congress can try to encourage them to curate in a certain way—which then risks judicial intervention under the First Amendment. In this sense, efforts for nondiscrimination on either front will inevitably open up websites to the threat of litigation on the other.

This leaves social media companies with two options: carry all users and user-generated content and inch the internet towards the worst humanity can offer, or take away the features that allowed user-generated content to thrive in the first place. Given that horrible content is simply bad for business, most companies will turn towards structured content. A morning scroll through Instagram won’t just be cute dogs and updates from family and friends; it could easily become a fabricated, inauthentic flip through channels of corporate-sponsored, pre-vetted content. This is not an internet the #MeToo movement or Black Lives Matter can thrive or even survive on.

Clear content moderation rules help Americans feel safe online. Parents, marginalized communities, and entrepreneurs shouldn’t have to endure harassment and abuse in order to seek support, resources, and opportunities.

Social media relies on contextualizing different types of content, removing bad actors, and promoting useful information to its users. Content moderation is at the core of that work, ensuring websites can balance free expression and online safety to maximize both. Otherwise, our friends and neighbors would have to wade through expletives, violence, and sexual content just to connect with their communities.

While some websites thrive on shock value and causing offense, most succeed because their users do not want to face the onslaught of horrible things humanity is capable of. What is going to happen to the internet when the protections allowing websites to govern their own content go away? I don’t know about you, but if our digital ecosystem devolves into a space where I can’t share life updates easily or becomes a cesspool of horrible content, I might just unplug. 

Kir Nuthi is an associate contributor at Young Voices writing on issues related to digital free speech and free enterprise.
