The biggest social media, video hosting, and bulletin-board sites on the internet are taking unprecedented steps to control what users can post and see. The moves have come in response to mounting backlash against everything from fake news to Russian manipulation to plain old offensive content, and anxiety among the advertisers on which giants like Google depend. The tightening controls point to a much different future for the web – one that might look, in some ways, like the past.
The latest sign of the trend towards tighter control is a report that YouTube plans to release a new version of its YouTube Kids app that will rely entirely on human curation. The current version of YouTube Kids, which relies primarily on algorithmic suggestions, has been found to occasionally display violent and disturbing videos.
That shift would likely require more staff at Google-owned YouTube, cutting into the profits enabled by more automated forms of content curation. Facebook has been hiring for similar reasons, announcing the addition of 3,000 video content moderators last May and another 1,000 ad reviewers in October. For a company with just 25,105 employees as of last December, those are huge numbers, suggesting that the push for tighter control could erode digital companies’ bottom lines.
But failing to grab the reins could have an even worse impact in the long run. Facebook’s recent privacy woes have led some advertisers to “pause” their campaigns there, according to COO Sheryl Sandberg. YouTube is more directly exposed, as shown by at least two instances last year when advertisers fled in response to the discovery of disturbing content.
YouTube has taken controversial steps to limit the risks to brands, including making smaller channels ineligible for advertising. In February, it issued a strong warning in the form of a takedown and ‘strike’ against conspiracy theorist Alex Jones, who built his huge audience largely on the platform. In perhaps its most dramatic move, it heavily censured one of its biggest homegrown stars, Logan Paul, after he posted videos that many saw as offensive.
Slightly smaller sites have taken similar steps. Twitter, whose leadership long upheld an absolute defense of free speech, has accelerated its suspensions of users who spread hateful content or promote violence. Reddit, which long hosted virulent content ranging from overt racism to borderline child pornography, has grown far more willing to shut down message boards dedicated to hate or other dangerous speech. Last October, it altered its policies to ban incitements to violence and shut down 10 white supremacist boards. Most recently, the site shut down a board tracking the bizarre “Q Anon” conspiracy theory.
The impact of this ongoing narrowing of acceptable speech on major sites is still hard to evaluate, but it’s likely to be substantial. New restrictions often provoke intense backlash from affected groups, including claims that freedom of speech is being eroded. Those claims are legally groundless: America’s First Amendment prevents the government from restraining speech, but it doesn’t require that a private publisher give any particular piece of speech a platform, and it doesn’t protect direct incitements to violence in any form.
But the outcries do point to the reality that some users will abandon platforms that restrict ideas they support. That suggests the internet could return to some semblance of its fragmented early days, with more extreme speech confined to smaller platforms like the anonymous 4chan, and more recent upstarts such as the Reddit-like Voat and the Twitter clone Gab, both of which cater more or less explicitly to the far right.
Those extreme political factions have seen a massive upsurge in recent years, fueled in part by the ease of organizing online. It’s not yet entirely clear whether pushing extreme or offensive content off mainstream sites will actually reduce those factions’ influence, but one recent academic study did find that when Reddit booted its worst offenders, the users who remained used drastically less hate speech.
And while tighter control often requires more staff, there is also a business upside. Twitter in particular has struggled with user growth for years, a problem CEO Jack Dorsey himself has linked to the prevalence of abuse on the site. So if these cleanup efforts succeed in making the site more civil, they could be good for both society and the bottom line.