In the wake of the surprising-to-many election of Donald Trump, Facebook has emerged in some circles as scapegoat of choice. A chorus of voices, saying that false news stories shared on the platform helped sway the election, is calling on CEO Mark Zuckerberg to take drastic action to control what’s shared on the site. And he’s responding with some concrete pledges.

The unfolding public relations headache presents several competing imperatives. Facebook broadly prefers to be seen as a neutral platform for social sharing. The shift towards editorial oversight involves serious technical challenges, as well as some uncomfortable regulatory implications.

But Facebook faces a third conundrum—getting rid of fake news might actually hurt its ad revenue. Interacting with news that confirms pre-existing beliefs triggers positive emotional responses in users and drives huge user engagement. News is a major part of what people interact with on Facebook, and whether a piece of content is true or not has no clear correlation to how many clicks it generates.

As media expert Clay Shirky put it to The Guardian, “We love bedtime stories.” Even, perhaps especially, when they’re fairy tales.

Filtering fake news would also have second-order effects that could not only reduce daily engagement, but drive users away from Facebook altogether, at least as a news source. While Trump supporters have been particularly skeptical of large news organizations and traditional fact checking, more than half of Clinton backers feel the same way. We’re a nation in the late stages of democracy, with a populace deeply distrustful of any and all authority and expertise. Social media’s dominance is both a cause and effect of that. We spend time on Facebook because we think our friends are better sources of information than pinhead professors or noodling newspaper editors.

If Facebook were to be perceived as having become just another top-down entity telling people what to believe, it would endanger a core element of its appeal. There would be a significant opening for a competing social-news platform that positioned itself as more open and unfiltered.

That said, there are counterpoint business arguments in favor of filtering fake news. Facebook has avoided many of the concerns about trolling and harassment that have been such a drag on Twitter’s prospects, in part because of evolving features that allow users to filter and report content and tailor their ad preferences. However, toxicity fueled by partisan fake news could eventually generate headwinds for Facebook similar to Twitter’s woes.

In a piece today from NPR, more than 150 respondents to an unscientific survey described reducing their Facebook use because of the intensity of the election environment. One, a Mormon supporter of independent conservative candidate Evan McMullin, described himself becoming “a meaner, more cynical person” thanks to Facebook, and limiting himself to an hour a week on the site. That’s a lot less than the nearly an hour per day Facebook now gets from its average user.

Of course, activists and pundits will protest that this is about more than business, that the fate of American political discourse is at stake. Maybe regulators will eventually agree, and start treating Facebook as a publisher (which would present a different set of huge business challenges). But those hoping Facebook cleans up its act do themselves no favors by pretending the bottom line here isn’t denominated, in large part, in dollars.