If you spend any time on Facebook then you’ve probably seen them, either in your main news feed or in the “trending topics” section—clearly fake news stories, many focused on the latest conspiracy theory about the 2016 election.
These are the kind of stories that Facebook’s editors used to weed out, but then the company fired them all after a controversy over allegations of political bias. Since then, the site has used algorithms to choose what shows up and when.
According to a recent experiment by the Washington Post, however, eliminating the human beings isn’t working that well—at least not at keeping out the fakes.
In order to see what Facebook considered a trending topic and why, the newspaper’s Intersect team started tracking what stories and links were trending every hour, and kept a record of them in a database.
Not long after Facebook switched from using human editors to mostly algorithm-driven curation, the site suffered a black eye when a fake story about Fox News firing host Megyn Kelly started trending.
As if that wasn’t bad enough, the social network then highlighted a story from a 9/11 hoax website in the trending-topics section, a piece claiming the collapse of the World Trade Center buildings was the result of “controlled explosions” rather than a terrorist attack.
After the Megyn Kelly incident, Facebook apologized, saying the story had met the criteria for a trending topic but should have been flagged as fake.
“We’re working to make our detection of hoax and satirical stories quicker and more accurate,” a spokesman told CBS News at the time.
According to the Post’s survey of topics, however, there is still a lot of work that needs to be done. Between Aug. 31 and Sept. 22, the paper says it found five trending stories that were “indisputably fake” and three that were “profoundly inaccurate.”
“I’m not at all surprised how many fake stories have trended,” a former member of the Facebook trending-topics team told the newspaper. “It was beyond predictable by anyone who spent time with the actual functionality of the product, not just the code.” In at least one case, a story trended that came from a site with the word “Fakingnews” in the domain name.
Facebook has made a point of denying that it is a media company, and its response to the Trending Topics controversy — getting rid of its human editors — can be seen in part as a desire to reject the media-entity status that some believe it deserves.
Regardless of what it calls itself, however, the reality is that Facebook is doing its best to host and distribute an increasing amount of news from media companies (including the Washington Post) through features like Instant Articles, which serves fast-loading mobile versions of stories from mainstream news outlets.
And a growing number of users say they get their news from the social network, particularly millennials.
In that kind of news-consumption environment, Facebook arguably bears a responsibility to ensure that the news it is providing is accurate. But so far it seems to be failing.
The company claims that it cares about the fake news problem, but if it’s not a media outlet then why should it? If a story is being shared a lot and clicked on a lot or is generating a lot of comments, what difference does it make to Facebook whether it’s fake or not?
The risk of that kind of approach, especially during an election like the current one in the United States, is that fake news designed to drive a specific agenda—pro-Donald Trump or anti-Hillary Clinton, for example—can reach and influence a far greater number of people if it hits the trending topics than it might have otherwise.
Many of these stories come from little-known websites that produce content solely for the click traffic they know Facebook will send their way, even when a story is a hoax.
In a campaign in which the Republican candidate already has a somewhat fragile relationship with the truth, that could have significant repercussions, and Facebook should arguably be owning up to those responsibilities instead of trying to downplay them.
And maybe, rather than pretending that the algorithm can do everything, the social network should try hiring some human editors again to help it spot fakes more quickly.