There’s been a lot of attention focused over the past year on the rise of so-called “fake news,” a term that has even made its way into tweets by President Trump. But the problem has proven to be difficult to define, let alone solve.
What exactly qualifies as “fake news”? A story about secret child sexual abuse rings operating underneath a pizza parlor? A Breitbart News item that suggests billionaire George Soros pays anti-Trump protesters? Or a New York Times report that says something the president doesn’t want people to believe? All of these have been labeled fake news.
After initially dismissing the suggestion that it plays a role in the spread of hoaxes and inaccurate information, Facebook has implemented a number of features designed to address the issue, including flagging stories as unverified.
But will this actually correct the overall problem? Social media researcher danah boyd (who chooses to spell her name using only lowercase letters) argues in a recent essay that it won’t. And the reasons for that have a lot less to do with Facebook and a lot more to do with human nature.
For one thing, boyd says, no one—not even experts in the area—can agree on a definition of what “fake news” is. The term is used to refer to every conceivable kind of problematic content, “including both blatantly and accidentally inaccurate information, salacious and fear-mongering headlines, hateful and incendiary rhetoric produced in blogs, and propaganda of all stripes.”
The worst kinds of false news aren’t even the most obvious kinds, such as clear fakes or ridiculous assertions, says the Microsoft researcher and former fellow at Harvard’s Berkman Center.
There are definitely some “low-hanging fruit mechanisms” that platforms like Facebook and Google can use, boyd says, including cutting off the economic incentive for fake news by blocking certain sites from ad networks. But at the end of the day, she writes, “these are rounding errors in the ecosystem.”
“I don’t want to let companies off the hook, because they do have a responsibility in this ecosystem,” boyd says. “But they’re not going to produce the silver bullet that they’re being asked to produce. And I think that most critics of these companies are really naive if they think that this is an easy problem for them to fix.”
In boyd’s view, the problem we think we are describing when we use the term “fake news” can’t be solved by blaming social-media platforms, or digital journalism and the rise of clickbait, or Macedonian teens. All of these things play a role, but they are just symptoms of a deeper issue.
“No amount of ‘fixing’ Facebook or Google will address the underlying factors shaping the culture and information wars in which America is currently enmeshed,” she says.
In effect, the current obsession with fake news is just the latest version of a fight that Internet companies have been waging for years against unwanted content, whether it’s email spam, online bullying, or black-hat SEO (search engine optimization) techniques, says boyd.
If Facebook and Google crack down on “fake news” sites, the Microsoft researcher argues, those who have an interest in generating that kind of content will find ways around the restrictions, as many already have by using visual “memes” instead of links to news stories.
The issues that we need to deal with “are socially and culturally hard,” says boyd. They force us to confront “how people construct knowledge and ideas, communicate with others and construct a society. They are also deeply messy, revealing divisions and fractures in beliefs and attitudes. And that means that they are not technically easy to build or implement.”
In other words, Facebook and Google can make it harder to share (or make money from) false news reports or propaganda. But they can’t solve the underlying problems because those problems are fundamentally human ones. We are going to have to figure out how to fix those ourselves. And that is a much harder assignment than just changing some features on a social platform.