But how will it tell friend from foe?
Not that long ago, Facebook CEO Mark Zuckerberg refused to admit the social network needed to worry about the rise of “fake news,” saying the suggestion that it might have influenced the U.S. election was “a pretty crazy idea.”
How times have changed.
On Thursday, Facebook released a report that effectively admits the social network has been used by both governments and non-state agents as part of a series of orchestrated attempts to manipulate public opinion about political issues, including the U.S. election. Some of this has taken the form of artificially created “fake news” stories, as well as organized efforts to promote these stories and get them circulating as widely as possible.
In particular, the Facebook security team says it found evidence of a coordinated effort using fake Facebook accounts to spread a variety of information related to the U.S. election, including reports based on emails that were stolen from Democratic Party headquarters.
After the fake accounts promoted these stories, “organic proliferation of the messaging and data through authentic peer groups and networks was inevitable,” Facebook said.
U.S. intelligence sources have tied this kind of activity to Russian agents acting on behalf of the government and other state entities in an attempt to influence the election, and Facebook’s report said that the security team’s information “does not contradict” this conclusion.
Facebook’s security team says that as a result of its research, it has expanded its focus beyond traditional forms of abuse such as spam and malware, and will now pay attention to “more subtle and insidious forms of misuse, including attempts to manipulate civic discourse.”
So now that the giant social network has admitted that this kind of behavior is a problem, what does it plan to do about it? That’s where the hard part comes in.
The company says that it will suspend or delete accounts that are trying to engage in this kind of activity, after it identifies them using a combination of machine learning and the kind of threat analysis that many intelligence agencies use. The company said it has already shut down more than 30,000 fake accounts in advance of the French elections.
But how will Facebook know for sure that the accounts it is targeting are actually malicious agents or affiliated with government entities, as opposed to normal Facebook users who happen to be sharing fake news or racist propaganda? That’s not clear.
The company described several types of behavior that it said were associated with these campaigns, including sending out friend requests from spoofed accounts using real names. These requests may be followed up with malware links, or used to map the networks of users seen as vulnerable to future hacking or social-engineering attempts.
Other techniques include the use of coordinated “likes” coming from multiple fake accounts, in order to boost the visibility of a fake news story, as well as the creation and use of groups that spread propaganda mixed in with legitimate news stories.
“The inauthentic nature of these social interactions obscures and impairs the space Facebook and other platforms aim to create for people to connect and communicate with one another,” the report says. “In the long-term, these inauthentic networks and accounts may drown out valid stories and even deter some people from engaging at all.”
As with many of the other things Facebook does, the primary intent is to maintain what it sees as “authentic” forms of interaction. But finding the line between authentic and inauthentic may not be as easy as the company thinks.
For example, some of the behavior described by the security team wasn’t aimed at a specific political view. “We identified malicious actors on Facebook who, via inauthentic accounts, actively engaged across the political spectrum,” the report says, “with the apparent intent of increasing tensions between supporters of these groups and fracturing their supportive base.”
Flagging fake or spoofed accounts is one thing. But how do you differentiate between someone who is intentionally sowing political discord and someone who is just sharing fake news because it reinforces their existing biases? That’s the world Facebook finds itself in now.