Facebook’s Relationship With Artificial Intelligence and Fake News: It’s Complicated
Facebook wants the public to know more about artificial intelligence as part of an effort to make the often-misunderstood technology less mysterious (and threatening) than it may currently seem.
The social networking giant is debuting a public relations campaign on Thursday involving videos and literature designed to educate people about specific artificial intelligence technologies like deep learning, which companies like Facebook and Google have used to train computers to recognize objects in pictures.
The goal is to “demystify” AI to the general public and “to tell people this is not magic, this is not Terminator either,” Facebook AI research chief Yann LeCun said at a recent press event. Facebook also hopes to spur interest in the technology so younger generations will be attracted to the field.
“AI is going to affect our lives and it’s going to affect technology and it’s very important to have some idea of how it works and what it can do,” LeCun said.
The AI campaign comes after critics slammed Facebook (FB) for its role in distributing false information during the 2016 presidential campaign. Those critics have suggested that Facebook could have prevented the spread of fake news on its service in the run-up to Donald Trump’s presidential victory.
Made-up news—like a recent story that claimed the Pope had endorsed Trump—was seen by some political analysts as contributing to the real estate tycoon’s win.
Additionally, critics have said Facebook’s mostly algorithm-controlled newsfeed creates so-called filter bubbles. Detractors say the social network’s algorithms, which are tailored to send users news and information they would like based on their beliefs and habits, can lead to a less-informed society in which counter-opinions or ideas are easily ignored, if they’re seen at all.
The claims come after Facebook CEO Mark Zuckerberg made artificial intelligence one of the key technologies for improving the social network. During Facebook’s annual developer conference in April, for example, the CEO said the company wanted to eventually incorporate its deep learning-powered image-recognition technology into its newsfeed. Though Zuckerberg did not elaborate fully, the general idea was that the technology could let Facebook show users news stories containing photos that correspond to a person’s unique taste.
As a campaign blog post details, artificial intelligence technologies like machine learning are routinely found in common services like Apple’s Siri digital assistant. Apple (AAPL), for example, improved Siri’s ability to recognize human voices by incorporating so-called neural nets, essentially software systems built to loosely simulate the way the human brain learns, according to a report by tech publication Backchannel.
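To make the neural-net idea concrete, here is a toy sketch of a single artificial “neuron” that learns by nudging its weights after each mistake. This is purely illustrative and is not how Siri or Facebook’s systems are actually built; real deep-learning models chain millions of such units and use more sophisticated training methods.

```python
# Toy single-neuron learner: it adjusts its weights a little after every
# wrong answer, loosely analogous to how larger neural nets are trained.

def train_neuron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred  # -1, 0, or +1
            w1 += lr * err * x1  # nudge each weight toward the right answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Learn the logical OR of two inputs purely from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = train_neuron(data)
print([predict(weights, p) for p, _ in data])  # → [0, 1, 1, 1]
```

The key point is that nothing is hand-coded about “OR”: the behavior emerges from repeated small corrections, which is the essence of how these systems learn from data.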
The advent of deep learning and associated AI techniques also highlights the challenges companies face when using these powerful technologies in their services. Rather than the alarmist scenario in which computers eventually gain human-like consciousness and wreak havoc on humanity, the rise of the technology poses subtler and more complex dilemmas, such as how it should or shouldn’t be used to fight fake news, and what that choice means for society.
Companies like Facebook, Google (GOOG), Amazon (AMZN), and IBM (IBM) have been poaching top AI scholars and putting them in corporate research positions where they are allowed to publish academic papers and refine their studies. As these businesses gain some of the biggest minds in the field, some technologists have expressed concern that they have the potential to steer AI research in ways that only benefit their bottom line.
As LeCun explained, Facebook’s AI research team now has 75 to 80 people spread across Silicon Valley, New York, Paris, Seattle, and Israel. Additionally, Facebook has a separate AI team whose job is to incorporate its various AI technologies into its core products. That team, led by Joaquin Candela, has roughly 140 members.
With so much AI talent available, some technology analysts and media observers have claimed that Facebook could have found ways to keep false information from spreading during the presidential campaign, similar to how the company screens out spam.
During the press event, LeCun explained that while his research team is making strides in improving AI systems, it is ultimately up to Facebook’s product team to choose whether to incorporate them into various services.
Like many tech companies, Facebook divides its workforce by separating its core technologists and engineers from those in charge of turning the infrastructure the technologists build into usable products. The two departments work closely with each other, but each team member has a clearly defined role. While that division may be familiar to Silicon Valley workers, the general public that routinely interacts with Facebook’s services may not see a distinction.
LeCun did not want to specifically address the newsfeed controversy and the role of AI tech in it, but he did say that Facebook “probably” has some technology that it could use to alleviate some of the fake news problem.
During another talk later in the week at Carnegie Mellon University, when an attendee asked about fake news, LeCun elaborated on a possible AI-powered filter. “I think, frankly, the problem is not completely solvable but, you know, there’s probably things that can be done automatically using machine learning techniques, deep learning, natural language understanding methods,” he said. “Some of that has already been done in prototypes.”
LeCun then said that the dilemma is “a question of policy and product design, and that’s completely outside of my territory—I’m a researcher.”
Ultimately, the choice of whether to use AI to combat the rise of fake news will be up to Zuckerberg and those in charge of Facebook’s products. So far, Zuckerberg has said little publicly beyond his recent comment that Facebook has made “progress, and we will continue to work on this to improve further.”
Additionally, as news site Quartz recently reported, some prominent AI experts, like Salesforce research head Richard Socher, have questioned whether it’s possible for AI technologists to build workable, large-scale systems that could screen for lies or deceptive news.
College students recently showed off an algorithm that could help filter fake news, but it’s unlikely to be as sophisticated as the systems envisioned by AI researchers. The students’ algorithm takes into account a “website’s reputation” to judge whether a story is false.
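A reputation-based check of this kind can be sketched in a few lines. The sketch below is a minimal illustration of the general idea, not the students’ actual code; the domain names and scores are invented placeholders, and a real system would draw on a curated database of sources.

```python
# Minimal sketch of a reputation-based credibility check: judge a story
# solely by the reputation score of the site that published it.

from urllib.parse import urlparse

# Hypothetical reputation scores (0.0 = untrustworthy, 1.0 = trustworthy).
DOMAIN_REPUTATION = {
    "established-paper.example": 0.9,
    "known-hoax-site.example": 0.1,
}

def story_looks_credible(url, threshold=0.5, default=0.5):
    """Return True if the story's domain meets the reputation threshold."""
    domain = urlparse(url).netloc
    score = DOMAIN_REPUTATION.get(domain, default)  # unknown sites are neutral
    return score >= threshold

print(story_looks_credible("https://established-paper.example/politics/story"))  # → True
print(story_looks_credible("https://known-hoax-site.example/pope-endorses"))     # → False
```

The obvious weakness is visible right in the code: the judgment rests entirely on who maintains the reputation list, which is exactly the editorial role Facebook would rather not take on.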
Facebook, in contrast, would likely want to avoid deciding which websites and news organizations are trustworthy. Doing so would open the door to accusations that the company censors news it doesn’t like and would further erode its argument that it is merely a media bystander rather than a media outlet.