Social Media’s Plan to Fight Online Trolls Gets a Reality Check: Eye on A.I.

July 30, 2019, 2:38 PM UTC

Facebook, Twitter, and YouTube executives say artificial intelligence is the solution to policing the deluge of offensive content that users post on their services. The technology, the executives argue, could quickly scan all incoming content and remove anything it considers hateful, harassing, or violent.

But the reality is likely more modest. Although machine learning can help to automatically scrub posts, it’s unlikely to entirely solve the problem, said Tim Winchcomb, the head of technology strategy for Cambridge Consultants.

It’s a topic he’s given a lot of thought to. His firm recently published a study on behalf of the United Kingdom’s Office of Communications, or Ofcom, about the use of A.I. for content moderation. The U.K. government is considering regulating tech companies that distribute online content, and Ofcom sought the information to help “inform the wider debate on online harms,” Winchcomb said.

It’s a “common perception that A.I. will solve all of our problems,” he told Fortune. But even today’s most sophisticated A.I. systems aren’t up to the job.

A.I. simply isn’t smart enough to understand the nuances of language. For instance, current natural language processing technology, a type of A.I. that helps computers understand language, struggles to detect sarcasm or to recognize that a joke one person finds funny may be hurtful to another.

Additionally, A.I. has problems understanding cultural differences, which limits how well it works when used overseas. The Cambridge study cites a 2006 YouTube video that the Thai government said was offensive because it showed Thailand’s king with someone else’s feet on his head, a gesture considered deeply insulting in Thai culture.

Where A.I. can greatly help is in assisting human content moderators rather than replacing them. For example, technology could identify inappropriate sections in videos so that content moderators don’t have to waste their time watching entire clips.

A number of recent media reports have chronicled the psychological toll on content moderators who police user posts on major social media services. A recent Washington Post article described how one Twitter content moderator in the Philippines “had no ability to blur or minimize the images, which are about the size of a postcard, or to toggle to a different screen for a mental breather.”

A.I. may not be the answer to removing all of the Internet’s filth. But it could help the human moderators who are tasked with that mammoth job.

Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com


Story updated July 31 at 10:30 AM PT to emphasize that the UK government is considering regulation.

EYE ON A.I. NEWS

About those robotaxis…General Motors and its Cruise self-driving automobile unit have delayed plans to debut robotaxis in 2019, Cruise CEO Dan Ammann said in a Medium post. Ammann wrote, “In order to reach the level of performance and safety validation required to deploy a fully driverless service in San Francisco, we will be significantly increasing our testing and validation miles over the balance of this year, which has the effect of carrying the timing of fully driverless deployment beyond the end of the year.”

New York state to probe A.I. New York Governor Andrew Cuomo created a temporary state commission tasked with examining A.I.’s pros and cons and how to possibly regulate the technology. “This new commission will look closely at how these rapidly evolving technologies are functioning and report back on how we can optimize use to benefit New Yorkers and our economy,” Cuomo said in a statement.

Are you sad? Despite the potential for using A.I. to analyze people’s emotions by scanning their faces, a new study indicates that it’s “very hard to use facial expressions alone to accurately tell how someone is feeling,” MIT Technology Review reports. The report said, “People do plenty of other things when they’re happy or sad too, and a smile can be wry or ironic.”

Let’s talk wireless communications. The National Science Foundation hopes to gather researchers, federal employees, and companies to discuss how A.I. technologies can benefit the field of wireless communications, reported Nextgov, a news site that covers the federal government. “Some of the topics NSF anticipates discussing include the roles that AI can play in spectrum allocation, network monitoring and security, as well as the ways the tech can be applied to boost spectrum sharing,” the Nextgov report said.

ALL ABOARD

Fortune’s Aaron Pressman explores autonomous trains and their impact on the railroad industry, which faces potential competition from trucking companies that are exploring self-driving technologies. As analyst Matt Elkott explains, “I don’t think there’s much debate: This is where the railroads need to go.”

EYE ON A.I. HIRES

Joveo, a recruitment software company, hired Prajakt Deshpande as vice president of engineering. Deshpande was previously a vice president of software development at Oracle, where he led a team of 170 software engineers.

AliveCor has picked Priya Abani to be the health technology startup’s CEO, CNBC reported. Abani was previously the general manager and director for Amazon’s voice-activated digital assistant Alexa.

EYE ON A.I. RESEARCH

Analyzing A.I.’s carbon footprint. Researchers from the Allen Institute for Artificial Intelligence, founded by Microsoft co-founder Paul Allen, published a paper examining deep learning’s toll on the environment, due to the tremendous amounts of computing power required to crunch data. The researchers hope to encourage others to invest in A.I. techniques that are more environmentally friendly. Tech-focused news site GeekWire talked to the Allen Institute’s chief executive, Oren Etzioni, who discussed the paper in more detail. “If you make AI greener, it’s not just cheaper, but it opens the way toward more efficient techniques to further push the state of the art,” Etzioni told the publication.

FORTUNE ON A.I.

Softbank Takes the Lead on A.I. – By Alan Murray

How OpenAI, Founded to Keep Powerful A.I. Out of Corporate Hands, Got Into Bed With Microsoft – By Jeremy Kahn

Proposed Federal Law Adds to the Backlash Against Facial-Recognition Technology – By David Z. Morris

BRAIN FOOD

The A.I. ambitions of China’s second-largest insurer. Fortune’s Clay Chandler reports on how Chinese insurance giant Ping An uses machine learning to analyze vast troves of data to more efficiently process claims. Although Chinese companies like Baidu, Alibaba, and Tencent (sometimes referred to as the BAT companies) receive more acclaim for their A.I. projects, Ping An executives believe their company’s decades of collecting insurance-related data provides a unique advantage. Chandler writes: “But Ping An executives argue that quality matters more than quantity. The data its businesses collect is richer than that gleaned by the BAT, they claim, because it involves big-ticket transactions relating to health, wealth, and property—among the most meaningful decisions in customers’ lives.”
