
Why Salesforce’s Chief Scientist Shut Down an AI Project That Identifies Human Emotions

November 29, 2018, 7:38 PM UTC

While Salesforce has gone as far as to create an AI that can critique employees, it may be too early for the firm to pursue a machine learning project aimed at reading human emotions.

Within the software company, a team of employees once sought to create an emotion-classifying AI, Salesforce’s Chief Scientist Richard Socher said at Fortune’s Global Tech Forum in Guangzhou Thursday. When Socher prodded the team about what data would be used to teach the AI, the employees revealed plans to use stock images.

“I already knew it was not going to work,” Socher said. “There will be very few examples of old people being happy, so the AI will probably say every old person is grumpy and say people of certain races are more angry. So basically we shut down that project for now because we really need to think about all different classes of people, communities, and minorities that are going to be impacted by the data.”

AI makes decisions based on what humans train it to do. Accordingly, developers have struggled with uprooting human biases in the realm of AI. Amazon reportedly spent years on a recruitment system that would automate the hiring process, but shuttered those plans in October when the technology began amplifying the prejudices of its human makers. Namely, the AI began favoring male candidates over female candidates.

“If you make hiring decisions based on AI, and you have potentially racist or sexist hiring managers somewhere in your company, then their bias will be part of the data set that will be picked up by an AI algorithm,” said Socher. “I could’ve kind of predicted that that would not work.”

The bottom line: the question of how to train biases out of AI is not easily answered. And perhaps AI should not be applied at all in certain situations.

“Sometimes it’s hard to get the bias out of your training data and you need to think about algorithmic ways to make sure the biases don’t get amplified,” he said. “And even then that might not be enough, and you really need to rethink whether you should employ certain features.”