
Eye on A.I.

An Offer You Can’t Refuse: How A.I. Is Poised to Transform Negotiations

Negotiation is a fundamental business skill—one that is inextricably bound up with human emotion and psychology as much as economic calculus. Perhaps one day, robot lawyers will go forth to negotiate on our behalf. But, in the meantime, at least one negotiation expert thinks A.I. can be used today to improve humans’ negotiation tactics.

Jared Curhan is a professor at M.I.T.’s Sloan School of Management who specializes in negotiation. His particular focus is on what researchers call “subjective value.” That’s a fancy term for how people feel about the outcome of a negotiation: Do they think they got a fair deal or got screwed?

Negotiation theorists once dismissed subjective value as irrelevant. But Curhan and others have shown that it is a good predictor of economic payoffs—especially when parties will have to negotiate with one another more than once over the course of a relationship.

Like most negotiation trainers, Curhan teaches students through role-playing exercises. For the past several years, he’s been using software that can run these games and provide immediate feedback to students on their performance in the form of computer-generated graphs and charts showing how well they’ve done in both economic and subjective terms. Now he’s adding A.I. to that mix.

Curhan has partnered with iMotions, a Danish company that makes software for tracking human emotions. Part of the iMotions platform is powered by Affectiva, a company spun out of M.I.T.’s famed Media Lab that uses computer vision to identify emotions from facial expressions. Curhan is using iMotions' software to analyze videos of negotiation simulations. “We are trying to isolate particular emotions that have influence on the outcome of negotiations,” Curhan says. “How do you make a positive impression on your counterparty and how does that relate to your facial expression?”

Right now, Curhan is just gathering data. “We don’t know which emotional expressions are favorable and which aren’t,” he says. He hopes to find out—and then teach students how to alter their facial expressions to avoid negative outcomes. He even envisions a day when an A.I.-enabled alarm could warn a negotiator when a counterparty’s facial expressions indicate a bargaining session is about to go south.

That’s just one example of A.I.’s implications for negotiation. Another is IBM’s “Project Debater,” an A.I. that can analyze a proposition and automatically highlight the best arguments for and against it, factoring in both logical and emotional impact. These technologies could transform business negotiations—and probably will do so long before we have robot lawyers.

Jeremy Kahn
@jeremyakahn

This story has been updated to reflect the correct name of the M.I.T. business school. It is the Sloan School of Management, not the Sloan School of Business, as originally stated. It has also been updated to clarify that Curhan uses software from iMotions and to make clear the relationship between iMotions and Affectiva's facial recognition technology.

A.I. IN THE NEWS

Elon Musk and Jack Ma debate A.I. The two billionaire entrepreneurs had a rambling on-stage discussion at the World A.I. Conference in China. SpaceX and Tesla founder Musk repeated his oft-voiced concerns about humanity’s inability to cope with a future super-human artificial general intelligence. Ma, the founder of Alibaba, said he was more optimistic, predicting A.I. could usher in a 12-hour work week, according to Fox Business. Ma came off looking more “grounded” compared to the jet-lagged, spacey Musk, according to Bloomberg.

Apple apologizes over monitoring of Siri sound clips. The iPhone maker apologized to users for secretly allowing contractors to review recordings of their conversations with its digital assistant Siri, the Associated Press reported. The human reviewers helped perfect the natural language processing abilities of the assistant. The news about Siri follows similar revelations about digital assistants from Amazon, Microsoft, and Google. 

Publishers sue Audible over A.I.-powered subtitles. Seven U.S. publishing houses are suing audiobook company Audible over its plan to use machine learning to automatically transcribe audiobooks into text, according to the Associated Press. The publishers claim the feature violates copyright law.

Waymo releases huge self-driving data set. Waymo, the self-driving car company owned by Google parent Alphabet, has released a dataset of 1,000 driving segments, each capturing 20 seconds of drive time from different vehicles on different roads. Drago Anguelov, Waymo’s principal scientist, told TechCrunch it was making the data available to academics so they could conduct research that might be helpful to the cause of making self-driving cars a reality. “It is not an admission in any way that we have problems solving these issues,” Anguelov said. “But there is always room for improvement.”

PENTAGON WANTS TUNNELS FOR ROBOT TRAINING

The Pentagon’s Defense Advanced Research Projects Agency (DARPA) says it is looking for a private underground tunnel system in which it can train autonomous drones and robots. The agency, which works on deep tech with potential military applications, said in a federal contract request posted on August 20 that it was seeking “information on university-owned or commercially-managed underground urban tunnels” that it could use to test technology designed to “rapidly map, navigate and search unknown complex subterranean environments to locate objects of interest.” DARPA may already have a particular underground labyrinth in mind for the work, since it gave interested tunnel owners just 10 days to respond.

EYE ON A.I. TALENT

GreyOrange, a Singapore-based software and robotics company, has hired Jeff Cashman as senior vice president and global chief operations officer. Cashman has most recently been CEO of Ally Commerce, an e-commerce service provider for brand manufacturers. He will be based out of GreyOrange’s U.S. headquarters in Atlanta. 

Enlitic, a San Francisco company developing A.I. to assist doctors in making diagnoses, announced a slew of new hires for its leadership team, including Darren Scott as chief financial officer; Leo Gurvich as senior vice president of business development; Mark Freudenberg as vice president of research and development; Jordan Francis as creative director; Adam Odeh as senior manager of regulatory affairs and quality assurance; and John Ordoña as head of global communications.

EYE ON A.I. RESEARCH

Environment for testing reinforcement learning algorithms. DeepMind, the London-based A.I. research company owned by Google parent Alphabet, released OpenSpiel, a public framework for creating reinforcement learning algorithms and then benchmarking them on more than 20 different kinds of games.

A neural network can predict your biological age and your sex from your heartbeat. Just don’t ask why. Researchers at the Mayo Clinic College of Medicine and Science, based in Minnesota, found they could train a neural network to predict a person’s biological age with 90% accuracy and their sex with 72% accuracy by analyzing an electrocardiogram. The difference between biological age and actual age can be used to identify at-risk patients who need medical intervention, the researchers said in a paper published in the journal Circulation: Arrhythmia and Electrophysiology. But due to the black-box nature of the algorithm, they said they could not tell for certain exactly how the neural network made its determinations.

A better way to train chatbots. Researchers at the University of Lincoln in the U.K. and Samsung Research’s Artificial Intelligence Group in Seoul, South Korea, have hit upon what they say is a better way to train chatbots to exhibit more human-like dialogue. Unlike previous efforts to conquer this problem, the researchers did not use human-labelled data or human judges to determine whether the computer-generated dialogue was “human-like.” (Although they did conduct a follow-up test using human judges.) Instead, they used only unlabelled text data from human-to-human dialogues. In a paper published on Arxiv, the researchers describe training a group of 100 deep neural networks that compete against one another to generate dialogue that most closely matches dialogue taken from real human-to-human chats.

FORTUNE ON A.I.

Watch: Georgia Tech’s Robot MacGyver Can Fashion Tools From Spare Parts – By Lisa Marie Segarra

Amazon’s Ring Partners With 400 Police Forces, Adding Fuel to an Already Raging Privacy Debate – By Kevin Kelleher

Deloitte’s Plan for Fighting Employee Burnout: Let AI Take Over the Dreaded HR and IT Tasks – By Anne Fischer

BRAIN FOOD

Maybe killer robots and drones aren’t as bad an idea as you think. Lucas Kunce, a Marine who served in Iraq and Afghanistan, wrote a provocative opinion piece in The New York Times last week chastising tech workers at companies such as Google and Microsoft who have objected to their employers working with the U.S. military, especially on applications involving artificial intelligence. While many human rights campaigners and A.I. researchers have raised deep concerns about incorporating A.I. into autonomous weapons systems, Kunce argues that A.I. could very well help save civilian lives—as well as the lives of U.S. soldiers—on the battlefield. “We need tools that enhance situational awareness, provide information that overcomes fear and fatigue, and enable fast, effective and precise combat decisions for both commanders and individuals,” Kunce writes. “For me, it’s hard to understand why tech employees would not want to help their fellow Americans survive on the battlefield and accomplish their missions in the safest and least damaging way possible.”