By Stacey Higginbotham
December 23, 2015

Computers are performing jobs once reserved for humans, such as translation, personal assistance, and hotel bell service. In fact, they are getting so good at some things that it's starting to feel a little creepy.

But next year, expect that ick factor to multiply. Computers will be able to figure out whether you’re happy, sad, or angry by merely watching the tiny involuntary muscle movements in your face.

Andrew Moore, dean of Carnegie Mellon's computer science school, says a combination of better algorithms and high-definition cameras means this technology will make its way out of the labs and into stores, web sites, and even hospitals. The idea is to help humans interact with computers, which are otherwise oblivious to the subtleties that come with being able to read a person's expressions.

For example, the emotion-reading technology could help a web site customer interact with an automated customer service agent. The computer would understand whether the customer was actually getting the help they needed or just typing that they did.

People talking face to face inevitably make minor movements that signal interest or confusion and help the conversation along, says Moore. But until recently, computers have lacked the ability to read facial emotions and have instead focused only on written or spoken words, using natural language recognition technology.
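Systems like the ones Moore describes generally work by mapping the positions of facial landmarks (mouth corners, brows, eyelids) to expression labels. The following is a purely illustrative sketch of that idea; the landmark names, thresholds, and labels are invented for this example, and real systems learn such mappings from thousands of labeled face images rather than hand-written rules.

```python
# Toy sketch: label an expression from facial landmark displacements.
# Landmark names and thresholds here are hypothetical, not from any real system.

def classify_expression(landmarks):
    """landmarks: dict of named displacements (in pixels) from a neutral face."""
    mouth_lift = landmarks.get("mouth_corner_lift", 0.0)
    brow_drop = landmarks.get("inner_brow_drop", 0.0)
    mouth_open = landmarks.get("mouth_open", 0.0)

    if mouth_lift > 4.0:                      # corners pulled upward
        return "happy"
    if brow_drop > 3.0 and mouth_open < 1.0:  # lowered brows, closed mouth
        return "angry"
    if mouth_lift < -3.0:                     # corners pulled downward
        return "sad"
    return "neutral"

print(classify_expression({"mouth_corner_lift": 6.2}))  # happy
print(classify_expression({"inner_brow_drop": 4.1}))    # angry
```

In practice, the hard part is the step this sketch skips: reliably locating those landmarks in a live video frame, which is where the better algorithms and high-definition cameras come in.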

Moore says adding an extra layer of understanding will improve interactions tremendously. That same dynamic will play out in healthcare settings, he predicts, where having the ability to recognize emotions could help a human or even a robot caregiver recognize pain or depression in patients who may not even disclose it.


In Japan, robots are already being used to interact with patients. In the U.S., policymakers are experimenting with telemedicine to bring rural areas better access to doctors.

However, it’s also a little unsettling to imagine using artificial intelligence in certain environments. The technology creates the possibility for misuse or privacy violations.

In a retail store, for example, emotion-reading computers could be used to identify whether shoppers or employees are nervous and flag them as potential shoplifters. They might also pick out a customer who really covets a particular Gucci purse or Armani coat and send a salesperson over for a hard sell.


There are also concerns about using artificial intelligence as a high-tech version of a lie detector test, reading micro-expressions to tell whether someone is telling the truth. This might be used at borders, in law enforcement settings, or even in job interviews.

I downloaded emotion-testing software called IntraFace on my phone to test its accuracy. A marketing video says it is being used in augmented reality and to add makeup to people's faces in retail settings, so shoppers can try a color before they buy. The company advertises that its technology could be used to stop distracted driving by detecting whether people are looking at the road. Another scenario is a classroom, where it could catch a student sneaking glances at his phone instead of at the teacher.
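Distracted-driving detection of this sort usually comes down to estimating where a driver's gaze is pointed and flagging glances away from the road that last too long. Here is a loose illustration of that logic; the angle threshold and frame window are invented for the example, not taken from IntraFace or any shipping product.

```python
# Hypothetical distracted-driving check: flag a driver whose gaze stays
# off the road for too many consecutive video frames. Thresholds are invented.

def is_distracted(gaze_angles_deg, max_angle=20.0, max_frames=15):
    """gaze_angles_deg: per-frame horizontal gaze angle; 0 = straight ahead."""
    off_road_streak = 0
    for angle in gaze_angles_deg:
        if abs(angle) > max_angle:
            off_road_streak += 1          # gaze is off the road this frame
            if off_road_streak >= max_frames:
                return True               # sustained glance away: distracted
        else:
            off_road_streak = 0           # eyes back on the road: reset
    return False

print(is_distracted([0, 5, 30, 35] + [40] * 15))  # True: one long glance away
print(is_distracted([0, 25, 0, 25, 0]))           # False: only brief glances
```

The streak counter matters: quick mirror checks are normal driving, so a useful detector has to distinguish brief glances from a sustained look at a phone.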

It’s all a bit too surveillance-state for me, but as Moore says, it is coming. I found that the IntraFace software worked fine at recognizing five different emotions when I made faces at my phone’s camera, even when the expressions weren’t terribly exaggerated.

Like any technology tool, it can be used for good or ill. Next year, we’ll see how computers do as they start understanding how we feel.
