Photograph by Gabe Souza — Portland Press Herald via Getty Images
By Derrick Harris
September 11, 2015

Apple (AAPL) got a lot of attention earlier this week for the new voice recognition features in the upcoming version of Apple TV, the company’s set-top box for streaming movies, television and more. But there’s one group of consumers who probably didn’t pay much attention: those who can’t hear.

Although being able to control devices and the movies you watch by saying “Show me Brad Pitt movies” or “Turn up the volume” might seem great to most TV viewers, it’s not very useful if you can’t speak, or can’t speak well enough for a voice recognition system to understand you. However, new research published this week on arXiv, an online archive of research papers managed by Cornell University, claims a significant improvement in computers’ ability to understand American Sign Language.

A trio of researchers from the University of California, San Diego, claims its new deep-learning-based system can accurately recognize signed letters and numbers more than 85% of the time when exposed to new test subjects. The ability to recognize signs from new people, rather than only from people whose signs were used to train the system, is important if sign-language recognition is to become a commercially viable technology like voice recognition.
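For the technically curious, here is a rough sketch, in Python, of what “tested on new subjects” means in practice. This is not the UCSD team’s actual system (theirs is a deep-learning model trained on images of hands); it only shows how a dataset can be split so that the people in the test set never appear in the training set. The data, variable names, and simple classifier below are all invented for illustration.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.neural_network import MLPClassifier

# Stand-in data: 1,000 example signs, 64 features each, 36 classes
# (26 letters plus 10 digits), performed by 10 different signers.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = rng.integers(0, 36, size=1000)
subjects = rng.integers(0, 10, size=1000)

# GroupShuffleSplit keeps all of a given signer's examples on one side
# of the split, so the test signers are genuinely unseen during training.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=subjects))

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X[train_idx], y[train_idx])
print("Accuracy on unseen signers:", clf.score(X[test_idx], y[test_idx]))

Scoring against signers the model has never seen is a much harder test than scoring against the same people it was trained on, which is why the 85% figure is notable.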

While consumers often love the idea of interacting with their phones, TVs, and other connected devices, it’s not clear they also love the idea of having to train the software inside to recognize what they’re saying. We want our devices to be smart, but we also want them to be easy.

Personally speaking, while I love asking my Amazon Echo connected speaker to play music or add something to my shopping list, I long ago gave up opening the app and verifying that it heard me correctly. The same goes for providing information about Amazon (AMZN) purchases, reviewing Netflix (NFLX) movies, answering “Was this useful?” questions on the Google Now (GOOG) personal assistant app, and other opportunities to provide feedback.

And while the UCSD researchers acknowledge that the field of sign language recognition is still in its early days (we’re talking about letters and numbers here, not words or sentences), it’s not hard to envision consumer devices that recognize signs. Computers, smart TVs, and robots already have cameras, processors, and web connections built in, so much of the infrastructure is already in place. In fact, other researchers, including some at Microsoft (MSFT), are working to make the company’s Kinect motion sensors recognize sign language.

The paper out of UCSD also notes that truly understanding sign language requires reading factors such as facial expressions and body posture, and those are also areas where plenty of research is happening. For example, there’s a startup out of MIT called Affectiva that specializes in artificial intelligence systems for recognizing facial emotion. And last year, a team of New York University researchers that included Facebook (FB) AI boss Yann LeCun published research on advanced techniques for detecting human poses.

When all of this work starts coming together, we’ll really start seeing some amazing things.
