“I never forget a face,” some people like to boast. It’s a claim that looks quainter by the day as artificial intelligence research continues to advance. Some computers, it turns out, never forget 260 million faces.
Last week, a trio of Google (GOOG) researchers published a paper on a new artificial intelligence system dubbed FaceNet that they claim represents the most accurate approach yet to recognizing human faces. FaceNet achieved nearly 100 percent accuracy on a popular facial-recognition dataset called Labeled Faces in the Wild, which includes more than 13,000 pictures of faces from across the web. Even when tested against a massive dataset of more than 260 million images, FaceNet still performed with better than 86 percent accuracy.
Researchers benchmarking their facial-recognition systems against Labeled Faces in the Wild are testing for what they call “verification.” Essentially, they’re measuring how good the algorithms are at determining whether two images are of the same person.
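In an embedding-based system like FaceNet, verification reduces to a distance check: each face image is mapped to a numeric vector, and two faces count as the same person when their vectors are close enough together. The toy vectors and threshold below are invented for illustration; a real system would get its embeddings from a trained neural network.

```python
def same_person(emb_a, emb_b, threshold=1.1):
    """Verification: decide whether two face embeddings belong to the
    same person by thresholding their squared L2 distance.
    The embeddings and threshold here are illustrative stand-ins."""
    dist = sum((x - y) ** 2 for x, y in zip(emb_a, emb_b))
    return dist < threshold

# Toy embeddings standing in for a real network's output:
a = [0.1, 0.9, 0.2]
b = [0.12, 0.88, 0.21]  # nearly identical vector -> same person
c = [0.9, 0.1, 0.7]     # far-away vector -> different person
print(same_person(a, b))  # True
print(same_person(a, c))  # False
```

The benchmark then boils down to counting how often that yes/no answer matches the ground-truth labels for pairs of images.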
In December, a team of Chinese researchers also claimed better than 99 percent accuracy on the dataset. Last year, Facebook researchers published a paper boasting better than 97 percent accuracy. The Facebook (FB) paper cites research suggesting that humans analyzing images in the Labeled Faces dataset achieve only 97.5 percent accuracy.
However, the approach Google’s researchers took goes beyond simply verifying whether two faces are the same. Their system can also put a name to a face (classic facial recognition) and even present collections of faces that look the most similar or the most distinct.
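The put-a-name-to-a-face step can be sketched the same way: compare a query embedding against a labeled gallery and return the nearest match. The names, vectors, and distance metric below are illustrative assumptions, not Google’s actual pipeline.

```python
def identify(query, gallery):
    """Classic recognition: return the gallery name whose stored
    embedding is nearest (by squared L2 distance) to the query.
    Gallery entries are hypothetical examples."""
    def sq_dist(emb):
        return sum((x - y) ** 2 for x, y in zip(emb, query))
    return min(gallery, key=lambda name: sq_dist(gallery[name]))

gallery = {"alice": [0.1, 0.9, 0.2], "bob": [0.9, 0.1, 0.7]}
print(identify([0.15, 0.85, 0.25], gallery))  # alice
```

Sorting a collection by those same distances is what lets a system surface the most similar, or most distinct, faces.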
This is all just research, but it points to a near future where the types of crime-fighting, or surveillance-enhancing, computers we often see on network television and in blockbuster movies will be much more attainable. Or perhaps a world where online dating is even simpler (and shallower) than swiping left or right on Tinder.
Have a thing for Brad Pitt circa 1998? Here are the 500 profiles that look the most like him.
At first we’ll see systems like Google’s FaceNet and Facebook’s aforementioned system (dubbed “DeepFace”) make their way onto those companies’ web platforms. They will make it easier, or more automatic, for users to tag photos and search for people, because the algorithms will know who’s in a picture even when it isn’t labeled. These types of systems will also make it easier for web companies to analyze their users’ social networks and to assess global trends and celebrity popularity based on who’s appearing in pictures.
Though Google and Facebook’s advances in facial recognition are relatively new, computer systems like this can be found all around us today. They incorporate an artificial intelligence technique called deep learning, which has proven remarkably effective at so-called machine perception tasks such as recognizing objects (by some metrics, machines are now better at this than are people), recognizing voices, and understanding the content of written text.
Aside from Google and Facebook, companies including Microsoft (MSFT), Baidu, and Yahoo (YHOO) are also investing heavily in deep learning research. The algorithms already power everyday features such as voice control on smartphones, Skype Translate, predictive text-messaging applications, and advanced image-searching. (If you have images uploaded to a Google+ account, go ahead and search them for specific objects.) Spotify and Netflix (NFLX) are investigating deep learning to power smarter media recommendations. PayPal (EBAY) is using it to fight fraud.
There are also several technology startups using deep learning to analyze medical images in real time, and to provide capabilities such as text analysis, computer vision, and voice recognition as cloud computing services. Twitter, Pinterest, Dropbox, Yahoo, and Google have all acquired deep learning startups in recent years. And IBM (IBM) just bought a Denver-based startup called AlchemyAPI to help make its Watson system smarter and bolster its new Bluemix cloud platform. (The idea: Developers can easily connect mobile and web applications to cloud services and therefore build smart applications without ever studying the complex computer science that underpins artificial intelligence.)
That’s not all. As consumer robots, driverless cars, and smart homes become real, deep learning will be there, too, providing the eyes, ears, and some of the brains for our new toys. DARPA, the U.S. Department of Defense’s research agency, is also investigating how deep learning techniques might help it make sense of the streams of communications crossing intelligence networks every day.
Something tells me it’s looking at Google’s FaceNet and getting pretty excited, too.
Derrick Harris (@derrickharris) is a freelance writer.