Getting machines to act more like humans
An international team of data scientists has announced the very latest in machine learning: a program that learns… programs.
That may not sound impressive at first blush, but making a machine that can learn based on a single example is something that’s been extremely hard to do in the world of artificial intelligence. Machines don’t learn like humans—not as fast, and not as well. And even with this research, they still can’t.
While Facebook has programs that can recognize almost any face, and “deep speech” programs can recognize the speech in almost any sound, those well-learned machine tasks took a long time and many examples to perfect.
Humans, on the other hand, are very good at learning from a single example: “this is a cat” or “that is a coffeemaker.” After just one introduction, we can pick out other examples easily in daily life. But machines typically need to see many more examples to learn about something new.
Thursday’s new research, the cover story in Science, comes one step closer to getting machines to learn new things in a one-shot manner, more like humans do: classifying whole swaths of data based on an understanding of how an image is created, instead of relying on tens of thousands of examples and fancy statistics to “learn” what makes a match.
In the research, scientists from NYU, MIT and the University of Toronto used a new model to get machines to learn to recognize handwritten letters from languages around the world based on just a few examples. They’re calling it Bayesian Program Learning (BPL), named for the Bayesian approach to statistics, which relies on inference rather than repetition or frequency.
“It learns about how people typically draw,” says study co-author Brenden Lake, a cognitive scientist at NYU who developed the algorithm.
The new model is unlike the deep learning models many machines use today, which often drill down to the pixel level of a letter or figure. A machine today might recognize a certain collection of pixels. But in the BPL computer model, the algorithm is classifying figures more conceptually, focusing on the basic structure of a letter that can be seen again and again.
The “computer assumes that characters are made up of strokes,” Lake says, just like a person attempting to copy a letter would guess there were multiple brushstrokes. In tests, the machine copied new letters just as convincingly as humans did.
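The strokes idea can be illustrated with a toy sketch. This is not the authors’ actual BPL model, which uses full probabilistic inference over pen trajectories; the stroke names, the overlap score, and the example characters below are all made up for illustration. The point is the shift in representation: each character is stored as a tiny “program” of stroke primitives, and a new drawing is classified by asking which known program best explains it, even with only one stored example per character.

```python
# Toy sketch (not the real BPL model): a character as a short "program"
# of stroke primitives, classified by structural overlap rather than pixels.

def stroke_overlap(a, b):
    """Fraction of aligned positions where two stroke sequences agree."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def classify(drawing, known_characters):
    """Return the known character whose stroke program best explains the drawing."""
    return max(known_characters,
               key=lambda name: stroke_overlap(drawing, known_characters[name]))

# One stored example per character -- "one-shot" learning in this toy sense.
known = {
    "A": ["line", "line", "line"],
    "C": ["arc"],
    "J": ["line", "hook"],
}

print(classify(["line", "hook"], known))  # J
```

A pixel-based learner would need many labeled images to pick up the same distinction; here a single stroke-level description per character is enough, because the comparison happens at the level of how the letter is drawn.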
In time, the learning technique that finds its own ways to draw letters could do more practical things like read sign language, improve speech recognition software in smartphones and maybe eventually model military action plans.
The hope is that, in tandem with deep learning techniques, the new model will be able to recognize and sort more efficiently and intelligently. It’s just writing alphabet soup for now, but it’s one step closer to making “are you a computer?” a more difficult question to crack.