Computers today are smart. Recently, they've mastered one of the most difficult board games. They've proven themselves capable of reading human expressions well enough to pick a "most smiled at" ad out of the Super Bowl. And one clever bot recently scored 59% on its 8th grade science test, barely missing the mark for a passing grade.

But despite recent gains in artificial intelligence, humans still best machines at one key task: learning how to see.

A new report published this week in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) points out that for all our human flaws, we’re still much better than machines at learning how to recognize images, especially the smallest and blurriest.

In the study, a crowd of 14,000 people put their sight to the test on simple images of airplanes, ships, eagles, and bicycles. The pictures were small and hard to see: part of the handlebars on a bike, just the nose of a fly, or a very blurry plane wing. The researchers found that even the best computers couldn't come close to humans at identifying these image fragments. In one of the tests, computer recognition rates for minimal images were up to ten times worse than people's.

But the point of the study wasn’t to measure man against machine. Instead, the researchers want to use the way people learn to see to build better machines.


Lead study author Shimon Ullman, a computer science professor at the Weizmann Institute of Science, says with more discerning, human-style vision, computers could become better assistants. Then they wouldn’t just be able to ID tiny image fragments like us; they could learn to analyze more of the world around them like we do, too.

Driverless cars might be able to pick up on more subtle cues, such as determining if a neighboring driver is paying attention to the road. More intelligent computer sight could also help build personal assistant apps and robots that can discern our faces well enough to better understand our needs.

As humans, we learn to quickly pick out the most critical identifying features of an image, something computers can't quite grasp. Then we start processing, sending that information to regions of the brain that analyze it in more complicated and abstract ways, based on experience and memory. The problem is that, unlike humans, computers rely on a straightforward learning system.

“They have a task, a fixed goal: taking the image in,” Ullman says.

Programmers have already found some impressive ways for computers to learn to process images quickly and efficiently. One of the best is called "deep learning," and it's used by everyone from Facebook to the State Department. Computers deep learn by being exposed to thousands upon thousands of labeled examples. Then bots can identify faces, recognize voices, or pick out human emotions at warp speed.
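The idea of learning from thousands of examples can be sketched in a few lines of code. This is a toy illustration only, not the study's method or any company's system: real deep learning stacks many layers of artificial neurons, while the single-layer learner below just nudges its weights each time it misclassifies one of its many training "images" (here, tiny made-up 3x3 pixel grids).

```python
import random

def train(examples, epochs=20, lr=0.1):
    """Learn weights from many (pixels, label) pairs by correcting mistakes."""
    w = [0.0] * 9  # one weight per pixel of a 3x3 image
    b = 0.0        # bias term
    for _ in range(epochs):
        random.shuffle(examples)
        for pixels, label in examples:
            pred = 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else 0
            err = label - pred  # 0 when correct; +1 or -1 when wrong
            w = [wi + lr * err * p for wi, p in zip(w, pixels)]
            b += lr * err
    return w, b

def predict(w, b, pixels):
    return 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else 0

# Two prototype "images": a vertical bar (label 1) and a horizontal bar (label 0).
vertical   = [0, 1, 0,
              0, 1, 0,
              0, 1, 0]
horizontal = [0, 0, 0,
              1, 1, 1,
              0, 0, 0]

# Thousands of copies stand in for the huge labeled datasets deep learning needs.
random.seed(0)
data = [(list(vertical), 1) for _ in range(1000)] + \
       [(list(horizontal), 0) for _ in range(1000)]

w, b = train(data)
```

After training, `predict(w, b, vertical)` returns 1 and `predict(w, b, horizontal)` returns 0. A child, by contrast, needs no such repetition, which is Ullman's point below.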

But deep learning is completely different from the way a baby learns to see. “You do not take a child outside and identify point after point,” Ullman says. “People just get it in a single glance.”


Ullman thinks that is a key reason why we’re so much better at recognizing tiny image fragments or blurry, twisted letters on a computer screen.

Of course, before computers can take advantage of the way humans see, scientists need to understand more about how exactly the brain’s vision system develops. And Ullman says that is still one big mystery hiding behind our eyes.