Why Google’s Artificial Intelligence Confused a Turtle for a Rifle

November 8, 2017, 1:00 PM UTC

When artificial intelligence works smoothly, computers are able to spot cats in photographs at lightning-fast speeds. When it goes wrong, they can confuse images of turtles with rifles.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have discovered how to trick Google’s (GOOG) software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software thought it was a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.

The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.

For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.

All the changes the MIT researchers made to the turtle image were imperceptible to the human eye, explained Anish Athalye, an MIT researcher and Ph.D. candidate in computer science who co-led the experiment.

After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modified image would still trick Google’s computers. The researchers then took photos and video of the 3D-printed turtle, and fed that data into Google’s image-recognition software.

Sure enough, Google’s software thought the turtle was a rifle.


The MIT team released an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for peer review at an upcoming AI conference.

Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns. If researchers feed enough images of cats into these neural networks, they learn to recognize patterns in those images so they can eventually spot felines in photos without human help.
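To make that description concrete, here is a minimal sketch of a tiny image classifier learning from labeled examples. It assumes Python with PyTorch purely for illustration; the article does not say which tools Google or the MIT team used, and the network, labels, and data below are stand-ins rather than anyone's actual system.

```python
# Minimal sketch (assumed: PyTorch) of how an image classifier learns from
# labeled examples: guess, measure the error, and adjust internal weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """A small convolutional network mapping 32x32 RGB images to class scores."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        return self.fc(x.flatten(1))                 # scores: "cat" vs. "not cat"

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: in practice this would be thousands of labeled photos.
images = torch.rand(8, 3, 32, 32)       # batch of 8 placeholder images
labels = torch.randint(0, 2, (8,))      # placeholder labels: 0 = not cat, 1 = cat

# One training step: the network nudges its weights so predictions move
# closer to the labels. Repeated over many images, it learns the patterns.
logits = model(images)
loss = F.cross_entropy(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```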

But these neural networks can sometimes stumble when they are fed certain types of pictures, such as those with bad lighting or obstructed objects. The way these neural networks work is still somewhat mysterious, Athalye explained, and researchers still don’t fully understand why they recognize some images accurately and fail on others.

The MIT team’s algorithm created what are known as adversarial examples, essentially computer-manipulated images crafted to fool software that recognizes objects. While the turtle image may resemble a reptile to humans, the algorithm morphed it so that it shared characteristics with images of rifles. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm are retained in the physical world.
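The paper describes the MIT team’s own method; the sketch below is only a rough illustration of the general shape of such an attack, under assumed tools: PyTorch, a pretrained torchvision ResNet standing in for Google’s classifier, and an ImageNet class index commonly listed as “assault rifle.” The image is nudged toward the target label while the loss is averaged over random lighting and noise, so the change tends to survive real-world conditions yet stays too small for people to notice.

```python
# Hedged sketch of a targeted adversarial perturbation averaged over random
# transformations. Not the MIT team's code; the victim model, class index,
# and transformation choices are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)       # stand-in for a turtle photo
target = torch.tensor([413])             # ImageNet index often listed as "assault rifle" (assumption)
delta = torch.zeros_like(image, requires_grad=True)   # the adversarial change
epsilon, step_size = 0.03, 0.005         # keep the change too small to notice

for _ in range(20):
    total_loss = 0.0
    # Average the loss over several random "physical" conditions so the
    # perturbation is not tied to one exact lighting setup or viewpoint.
    for _ in range(4):
        brightness = 0.8 + 0.4 * torch.rand(1)              # random lighting change
        noisy = (image + delta) * brightness + 0.01 * torch.randn_like(image)
        logits = model(noisy.clamp(0, 1))
        total_loss = total_loss + F.cross_entropy(logits, target)
    total_loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()               # step toward the target class
        delta.clamp_(-epsilon, epsilon)                      # bound the perturbation
    delta.grad.zero_()

prediction = model((image + delta).clamp(0, 1)).argmax(dim=1)
print("predicted class index:", prediction.item())
```

Averaging over random transformations is what distinguishes an attack that works only on a single digital file from one that, like the turtle, keeps fooling the classifier after being printed and photographed from different angles.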

Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft (MSFT) and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook (FB) and Amazon (AMZN) would likely blunder as well, he speculates, because those systems are built in similar ways.

In addition to airport scanners, home security systems that rely on deep learning to recognize certain images may also be vulnerable to being fooled, Athalye explained.

Consider cameras that are increasingly set up to record only when they notice movement. To avoid being tripped by innocuous activity like cars driving by, the cameras could be trained to ignore automobiles. Criminals, however, could take advantage by wearing t-shirts specially designed to fool the computers into thinking they see trucks instead of people. If so, burglars could easily bypass the security system.

Of course, this is all speculation, Athalye concedes. But given how often hacking occurs, it’s a threat worth taking seriously. Athalye said he wants to test his idea and eventually make “adversarial t-shirts” that have the ability to “mess up a security camera.”

Google and other companies like Facebook are aware that hackers are trying to figure out ways to spoof their systems. For years, Google has been studying the kind of threats that Athalye and his MIT team produced. A Google spokesperson declined to comment on the MIT project, but pointed to two recent Google research papers that highlight the company’s work on combating the adversarial techniques.

“There are a lot of smart people working hard to make classifiers [like Google’s software] more robust,” Athalye said.
