How transformative can it be when you teach a computer to read images? Well, we’re getting an early glimpse of that this morning with the release of a JAMA paper by a team of Google researchers who trained a deep convolutional neural network to read retinal fundus photographs, images of the back of the human eye.
Varun Gulshan, Lily Peng, and colleagues used a deep learning algorithm to study 128,175 retinal images drawn from patients in the U.S. and India, each of which was later reviewed for diabetic retinopathy (DR) by members of a group of 54 U.S.-licensed ophthalmologists. DR is a condition in which the tiny blood vessels in the light-sensitive tissue that lines the back of the eye (the retina) deteriorate. Chronic high blood sugar can damage those vessels, causing them to bleed or leak fluid, which distorts vision and can lead to blindness—a risk of profound concern to the roughly 415 million people with diabetes around the world.
“The nearly 130,000 images in this development set were graded by at least three ophthalmologists—sometimes up to seven if it was a tricky case—and then we trained an algorithm based upon those grades and those images,” says Google’s Lily Peng, a physician scientist trained at UCSF, who is the corresponding author on the JAMA paper. Then the team tested the model’s ability to identify and properly grade DR on two “clinical validation sets” of retinal scans (11,711 images in all) that had already been expertly characterized by eye specialists.
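For readers curious what that training step looks like in practice, here is a minimal sketch in Python with TensorFlow/Keras of the general recipe the paper describes: take a standard convolutional network and fine-tune it to predict the grade the ophthalmologist panel assigned to each image. The directory layout, the five-grade labeling scheme, and the hyperparameters below are illustrative assumptions, not details from the study.

```python
# A minimal sketch (not the authors' code) of training a convolutional network
# to map retinal images to DR grades assigned by an ophthalmologist panel.
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception-style input resolution (assumption)
NUM_CLASSES = 5         # e.g., DR severity grades 0-4 (assumption)

# Load labeled images from a hypothetical directory layout: one folder per grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_images/val", image_size=IMG_SIZE, batch_size=32)

# Start from an ImageNet-pretrained Inception-v3 backbone and add a small head
# that predicts the panel's grade for each image.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```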
Overall, the Google algorithm detected DR on the test images with both high sensitivity and specificity. “We basically showed that we are on par with U.S. board-certified ophthalmologists who had graded the validation sets,” says Peng.
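To make those two terms concrete: sensitivity is the share of truly referable cases the model flags, and specificity is the share of healthy eyes it correctly leaves alone. The short Python sketch below computes both from a toy set of made-up panel grades and model predictions; the numbers are illustrative, not figures from the JAMA paper.

```python
# Illustrative calculation of sensitivity and specificity on made-up data.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # caught cases
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed cases
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct all-clears
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical panel grades (1 = referable DR) vs. model predictions.
panel = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
model = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(panel, model)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75 and 0.83
```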
Why is this important? The vision loss that diabetic retinopathy causes can largely be prevented if the condition is caught and treated early, but relatively few people around the world have access to expert screening. That’s where Google’s algorithm comes in. It can conceivably be put to use almost anywhere, or at least anywhere a smartphone or tablet can work.
“While it may take acres of computer farms to actually train the model,” says Peng, “the model itself—once trained—is actually not that big, and can fit on even a mobile device.” That, in fact, is one of the things the Google team is now working on—in concert with some hospitals in India.
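As a rough illustration of that last point, here is one way a trained Keras model can be shrunk for on-device use, via TensorFlow Lite’s converter with default weight quantization. This is a generic sketch, not a description of the Google team’s actual tooling, and the file names are hypothetical.

```python
# A sketch of exporting a trained model for mobile deployment with TensorFlow Lite.
import tensorflow as tf

model = tf.keras.models.load_model("dr_grader.keras")   # hypothetical saved model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]     # enable weight quantization
tflite_bytes = converter.convert()
with open("dr_grader.tflite", "wb") as f:                # small file suitable for a phone
    f.write(tflite_bytes)
```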
Who knows? Maybe in the next generation of smartphones, diabetics will be able to scan their own eyes for an early warning sign.