Researchers at Stanford University have created an artificial intelligence algorithm that can tell if your rash is a cancer concern or a harmless pockmark – and it can do it about as well as a human doctor.
The tech is fueled by deep learning and a database of nearly 130,000 high-quality medical images. By feeding pictures of lesions, bumps, moles, and other skin abnormalities into the system – along with information about whether or not each one actually indicated skin cancer – the Stanford team created a platform that can assess images it has never seen before (with an assist from a Google-produced image classification program).
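The approach described above is a form of transfer learning: reuse a network already trained on a large image collection as a frozen feature extractor, then train only a small classifier head on the labeled medical examples. The sketch below is a toy illustration of that recipe, not the Stanford team's actual code – a fixed random projection stands in for the pretrained Google model, and synthetic vectors stand in for skin images; all names and data are hypothetical.

```python
import numpy as np

# Toy sketch of transfer learning (illustrative only): frozen "pretrained"
# layers extract features; only a small classifier head is trained.

rng = np.random.default_rng(0)
n_samples, input_dim, n_features = 200, 8, 32

# Stand-in for the pretrained network's frozen layers:
# a fixed random ReLU projection that is never updated.
W_frozen = rng.normal(size=(input_dim, n_features))

def extract_features(x):
    """Map raw inputs through the frozen layers."""
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic stand-in data: label 1 ("concerning") when the first two
# input coordinates sum to a positive value, else 0 ("harmless").
images = rng.normal(size=(n_samples, input_dim))
labels = (images[:, 0] + images[:, 1] > 0).astype(float)

# Trainable head: a linear classifier fit on the frozen features
# (least squares here, as a simple stand-in for gradient training).
feats = np.hstack([extract_features(images), np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(feats, 2.0 * labels - 1.0, rcond=None)

preds = (feats @ coef > 0).astype(float)
accuracy = (preds == labels).mean()
print(f"toy training accuracy: {accuracy:.2f}")
```

Because only the small head is trained, this recipe works even when the labeled dataset is far smaller than what training a deep network from scratch would require – the motivation for starting from a pretrained image classifier.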
“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” said Sebastian Thrun, a professor at the Stanford Artificial Intelligence Laboratory. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'”
Just how on-point is the tech? Overall, the program's accuracy was about 91% compared with human doctors, according to the researchers. And while it isn't yet ready to replace a human eye, the Stanford team hopes the technology can eventually be used to develop smartphone apps that make initial screenings possible at home.
There are still plenty of bridges to cross before the algorithm gets to that point. For one, the images used in the initial tests were medical-grade and far higher quality than any picture that can be taken with a smartphone.
But Stanford is far from the only institution chasing AI technology that can at the very least assist (if not supplant) medical professionals who assess medical imagery. Both GE Healthcare and IBM Watson Health have launched programs that use artificial intelligence to read X-rays, MRIs, and other screening imagery.