Tesla (TSLA) CEO Elon Musk is confident his company’s all-electric, autonomous vehicles will be on the road in two years.
Meanwhile, Google (GOOG) parent company Alphabet is planning big things for its own driverless-car unit, which is set to become an independent business in 2016.
As you might expect, though, driverless technologies are expensive, which could affect not only the final cost of these cars but also how long it takes to bring them to market.
The technology that serves as the eyes of a driverless car, its sensors, is typically a mix of high-end laser beams, radar technology, and cameras. Some of Google’s most recent driverless sensors are priced at $80,000 apiece, while less advanced models run about $8,000 per part.
But a group of researchers at the University of Cambridge think they may have found an inexpensive fix for some of those pricey self-driving car parts. The U.K. team has developed a new sensor system that relies on camera technology readily found in any smartphone.
The technology, called SegNet, can quickly and accurately “see” what’s happening outside any vehicle by scanning its surroundings and sorting features into recognizable objects, everything from telephone poles and buildings to other cars on the road.
The new system relies on deep learning, which means teaching the system to tell objects apart by feeding it tens of thousands of different images. The system itself doesn’t have a price tag just yet, but because it requires only a camera and some extra computer processing, researchers say it could cost no more than $100, instead of the thousands spent today.
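To give a rough sense of what “sorting features into recognizable objects” means in practice, here is a toy sketch of per-pixel scene labeling, the task SegNet performs. This is not SegNet’s actual architecture (a real system is a deep encoder-decoder network trained on thousands of labeled images); the random weights, tiny image, and class names below are invented purely for illustration:

```python
import numpy as np

# Toy illustration of per-pixel semantic labeling.
# A single per-pixel matrix multiply stands in for a trained deep network;
# all weights and class names here are made up for demonstration.

CLASSES = ["road", "car", "building"]

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))                      # H x W x RGB image
weights = rng.standard_normal((3, len(CLASSES)))   # maps RGB -> class scores

scores = image @ weights        # H x W x num_classes: a score per class, per pixel
labels = scores.argmax(axis=-1) # pick the highest-scoring class for each pixel

print(labels.shape)  # (4, 4): one class label per pixel
```

The output assigns every pixel in the image to one of the known categories, which is how a camera-only system can distinguish road surface from cars and buildings.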
“That’s the beauty of this system, it requires computing power and a camera,” says Alex Kendall, a PhD engineering student at Cambridge and one of the inventors of the SegNet system. Kendall says the technology is almost road-ready in Cambridge, where it was created, but it might need more work before being used in snowy or desert environments, conditions it hasn’t yet been trained to recognize.
For now, SegNet will probably stick to less ambitious navigation tasks, such as helping robot vacuums navigate around the house or preventing domestic robots from getting lost.
When I tried out the technology at home, using photos taken of my street, SegNet couldn’t process the images. But, with a little more time, it could one day help cars see well enough to autonomously pick up and drop off passengers anywhere in the world.