[Image: Audi’s virtual cockpit]

A spectrum of 'possible': Companies in an array of industries are working to shatter the status quo. Here are two.

By Kirsten Korosec
December 29, 2014

Like Oz in the Emerald City, engineers at Nvidia (NVDA), based in Santa Clara, Calif., are working to give the next generation of automobiles a brain capable of understanding the world around it. “The car is rapidly going to go from the most stupid electronic device a consumer owns to the most powerful supercomputer a consumer will ever own—way more powerful and sophisticated than your phone, tablet, or PC,” says Rob Csongor, vice president and general manager of Nvidia’s automotive business. A car’s ability to detect and understand its surroundings will require a staggering amount of processing power as automakers add ever more sensors for driving assistance, infotainment, and navigation.

A videogame darling, Nvidia decided last year to merge its graphics and computing architectures. The result? Its Tegra K1 mobile processor, which is pushing the automotive industry closer to the holy grail of self-driving cars. The chip is the basis of Audi’s zFAS piloted driving system, a laptop-size module that can manage highly autonomous driving features such as traffic-jam assist at speeds of up to 40 mph. (It will be in production models in about 4½ years, Audi says.) An Nvidia processor also powers the 2016 Audi TT’s virtual cockpit, an all-digital instrument cluster that puts controls for radio, HVAC, and navigation in the driver’s line of sight.

Nvidia’s next supercomputer on a chip? Just around the corner. That means a smarter car is too.

This story is from the January 2015 issue of Fortune.

