FORTUNE — When Tom Cruise swiped and pinched his way through a computer interface in 2002’s Minority Report, moviegoers were wowed. Here was a compelling glimpse at the future of the computer interface, one no longer tethered to the dreary, decades-old mouse. Ever since, the public — especially tech reporters — has waited for real-world technology to catch up. And while Microsoft’s (MSFT) wildly popular Kinect controller has made great strides, a new $70 peripheral called the Leap, due out this winter, is poised to take things further.
Leap Motion, co-founded by Michael Buckwald and David Holz, makes a candy-bar-sized device that connects to a computer via USB and uses infrared light to track objects — arms, hands, fingers, pens, even chopsticks — within roughly arm’s length. The key to the Leap is accuracy: the company says it is 100 times more precise than the current version of the Kinect sensor. Whereas the Kinect recognizes hand movements, Leap’s creators argue their device is so fine-tuned that it registers the slightest finger quiver with no perceptible delay. The possibilities for such advanced (and inexpensive) technology could be endless, but at the very least it means quick, hyper-accurate navigation across desktop applications. At launch, the Leap will be compatible with Windows 7 and 8 as well as Mac OS X, allowing basic navigation through the operating system and web surfing. The company is also releasing an SDK and giving sensors to select developers who want to build for the platform.
Buckwald and Holz showed off their tech and let me get some hands-on time. Our first dive was Fruit Ninja, the casual game in which players slice and dice fruit. On mobile devices, users swipe their fingers across the screen, sending fruit chunks flying about, which is satisfying enough. With the Leap, you wave a finger — or an item like a pen or marker — around like a sword. With the sensitivity cranked all the way up, I hacked and slashed fruit merely by moving my finger a few centimeters in different directions, all while most of my hand and wrist remained stationary. The onscreen cursor, a representation of my finger, moved smoothly, swiftly, and accurately, with no detectable delay.
Next was Google (GOOG) Earth, which, again, was completely navigable by hand. Holz pulled up a map of a mountainous region. Because I’d grown used to gestures on my smartphone, tablet, and MacBook Pro, I tried zooming in on different areas of geography by pinching. But with the Leap, you’re not interacting with a 2-D surface; you’re working in a 3-D space. So while actions like moving left or right, up or down, and zooming in or out might each require a separate gesture on a phone or tablet, with this device they can be accomplished in one. To illustrate, Holz showed me how: rotating my finger slightly while moving it forward and up. It felt intuitive, to a degree I’d never experienced before.
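For the programming-minded, the idea Holz demonstrated — several separate 2-D gestures collapsing into one continuous 3-D motion — can be sketched in a few lines of Python. Everything below (the function name, the axes, the units) is a hypothetical illustration of the concept, not Leap Motion’s actual SDK:

```python
# Hypothetical sketch: one 3-D fingertip position drives pan AND zoom at once,
# where a touchscreen would need separate swipe and pinch gestures.
# Axes and units are illustrative assumptions, not the Leap API.

def navigate(x, y, z, prev):
    """Translate a fingertip move (in mm) into pan/zoom deltas."""
    dx, dy, dz = x - prev[0], y - prev[1], z - prev[2]
    return {
        "pan_right": dx,  # left/right motion pans horizontally
        "pan_up": dy,     # up/down motion pans vertically
        "zoom_in": -dz,   # pushing toward the screen zooms in
    }

# A single motion (forward and up) yields pan and zoom together:
deltas = navigate(0.0, 25.0, -5.0, prev=(0.0, 0.0, 0.0))
```

Here `deltas` comes back as `{"pan_right": 0.0, "pan_up": 25.0, "zoom_in": 5.0}` — the map pans up and zooms in from one gesture.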
It’s too early to tell what developers will drum up, or whether the device will find the kind of massive adoption the Kinect has enjoyed among consumers, hackers, and even robotics scientists. But if the Leap lives up to expectations when the final version arrives, it’s certainly got a shot at revolutionizing the desktop.