Algorithms that anticipate what you'll do next could lead to safer cars and smarter homes.
You’ve been on the road for hours, trying to make good time back home after an exhausting weekend in Vegas. With each blink your eyes stay closed a little longer. Your head nods and then snaps back up. Before it can happen again, possibly resulting in a deadly crash, your car begins to slow down and an alarm blares to jolt you awake.
This future is getting closer to reality, according to a trio of researchers from Cornell University and Stanford University, who have created an artificial intelligence system that can predict with high accuracy what the driver of a car will do next. Applied broadly, this type of research could improve everything from the safety of our cars to the usefulness of our robots.
A paper the researchers published on Thursday explains how their system can anticipate a driver’s next maneuver with better than 90% accuracy. Previous efforts, they say, topped out at just over 77%. The team used a method of artificial intelligence called deep learning, which is remarkably good at recognizing patterns in data and currently powers commercial applications ranging from voice recognition to computer vision.
To achieve such high accuracy, the researchers analyzed numerous sources of contextual data as 10 drivers navigated a combined 1,180 miles of road. Sources included cameras, GPS signals, and vehicular data such as speed. What really pushed the results over the top, though, was using camera images to track the positions of drivers’ heads as they steered their vehicles.
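The article doesn’t detail the model itself, but the basic idea of fusing several contextual signals into one prediction can be illustrated with a toy sketch. The code below is purely hypothetical: it concatenates made-up head-pose, speed, and heading features into one vector and applies a softmax over random weights, where a real system like Brain4Cars would use a trained deep-learning model on actual sensor streams.

```python
import numpy as np

# Hypothetical set of maneuvers a system like this might predict
MANEUVERS = ["left_turn", "right_turn",
             "lane_change_left", "lane_change_right", "go_straight"]

def softmax(z):
    # Convert raw scores into probabilities that sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_maneuver(head_pose, speed_mps, heading_delta, weights, bias):
    # Fuse the contextual signals into one feature vector -- a crude
    # stand-in for the multi-sensor fusion the article describes
    x = np.concatenate([head_pose, [speed_mps / 30.0, heading_delta]])
    return softmax(weights @ x + bias)

# Random weights for illustration only; a real system would learn these
rng = np.random.default_rng(0)
W = rng.normal(size=(len(MANEUVERS), 5))
b = np.zeros(len(MANEUVERS))

# head_pose: invented yaw/pitch/roll values; speed in m/s; heading change in radians
probs = predict_maneuver(np.array([0.8, -0.1, 0.2]), 25.0, 0.05, W, b)
print(MANEUVERS[int(np.argmax(probs))])
```

The point of the sketch is the fusion step: head pose alone is ambiguous, but combined with speed and heading it narrows down what the driver is likely to do next.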
The team has dubbed its research Brain4Cars, which is apt given the data with which it has been working. It’s pretty easy to imagine the benefits of an intelligent car, which could theoretically do everything from waking up dozing drivers to preparing the braking or steering systems in anticipation of a sharp turn.
However, this type of anticipatory artificial intelligence system could also prove valuable as robots become more common in our homes and workplaces, and as consumers eye intelligent appliances and other housewares. Think about a robot that could notice when the dog is about to pee on the floor, or a living room that knows from experience just how dim you want the lights at any given time.
Indeed, one of the researchers on the Brain4Cars project is Ashutosh Saxena, director of the Cornell- and Stanford-led RoboBrain project that aims to teach robots everyday facts such as what a person holding a phone looks like, or how to make tea. Saxena is also one of the founders of a stealth-mode startup called Brain of Things, which isn’t sharing many details, but has the tagline “Brain for Your Connected Home.”
When I spoke with Saxena about RoboBrain earlier this year, he explained, “The ability to handle variations is what enables these robots to go out into the world and actually be useful.”
In other words, we want our artificial intelligence algorithms—whether they’re in our robots, our cars or our appliances—to act less like automatons and more like engaged participants in whatever it is we’re doing.