Tesla’s semi-autonomous car technology has helped make the company a $54 billion success story. But the system still has flaws, according to a Stanford University robotics researcher, who says it shows a disturbing inability to recognize cyclists.
Heather Knight is an expert in human-robot interfaces with a PhD from the Robotics Institute at Carnegie Mellon University, and is doing post-doctoral research in social robotics. She published an essay about Tesla’s self-driving tech on the blogging platform Medium, entitled “Tesla Autopilot Review: Bikers Will Die.”
In the post, Knight described how she and a colleague from Stanford took a Tesla (TSLA) for a drive to see if its self-driving technology lived up to their expectations about human-robot interaction.
Although Knight said Tesla’s autopilot worked well in most situations as the two drove around the streets and highways of southern California, she found the car’s “agnostic behavior around bicyclists to be frightening.” She also worried that some drivers would ignore the system’s limitations and by doing so “put biker lives at risk.”
In particular, Knight said the Tesla’s “situation awareness display,” which helps human drivers understand what the car’s self-driving technology can “see” in the environment around it, was fairly accurate with cars, but not even close to accurate with cyclists.
Get Data Sheet, Fortune’s technology newsletter.
The Stanford researcher said: “I’d estimate that Autopilot classified ~30% of other cars, and 1% of bicyclists. Not being able to classify objects doesn’t mean the Tesla doesn’t see that something is there, but given the lives at stake, we recommend that people NEVER USE TESLA AUTOPILOT AROUND BICYCLISTS!”
There appear to have been no reported incidents where Tesla cars driving on autopilot came into contact with cyclists, although there was an accident in Norway in 2016 where authorities said a Tesla failed to properly recognize a motorcycle and injured the rider.
In a Wall Street Journal article last year about user perceptions of Tesla’s autopilot feature, one driver said his car actually prevented a potential accident with a cyclist who was texting and swerved in front of him suddenly.
In responses to readers of her piece, both on Medium and on Twitter, Knight said that she likes Tesla, but was trying to point out that its system is still what she called a “human in the loop” technology because of such inaccuracies in detection.
Tesla makes it clear to drivers that its autopilot system is not fully autonomous, and that they should keep their hands on the steering wheel and pay attention to their surroundings at all times. If a driver takes their hands off the wheel for a sustained period of time, the autopilot disengages and can’t be re-enabled unless the car is stopped.
Despite its flaws, Knight gave Tesla’s Situation Awareness Display an A+ rating and said it was her favorite feature of the car, because “it helps the driver understand shortcomings of the car, i.e., its perception sucks.”
Robot-based systems in general, she added, “would benefit from communicating their limitations to people.”
Some of those who responded to Knight’s essay said that it didn’t provide enough evidence of flaws in the detection system, and that the headline on the article was unnecessarily inflammatory about the risks that autonomous driving poses to cyclists.
Knight said that her concern is that people will see the so-called “autopilot” feature as being fully autonomous at all times, and will fail to pay attention to their surroundings, “as we see in the fatal crashes so far.” Even if detection rates improve, she said, it’s important that human drivers “have the correct mental model of the car, complete with its shortcomings.”
Fortune has asked Tesla for a comment on Knight’s conclusions, and will update this post if and when one is provided. The headline on this post was edited at 3:17 on May 29 to more accurately reflect the nature of Tesla’s semi-autonomous driving technology.