The artificial intelligence that governs self-driving cars is going to make an unsympathetic defendant, according to Peter Norvig, the director of research at Google, speaking at the Rutberg Future Mobile conference in San Francisco on Wednesday. Asked about the legal implications of a self-driving car having to choose between a bad option and a worse option on the road, with someone getting hurt either way, Norvig explained the challenges the car's program—or any AI—would face in court.
He said a self-driving car has 17 cameras and is covered in sensors, so it can't claim it didn't see what was happening. And because it's a robot programmed to analyze a situation, it can't claim it didn't know what it was thinking—the code is right there. "But if the defense is that 999 out of 1,000 this was the right thing to do in this case, the lawyer is going to say we're not trying those cases, we're trying this case," Norvig said. And at that point, the case is likely lost.
A possible solution to this challenge might be building a model of code that is more transparent and better at explaining the decisions that artificially intelligent programs make, said Adam Cheyer, a co-founder of the AI startup Viv and one of the creators of Siri. But the panelists also seemed aware that they were fighting a losing battle against the public perception of AI as an "other" that would somehow cause problems for humanity.
The panelists' frustration was obvious as they fielded a series of questions ranging from AI controlling weapons systems to whether creating an AI would be the last thing humanity ever accomplishes (because it would somehow lead to our downfall). Babak Hodjat, co-founder of Sentient Technologies, tried hard to be diplomatic, pointing out that AI is already in use in many of the systems and software we rely on today, including weapons and fighter planes. "AI is going to be in a lot of software … and technology might be the reason humanity ends, but AI is not the reason why humanity ends."
Norvig was a bit more blunt. “I think that’s just typecasting. Look, when you have two actors try out for a role and one’s a human and one’s a robot, the evil one is always the robot.”
The panelists spent the rest of their time trying to explain the limits of AI, and to stress that they weren't trying to replicate human consciousness in silicon. "What do we want to duplicate people for?" said Norvig. "We already know how to do that."