Robotic helpers have the potential to usher in a new age of leisure, but as Jerry Kaplan warns in his new book, Humans Need Not Apply, the transition won’t always be smooth. In the excerpt below, he navigates the uncharted legal waters of making the punishment fit the criminal.
Consider the following scenario. Imagine that you purchase a personal home robot that is capable of taking the elevator down from your tenth-floor Greenwich Village apartment, crossing the street, and purchasing a caramel flan Frappuccino for you from Starbucks. (This isn’t entirely science fiction. A prototype of just such a robot was recently demonstrated at Stanford.) In addition to being preprogrammed with a variety of general behavioral principles, the robot is able to hone its navigational and social skills by watching the behavior of the people it encounters. After all, customs and practices vary from place to place. It might be appropriate to shake hands with females you meet in New York, but it is forbidden in Iran unless you are related. Unbeknownst to you, your robot recently witnessed a rare event, a Good Samaritan subduing a purse snatcher until the police arrived, earning the approval and admiration of a burgeoning crowd of spectators.
On the way to fetch your coffee, your robot witnesses a man grappling with a woman, then taking her purse, over her apparent objections. It infers that a crime is taking place and, consistent with its general programming and its specific experience, it wrestles the man to the ground and detains him while calling 911.
When the police arrive, the man explains that he and his wife were merely having an animated tussle over the car keys to determine who was going to drive. His wife confirms the story. Oops! They turn their attention to your well-intentioned but hapless robot, which dutifully explains that it was merely acting on your instructions to fetch a drink. Incensed, the two insist that the police arrest you for assault.
Your defense attorney’s argument is simple: you didn’t do it, the robot did. You purchased the robot in good-faith reliance on its design and were using it in accordance with its intended purpose, so the company that sold you the robot should be held responsible for the incident.
But that company also has lawyers, and they successfully argue that they have met all reasonable standards of product liability and acted with due diligence and care. They point out that in millions of hours of use, this is the first event of its kind. From their perspective, this was simply a regrettable but unpredictable freak accident, no different from an autonomous vehicle driving into a sinkhole that suddenly appears.
Perplexed at this liability gap, the judge looks for precedents. He finds one in the antebellum “Slave Codes” (as they were called) of the seventeenth and eighteenth centuries. Prior to the Civil War, various states and jurisdictions maintained a separate (and very unequal) body of laws to govern the treatment, legal status, and responsibilities of slaves. For the most part, these codes characterized slaves as property having limited rights and protections, particularly from their owners. While we certainly believe today that southern plantation slaves were conscious human beings, deserving of the same basic human rights as all others, it’s worth noting that not everyone at that time agreed with this assessment. Regardless, these codes invariably held the slaves, not the owners, legally culpable for their crimes and subjected them to punishment.
The judge in this case sees a parallel between the status of a slave—who is legal “property” but is also capable of making his or her own independent decisions—and your robot. He decides that the appropriate punishment in this case is that the robot’s memory will be erased, to expunge its purse-snatching experience, and, as reparation for the crime, the robot will be consigned to the injured party’s custody for a period of twelve months.
The victim of the crime feels this is an acceptable resolution and will be happy to have a free, obedient servant for the next year. You are unhappy that you will temporarily lose the use of your robot and then have to retrain it, but it beats going to prison for assault.
And thus begins a new trail of precedents and body of law.
To recap, there’s no requirement in our laws that a moral agent be human or conscious, as the BP Deepwater Horizon case demonstrates. The relevant entity must merely be capable of recognizing the moral consequences of its actions and be able to act independently. Recall that synthetic intellects are commonly equipped with machine learning programs that develop unique internal representations based on the examples in the training set. I use this pile of jargon to avoid the danger inherent in using anthropomorphic language, but only because we don’t yet have the common words to describe these concepts any other way. Otherwise, I would simply say that synthetic intellects think and act based on their own experience, which in this case your robot clearly did. It just happened to be wrong. It may have been acting as your legal agent, but since you didn’t know what it was doing, even as its principal you aren’t responsible—it is.
There’s only one problem. If you accept that a synthetic intellect can commit a crime, how on earth do you discipline it? The judge in this case effectively punished the robot’s owner and compensated the victim, but did he mete out justice to the robot?
For guidance, consider how corporations are treated. Obviously, you can’t punish a corporation the same way you can a human. You can’t sentence a corporation to ten years in prison or take away its right to vote. In the words of Edward Thurlow, lord chancellor of England at the turn of the nineteenth century, “Did you ever expect a corporation to have a conscience, when it has no soul to be damned, and no body to be kicked?”
The key here is that humans, corporations, and synthetic intellects all have one thing in common: a purpose or goal, at least within the context of the crime. A human may commit a crime for a variety of reasons, such as for material gain, to stay out of prison (paradoxically), or to eliminate a romantic competitor. And the punishments we mete out relate to those goals. We may deprive the perpetrator of life (capital punishment), liberty (incarceration), or the ability to pursue happiness (a restraining order, for instance).
When a corporation commits a crime, we don’t lock it away. Instead, we levy fines. Because the goal of a corporation is to make money, at least most of the time, this is a significant deterrent to bad behavior. We can also void its contracts, exclude it from markets, or make its actions subject to external oversight, as is sometimes the case in antitrust litigation. In the extreme, we can deprive it of life (that is, close it down).
So we’ve already accepted the concept that not all perpetrators should suffer the same consequences. Not only should the punishment fit the crime, the punishment should fit the criminal. Punishing a synthetic intellect requires interfering with its ability to achieve its goals. This may not carry the emotional impact it would for a human, but it does serve two important purposes of our legal system: deterrence and rehabilitation. A synthetic intellect, rationally programmed to pursue its goals, will alter its behavior when it encounters obstacles to achieving its objectives. This may be as simple as seeing other instances of itself held to account for their mistakes.
Note that, in contrast to most mass-produced artifacts, instances of synthetic intellects need not be equivalent, for the same reason that identical twins are not the same person. Each may learn from its own unique experiences and draw its own idiosyncratic conclusions, as our fictional robot did in the assault case.
Excerpted from Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan, published by Yale University Press. Copyright © 2015 by Jerry Kaplan. Reprinted by permission of the publisher.