Commentary

The biggest ethical dilemma with driverless cars

By Bill Buchanan
October 29, 2015, 10:45 AM ET
A Google self-driving car on display at Google headquarters in Mountain View, California, on September 25, 2012. Photograph by Justin Sullivan — Getty Images

We make decisions every day based on risk – perhaps running across a road to catch a bus if the road is quiet, but not if it’s busy. Sometimes these decisions must be made in an instant, in the face of dire circumstances: a child runs out in front of your car, but there are other dangers to either side, say a cat and a cliff. How do you decide? Do you risk your own safety to protect that of others?

Now that self-driving cars are here – with no quick or sure way for a human to override the controls, or perhaps no way at all – car manufacturers face an algorithmic ethical dilemma. On-board computers in cars already park for us and drive on cruise control, and they could take control in safety-critical situations. But that means they will be faced with the difficult choices that sometimes confront human drivers.

How do you program a computer’s ethical calculus? Here are a few options:

  • Calculate the lowest number of injuries for each possible outcome, and take that route. Every living being would be treated the same.
  • Calculate the lowest number of injuries for children for each possible outcome, and take that route.
  • Allocate values of 20 for each human, four for a cat, two for a dog, and one for a horse. Then calculate the total score for each possible impact, and take the route with the lowest score. So a big group of dogs would outweigh two cats, and the car would act to save the dogs (sketched in code just below).
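
To make that last option concrete, here is a minimal sketch of how such a weighted calculus might look, assuming the weights from the example above; the route names, casualty counts, and the choice of Python are purely illustrative, not anything a manufacturer actually uses:

    # Illustrative sketch only: the weights come from the example above;
    # the routes and casualty counts are hypothetical.
    HARM_WEIGHTS = {"human": 20, "cat": 4, "dog": 2, "horse": 1}

    def route_score(casualties):
        """Sum the weighted cost of everything a given route would hit."""
        return sum(HARM_WEIGHTS[kind] * count for kind, count in casualties.items())

    def choose_route(routes):
        """Pick the route with the lowest total harm score."""
        return min(routes, key=lambda name: route_score(routes[name]))

    # A big group of dogs outweighs two cats, so the car takes the route
    # that hits the cats and saves the dogs.
    routes = {
        "straight": {"dog": 5},  # score 10
        "swerve": {"cat": 2},    # score 8
    }
    print(choose_route(routes))  # -> swerve

The unsettling part is not the arithmetic, which is trivial, but choosing the weights in the first place.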

What if the car also included its driver and passengers in this assessment, with the implication that those outside the car would sometimes score more highly than those within it? Who would willingly climb aboard a car programmed to sacrifice them if need be?

A recent study by Jean-Francois Bonnefon of the Toulouse School of Economics in France suggested that there is no right or wrong answer to these questions. The research used several hundred workers recruited through Amazon’s Mechanical Turk to gauge opinions on scenarios in which one or more pedestrians could be saved if a car swerved into a barrier, killing its driver; the number of pedestrians who could be saved was then varied. Bonnefon found that most people agreed with the principle of programming cars to minimize the death toll, but were less certain when it came to the exact details of the scenarios. They were keen for others to use self-driving cars programmed this way, but less keen to ride in one themselves. So people often feel a utilitarian instinct to save the lives of others and sacrifice the car’s occupant – except when that occupant is them.

Intelligent machines
Science fiction writers have had plenty of scope to imagine robots taking over the world (Terminator and many others), or societies where everything that’s said is recorded and analyzed (as in Orwell’s 1984). It has taken a while to reach this point, but many staples of science fiction are in the process of becoming mainstream science and technology. The internet and cloud computing have provided the platform for leaps of progress that pit artificial intelligence against human judgment.

In Stanley Kubrick’s seminal film 2001: A Space Odyssey, we see hints of a future where computers make decisions about the priorities of their mission, with the ship’s computer HAL saying: “This mission is too important for me to allow you to jeopardize it.” Machine intelligence is appearing in our devices, from phones to cars. Intel predicts that there will be 152 million connected cars by 2020, generating over 11 petabytes of data every year – enough to fill more than 40,000 hard disks of 250GB each. How intelligent? As Intel puts it, (almost) as smart as you. Cars will share and analyze a range of data in order to make decisions on the move. It’s true enough that in most cases driverless cars are likely to be safer than human drivers, but it’s the outliers that we’re concerned with.

The author Isaac Asimov’s famous three laws of robotics proposed how future devices might cope with the need to make decisions in dangerous circumstances:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He even added a more fundamental “0th law” preceding the others:

  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
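
Read as a decision procedure, the laws form a strict precedence: each law gives way only to the laws numbered before it. Purely as an illustration of that ordering – Asimov’s laws are literary devices, not an engineering specification, and the boolean flags below are hypothetical placeholders no real system could actually evaluate – the check might be sketched as:

    from dataclasses import dataclass

    # Hypothetical placeholders: no sensor can actually report these judgments.
    @dataclass
    class Action:
        harms_humanity: bool = False
        harms_human: bool = False
        disobeys_order: bool = False
        endangers_self: bool = False
        required_by_higher_law: bool = False

    def permitted(a: Action) -> bool:
        """Test a proposed action against Asimov's laws, highest priority first."""
        if a.harms_humanity:      # 0th law overrides everything else
            return False
        if a.harms_human:         # 1st law
            return False
        if a.disobeys_order and not a.required_by_higher_law:   # 2nd law
            return False
        if a.endangers_self and not a.required_by_higher_law:   # 3rd law
            return False
        return True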

Asimov did not tackle our ethical dilemma of the car crash, but with better sensors, more sources of data to draw from, and greater processing power, the decision to act is reduced to a cold act of data analysis.

Of course, software is notoriously buggy. What havoc could malicious actors wreak if they compromised these systems? And what happens at the point that machine intelligence takes control from the human? Will it be right to do so? Could a future buyer purchase programmable ethical options with which to customize their car – the artificial intelligence equivalent of a bumper sticker that says “I brake for nobody”? In which case, how would you know how cars were likely to act – and would you climb aboard if you did?

Then there are the legal issues. What if a car could have intervened to save lives but didn’t? Or if it ran people down deliberately based on its ethical calculus? This is the responsibility we bear as humans when we drive a car, but machines follow orders, so who (or what) carries the responsibility for a decision? As we see with improving face recognition in smartphones, airport monitors and even on Facebook, it’s not too difficult for a computer to identify objects, quickly weigh the consequences based on car speed and road conditions, calculate a set of outcomes, pick one, and act. And when it does so, it’s unlikely you’ll have any choice in the matter.

Bill Buchanan is head of the center for distributed computing, networks and security at Edinburgh Napier University. This article originally appeared on The Conversation.
