Why self-driving cars must be designed to kill

9 October 2016


Would you buy a car that sacrifices the driver on occasion, or a car that preserves the driver on all occasions?

In a recent study published in Science, “The social dilemma of autonomous vehicles” (2016), researchers asked exactly that question. The results were quite interesting and carry implications for how self-driving vehicles should be programmed and regulated.

Driverless vehicles hold the promise of saving millions of lives around the world. In 2015, in the US alone, 38,300 people were killed and around 4.4 million injured in vehicle-related accidents. Most of these accidents happen because of human error; by replacing the human driver with a machine that does not get drunk or distracted and stays within speed limits, automated vehicles could prevent a large share of accidents (up to 90%, according to research).

A major question arises in the rare no-win scenario in which a fatal accident cannot be avoided by the autonomous vehicle. For example, you are riding along in your car when a group of pedestrians suddenly appears in the road, and the car must decide in a split second whether to drive into the pedestrians or sacrifice the driver by swerving off the road into a concrete block.

In the study, which had 2,000 respondents, 76% thought that autonomous vehicles should be programmed to be utilitarian, meaning they should save the most lives while sacrificing as few as possible. In the accident above, the majority believed it moral to sacrifice the driver rather than the group of pedestrians.

However, when asked if they themselves would buy a car programmed to be utilitarian, a large majority of respondents said they would not; they would rather ride in autonomous vehicles that protect the driver and passengers at all costs. This is the dilemma: most people would like others to drive cars that minimize casualties, but everybody wants their own car to protect them at all costs.
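
The difference between the two programming philosophies can be made concrete with a tiny sketch. The following Python snippet is purely illustrative (the action names and fatality counts are hypothetical, not taken from the study); it only shows how a utilitarian policy and a self-protective policy diverge in the scenario above.

```python
# Purely illustrative sketch: two hypothetical crash policies choosing
# between the no-win actions described above (the numbers are made up).

# Each candidate action maps to the expected fatalities it would cause.
ACTIONS = {
    "stay_course": {"passengers": 0, "pedestrians": 5},  # drive into the pedestrians
    "swerve":      {"passengers": 1, "pedestrians": 0},  # sacrifice the driver
}

def utilitarian_policy(actions):
    """Pick the action that minimizes total expected fatalities."""
    return min(actions, key=lambda a: sum(actions[a].values()))

def self_protective_policy(actions):
    """Pick the action that minimizes passenger fatalities, ignoring everyone else."""
    return min(actions, key=lambda a: actions[a]["passengers"])

print(utilitarian_policy(ACTIONS))      # "swerve": one death instead of five
print(self_protective_policy(ACTIONS))  # "stay_course": protects the occupant
```

Under a utilitarian rule the car swerves; under a self-protective rule it does not, which is exactly the gap between what respondents wanted other people's cars to do and what they wanted their own car to do.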

Figuring out how to build ethical autonomous vehicles is one of the most difficult challenges in artificial intelligence today for car manufacturers and governments. Government regulation could insist that all cars be programmed to save as many lives as possible, but then people would not be eager to adopt them. Therefore, programming the cars to make the “right” moral decision may be one of the biggest impediments to autonomous vehicle adoption. But if manufacturers and buyers are given the choice of self-preservation above everything else, are they liable for the harmful consequences of those programmed decisions? For the time being, there is no clear-cut answer on how the cars should be designed in a no-win scenario like this, but it is a question we as a society need to address if we want to hand the responsibility of driving over to a computer.

Try out the Moral Machine at http://moralmachine.mit.edu/ to compare your responses with others’ on moral decisions made by self-driving cars.

What do you think? Write your opinions in the comment section below.

Sources:

Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The social dilemma of autonomous vehicles.” Science 352.6293 (2016): 1573-1576.

http://europe.newsweek.com/2015-brought-biggest-us-traffic-death-increase-50-years-427759?rm=eu


6 thoughts on “Why self-driving cars must be designed to kill”

  1. Dear Mikael, thank you for your interesting post! I had read a bit into this, and I think this dilemma is a really hard one. I remember it going even further: what if there are kids in the car and the pedestrians are older people? What would you do then?

    Out of curiosity, I tried the test in the link you posted. There, you can indeed choose how many people, or even animals, are killed, as well as how old they are, how fit they look, and what job they have. Quite interestingly, I think this test also tells you more about yourself: whether you are more likely to kill an animal or a human, or whether you prefer killing a larger person over a fit person (a bit mean to say it like that, but that is what the results show). I also realized that the test includes cases of pedestrians not upholding the law: for example, if the light is green for the car and a pedestrian still crosses, what do you do?

    While I believe a lot of these questions remain unanswered, I also think self-driving cars will become more and more important and take over part of society. Hence, it is good to think about the laws and regulations and to make them consistent across countries. In addition, it is crucial to teach cars the safest option and not let our biases become part of the technology.

  2. Thank you Mikael for this interesting blog post. I also wrote about the same topic. Furthermore, I would like to connect with your sentence “we as a society need to address if we want to hand the responsibility of driving over to a computer”. In my opinion, the whole project sounds very creepy. Just imagine if the car gets hacked; I won’t be that confident walking around the streets. I also read about the car making its moral decision by scanning your personal profile: in other words, in the case of an accident, the car would choose to sacrifice a person who broke the law in the past over a person who never did anything wrong. Hence, even someone who has since become a good person would still need to be careful because of their record. These were a few thoughts that came to my mind while I was writing about the “moral machine”.

    Best,

    PJ

  3. This is a fantastic article Mikael. It brought up a dilemma that I had never thought of, and of course there is no simple solution.

    You and some colleagues present an interesting analysis. I would like to contribute to this discussion by adding an additional point of view. I was wondering how often these scenarios would actually occur? I believe the probability of these events is very low. However, the impact they might have on the brand could be incredible.

    Thanks for sharing this excellent article.

  4. Hey Mikael,
    This is such an interesting topic! I especially like the link you provided. I tried it myself and found it really hard to “choose” who to kill, which made me realize how serious this issue is. I have to agree with Prabjot that the potential risk of the car being hacked really worries me. Also, I do not think a car programmed to be utilitarian will ever be accepted by the public because, let’s be honest, people are selfish. Also, I am pretty sure whoever gets killed by the car would definitely blame the car manufacturers for a design error, even if the autonomous vehicles are set to be unbiased (if true) and utilitarian. It could strongly damage the brand image of the manufacturers and could bring many lawsuits.

    I am, however, curious how the developers would test this function if it does exist. I assume the car can distinguish humans from other animals. How can they test this function in real life before launching? I do not know of any existing technology that has the capability to decide who to kill.

    Let me know what you think 🙂

  5. Dear Mikael,

    Very nice read! This article really got me thinking about this topic, and I thought the video was a really interesting addition to the post.

    In your post you point out that these no-win scenarios, where a fatal accident cannot be avoided by the autonomous vehicle, will not occur regularly. In my opinion, these dilemmas, where the autonomous car has to decide who it is going to sacrifice, will still occur less frequently than accidents caused by non-autonomous cars.

    However, this statement leads us to the same question that you raised: are we willing to hand the responsibility of driving over to a computer, or do we prefer to have more control ourselves with a higher chance of having an accident? I believe most people would choose the latter.

    I am convinced that assistive technologies such as software, sensors, cameras, radar, and autopilot features like merging onto a highway will become more important in the coming years. However, drivers will still need to keep their hands on the wheel with these features, giving them the feeling that they are in control.

    Reference:
    http://www.livescience.com/55273-first-self-driving-car-fatality.html

  6. Really interesting article Mikael.

    Self-driving cars have only a small probability of causing an accident, so personally I would not be bothered by the utilitarian programming. Also, there were famous cases of brands like Ford that, to save some money, caused incidents that killed many drivers, and yet the company still exists.

    Furthermore, we can see a trend of “wanting it easy”. By this I mean decisions like: should I delete Facebook or let it use all my information? Should I use Windows, which tries to monopolize the market with often illegal agreements, or use open-source software? Should I eat organic products, grow my own, or buy the cheap products at the supermarket full of pesticides? In these cases, technology or not, we always end up with our “rights” violated in exchange for the easy option. So I guess that when we decide to ride in a car that drives us, we will accept the consequences the car brings along.
