A robot will kill the wrong one

21 October 2016


Imagine you are driving a car. The trip is going well and it looks like you will get home safely, until suddenly a child runs onto the road. Stopping is impossible. In an instant you wrench the steering wheel, drive off the dike and end up in the water. The electric windows are jammed and you can't remember where the emergency hammer is to break your way out. A passer-by nearby manages to save you just in time.
It is a scenario that could happen to anyone. In a fraction of a second you choose the life of the child and wrench the wheel, risking your own life. That is a human reaction: if you keep driving, you can be certain that someone will die, namely the child. Few people would choose that option.

This is going to change. When self-driving cars appear on our roads in ten years' time, different choices will be made in traffic, because they will be made by someone else, namely a robot. And those choices may differ substantially from those of humans.
Last week at the Paris Motor Show, Mercedes-Benz presented a self-driving car, a robot, that will always opt for the safety of its occupants. In the example above, that would mean the car hits the child, because stopping, which is what a self-driving car essentially always does when it encounters 'an obstacle', is not an option.

Mercedes-Benz was criticised immediately, even though the carmaker (commercially) has no choice. The company chooses the life of its customer, simply because nobody would buy a car that puts the safety of other road users first, as an article in the scientific journal Science revealed earlier this year. After all, you should not have to risk your life for someone who knowingly drives dangerously. This shows once again how difficult it is to have a computer make ethical choices.

Ethical choices can never be left to a robot; only people can make these kinds of choices. Given two roads, for example, a robot will always choose the path on which it kills the fewest people, while a human who sees two men strapping on bomb belts will make a different choice when a playing child jumps in front of the car.

The question is how we humans keep control while control of the steering wheel becomes the responsibility of the machine. Mercedes-Benz is the first car company that dares to put its finger on the sore spot. It is time the others joined this discussion. Otherwise the self-driving car remains a futuristic vehicle that is merely a trendy gadget, and in ten years we will face essential choices we cannot live with. By then it will be too late.

 

Source:
www.wsj.com/articles/can-we-create-an-ethicalrobot-1437758519, retrieved on October 21st, 2016
http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881, retrieved on October 21st, 2016


2 thoughts on “A robot will kill the wrong one”

  1. Dear Amir, thank you for your blog. While self-driving cars are often said to be safer than cars driven by humans, when a human "jumps" in front of a car, or a cyclist thinks he can still go first, the car has to make a choice, as you said. So while self-driving cars might be safer, they are not the only ones taking part in traffic; there will still be pedestrians, cyclists, motorcyclists, trams, etc. Hence, by making cars fully automatic, we do indeed take out a part where a human might make a better choice. I wonder: isn't there an option for a human to still be able to make such a decision, even if the car is driving itself? It would be a lot harder, as you would not be paying as much attention (since the car is the driver, not you), but it would allow humans to make their own choice in situations of life and death. However, I guess that also takes away part of the idea of self-driving cars. I think for the moment there are still a lot of ethical questions to be answered about self-driving cars, so I am not sure they will take over our roads anytime soon (at least not before the most crucial questions are answered).

  2. Hi Amir,
    Thank you for your post. Your point about ethics made me realize that maybe self-driving cars aren't as ready for the road as we all think they are. Self-driving cars aren't capable of making those rational choices the way humans are, but I personally have no idea how this could be changed. There's a website called Moral Machine, a platform that collects human input on moral decisions made by machine intelligence (a self-driving car, on this platform). It gives you two scenarios and asks which one you would choose (e.g. a self-driving car arriving at a green light while people are crossing illegally). The end results show your attitude towards saving lives, protecting passengers, upholding laws, avoiding intervention, etc. I think it's worth taking a look at, because it shows how the choices of a robotic car and a human differ.

    http://moralmachine.mit.edu/
