Moral decisions in self-driving cars

11 September 2019


Self-driving cars are one of the most hotly debated technologies today. It is estimated that by 2024 self-driving will reach level 5, which means the vehicle will be completely in charge of driving. That leaves scientists only a couple of years to develop and define the best protocols for autonomous driving.

Most people agree that in dangerous situations an AI should behave like a human brain. However, it is still unclear how the human brain acts when placed in a controversial situation where deaths can occur. So how will scientists develop an algorithm for this?

Wolff, Gomez-Pilar, Nakao and Northoff (2019) investigated the activity of neurons during moral decision-making. To study this, the researchers based their experiment on Philippa Foot's (1967) trolley problem: briefly, a runaway tram is hurtling down the tracks, and you face a moral dilemma between choosing to save five people and kill one, or vice versa. Wolff et al. (2019) tried to determine which processes the human brain goes through to answer this dilemma. To do so, they recorded an electroencephalogram (EEG), placing electrodes on the scalps of 41 participants. The participants were presented with the trolley dilemma several times, with varying numbers of people at stake; they also completed a condition in which no moral dilemma arose, as a baseline against which to compare the other trials.
The findings of this study showed that neural activity differs when a person is placed in a controversial situation involving a moral dilemma. The researchers found that the slower brainwaves – delta, theta and alpha – have the greatest impact on moral decision-making (Wolff et al., 2019).
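For the technically curious, the band-power measure behind such findings can be sketched in a few lines of Python. The following is a minimal illustration only, not the authors' actual pipeline: it assumes a made-up single-channel signal at 250 Hz and estimates delta, theta and alpha power with Welch's method (the paper's analysis also relied on phase coherence, which this toy example ignores).

```python
# Illustrative sketch only, not the pipeline used by Wolff et al. (2019):
# estimating delta/theta/alpha band power from a single EEG channel.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)        # stand-in for 60 s of one channel

# Welch's method: average periodograms over 2-second windows.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = trapezoid(psd[mask], freqs[mask])   # integrate PSD over the band
    print(f"{name} power: {power:.4f}")
```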

These findings could allow the scientific community to imitate in smart cars the neural activity that takes place in human brains when faced with moral dilemmas. But will it really be successful? I believe most parents would rather save their own children, giving them the opportunity to grow and live their lives, than save the largest number of people. Will cars become smart enough to read human feelings and act like a human driver? For the moment, I believe the answer is no. Even though smart cars facilitate daily driving and offer a better travelling experience, I do think it will be quite hard for cars to make moral decisions the way you would.
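To make this tension concrete, here is a small, purely hypothetical Python sketch (every name and weight below is invented for illustration): a plain utilitarian rule always saves the larger group, while a rule weighted by personal attachment, like the parent described above, may not.

```python
# Hypothetical toy policies for a trolley-style dilemma in an autonomous
# vehicle. All names and weights are invented for illustration.

def utilitarian_choice(groups):
    """Save the group containing the most people."""
    return max(groups, key=len)

def weighted_choice(groups, weight):
    """Save the group with the highest summed personal weight.

    `weight` maps a person to a subjective value; a parent might
    assign their own child a far higher weight than a stranger.
    """
    return max(groups, key=lambda group: sum(weight(p) for p in group))

# One child of the "driver" versus five strangers.
groups = [["own child"], ["stranger"] * 5]
print(utilitarian_choice(groups))   # saves the five strangers
print(weighted_choice(groups, lambda p: 10 if p == "own child" else 1))
# saves the child instead
```

Both rules are trivial to code; the hard moral problem is deciding whose weight function the car should use, and that is exactly what remains unresolved.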

Wolff, A., Gomez-Pilar, J., Nakao, T. and Northoff, G. (2019). Interindividual neural differences in moral decision-making are mediated by alpha power and delta/theta phase coherence. Scientific Reports, 9(1).

Foot, P. (1967). The Problem of Abortion and the Doctrine of the Double Effect. Oxford Review, 5. Reprinted in Virtues and Vices (Oxford: Basil Blackwell, 1978).

Agencia SINC (2019). Cómo tomarán decisiones morales los vehículos autónomos [How autonomous vehicles will make moral decisions]. [online] EL PAÍS. Available at: https://elpais.com/tecnologia/2019/08/29/actualidad/1567075686_288176.html [Accessed 11 Sep. 2019].



2 thoughts on “Moral decisions in self-driving cars”

  1. Hi Sarah,

    Thank you for this interesting perspective on self-driving cars. For me, it is hard to connect the research to actually making the best decision in this kind of situation, because everyone will make the decision based on their own circumstances rather than everyone making the same decision based on the situation itself. So will your vehicle base its decisions on your life (and your choices?), or on what is legally, economically and socially the best decision?

  2. Hi Sarah,

    Thank you for this blog post. It is a very interesting topic indeed, mostly because we as humans do not accept that robots make mistakes. Twenty people killed by a human would be seen as less bad than ten people killed by a self-driving car. One can expect general explanations like: "We chose robots to perfect our actions, didn't we? Robots must operate faultlessly; that's what they're made for." MIT started an important research project to gather a human perspective on moral decisions made by machine intelligence, such as self-driving cars (Awad et al., 2018). You can take the test at http://moralmachine.mit.edu/. The major problem here is that we have never in our history allowed robots to decide autonomously about life and death without human supervision. However, I think that people will become more forgiving in the future, once they are more used to this concept and to the presence of robots in our everyday lives. On the other hand, I also think it is going to be hard to completely remove this mindset from people's heads.

    Source:
    Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F. and Rahwan, I. (2018). The Moral Machine experiment. Nature, [online] 563(7729), pp.59–64. Available at: https://www.nature.com/articles/s41586-018-0637-6 [Accessed 22 Sep. 2019].
