The Morality of Artificial Intelligence

6 October 2020


Recently I read an interview with the German philosopher Richard David Precht about artificial intelligence and whether machines will ever be as intelligent as humans. He published a book this year entitled ‘Artificial Intelligence and the Meaning of Life’.

What I found interesting was the mixture of philosophy and technology. He states that artificial intelligence will have a major impact on society and is therefore relevant for philosophers.

The interview also explored the limits of artificial intelligence. Humans need fiction, for example, and through their imagination they inhabit different times and worlds. When we recall a memory or dwell on a dream, we relive emotions, and this capacity is what allows us to develop a moral compass. An artificial intelligence, by contrast, will never dwell on a past time or imagine another “world”.

Furthermore, humans use intelligence precisely in situations where we do not know what to do. Artificial intelligence, however, is programmed and therefore knows from the outset how to proceed, even if it can find creative solutions within the bounds of its program.

A human life does not run along fixed tracks. Human intelligence can adapt to new situations on the basis of emotions and a moral compass. Artificial intelligence, on the other hand, is programmed for specific situations and is not capable of acting on the basis of morality.

I think the most important argument on this topic is presented in the middle of the interview: ‘In situations where we make moral decisions, we react emotionally very strongly. And because artificial intelligence has no emotions, we cannot program it morally. What is really programmed does not behave morally because it has no freedom. I call a judgment moral only when I believe I have made a free decision. But if I cannot do that at all, then we are not dealing with moral decisions.’

Another restriction Precht sees concerns autonomous driving: the German constitution states that human dignity is inviolable. An artificial intelligence that must weigh human lives against one another could therefore never be permitted under the German constitution.

To be honest, I found the interview a little reassuring. The media increasingly talks about a future in which robots turn malicious and artificial intelligence overtakes humans. The interview showed that this is still a long way off, if it is possible at all.


Reference: t3n (2020). Precht im Interview: Ewiges Leben in der Cloud? “Nein, danke!”. Available at: https://t3n.de/magazin/precht-im-interview-ewiges-leben-249667/ (Accessed: 6 October 2020)


1 thought on “The Morality of Artificial Intelligence”

  1. Thank you for sharing this Hendrik.

    Even though my German is decent, as a non-native speaker I would likely have misunderstood parts of the points Mr. Precht is making. Your capturing his points in English allows me to grasp his concepts as well.

    What you mention with regard to the human need for fiction and fantasy is very interesting. It supports the prediction that even as AI becomes more advanced and more widely deployed, human workers will remain needed for their creativity.

    In addition, I agree with your and Precht’s point on morality. Precht gives a good account of this concept: since artificial intelligence has no emotions, we cannot program morality into it. Freedom is required for moral decision making.

    In this lies a danger that is already relevant: because AI is taught from human-generated data, unwanted biases can enter the training data and thereby the AI itself. One example is HR AIs that screen job applications but exhibit racial biases, inherited from the human decision-makers whose judgments the AI was trained on. I think the same could happen when trying to code morality into a system.

    I hope that interviews like the one with Mr. Precht raise awareness of the responsibility humans bear in the development of AI and its moral applications.
