Can A Machine Make Ethical Decisions?

15 September 2018


Jeroen van Hoven, professor in the Ethics and Technology department of TU Delft, was one of the guests on the Dutch news programme Buitenhof in its broadcast of 26 August. One of the points he brought up was that with all the current technological developments, such as Artificial Intelligence, philosophers will play a big role in future discussions, as future machines might, for example, reach a state in which they could make decisions autonomously. I agree with him that we should think carefully about what kinds of decisions are appropriate to delegate to machines. For example, do we want weapon systems to use AI to recognize targets and make life or death decisions?

As these new systems are often described as 'autonomous', some people are afraid of these kinds of scenarios becoming reality. It feels like we are losing control. However, in my opinion this fear is not entirely justified. Autonomy is a human characteristic, and it does not correspond to the autonomy nowadays associated with computers and machines. Humans have moral autonomy, which means that if we have moral objections against something, we can decide not to do it. For example, when a soldier has moral objections against fulfilling his given mission, he can decide not to execute it. A computer, however, even with Artificial Intelligence, does not have this moral autonomy. It only has machine autonomy: it performs its given tasks, responds to changing situations and makes decisions with little human oversight. This means that a computer is not capable of changing the role the programmer gave it. Brad Templeton, a software architect, once put it this way: “A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead”. In other words: we don't have to be afraid of a machine autonomously making decisions that don't correspond to what it was programmed to do in the first place.

Although autonomous systems will take over a large part of human decision-making, this does not include ethical decisions. Ethical decisions, like life or death decisions, will in essence not be made by machines once they are in operation. They are made by the human beings who program them.


1 thought on “Can A Machine Make Ethical Decisions?”

  1. Hi Wouter, thanks for this interesting article! I agree with the points you make in your blog. While robots will reach higher levels of autonomy through machine learning and other developments, I think they indeed won't attain full moral autonomy. To consider the question you posed, "For example, do we want weapon systems to use AI to recognize targets and make life or death decisions?": AI will recognize the targets, but won't essentially make the life or death decision, because that is based on the input variables received from the developers. This raises further topics to think about: to what extent is a developer responsible for mistakes in development that have fatal (but perhaps unintended) outcomes? How much power should governments have? And how do we deal with criminals increasingly using these kinds of technologies? So it is a good thing that Jeroen van Hoven wants more discussion involving AI, but maybe he is aiming at the wrong part of the discussion.
