The problems of teaching autonomous cars (or AI) ethics

25 September 2017

AI will increasingly intersect with humans and human lives. But what will we teach AI about how to handle these situations?


Artificial intelligence (AI) has come a long way. Using AI, computers can already beat human chess players. Autonomous cars are becoming more and more of a reality right now. AI is being implemented in chatbots for a better customer experience; some even state that chatbots may provide a better and more humane experience, as chatbots don’t have bad days and don’t get annoyed by customers. All in all, AI is becoming more and more human. However, what kind of “human” is it becoming? AI learns based on the input that we give it, processing this data with machine learning towards the output that we prioritize; this is, for example, how Google search knows that you mean “Google” instead of your typo “Goolge”. What does this mean for the AI? What kind of AI are we creating? Let’s delve into this topic using the example of autonomous driving.
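As a loose illustration of the “Goolge” example, here is a minimal sketch of typo correction based on edit distance. The `edit_distance` and `suggest` functions and the tiny vocabulary are assumptions made up for this post; Google’s real system learns from far richer signals such as query logs and user behaviour:

```python
# A toy spell corrector in the spirit of "Goolge" -> "Google".
# Illustration only; not how any real search engine works.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    dp = list(range(len(b) + 1))            # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                  # delete ca from a
                dp[j - 1] + 1,              # insert cb into a
                prev + (ca != cb),          # substitute ca with cb (free if equal)
            )
    return dp[-1]

def suggest(query: str, vocabulary: list[str]) -> str:
    """Return the known word closest to the (possibly misspelled) query."""
    return min(vocabulary, key=lambda word: edit_distance(query, word))

print(suggest("goolge", ["google", "goggles", "gold"]))  # -> google
```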

Autonomous cars can already do a lot. We can equip them with many cameras, so they can “see” better than any human driver possibly could. Combined with the intelligence they already possess, they can drive even better, and the number of accidents will drop dramatically. However, accidents can still happen; that probability is something we cannot eliminate, not even with all the cameras on a driverless car. So if an accident does occur, what should the driverless car do? Through programming, we can create an algorithm that makes the car choose the option with the minimum damage. But what do we define as the minimum damage? And is it also the most humane choice?
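To make that concrete, consider a deliberately naive sketch of such an algorithm. Everything in it, the `Outcome` class, the `damage` weights, the scenario, is a hypothetical assumption and not how any real vehicle is programmed; it simply shows that “minimum damage” only exists once someone has encoded a cost for each outcome:

```python
# Hypothetical "minimum damage" rule; all names, weights and scenarios
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    injuries: int
    fatalities: int

def damage(outcome: Outcome, fatality_weight: float = 10.0) -> float:
    """Score an outcome; the weight itself is a moral choice, not a fact."""
    return outcome.injuries + fatality_weight * outcome.fatalities

def choose(options: list[Outcome]) -> Outcome:
    """Pick whichever outcome the damage function scores lowest."""
    return min(options, key=damage)

options = [
    Outcome("swerve into the barrier", injuries=1, fatalities=0),
    Outcome("brake and stay on course", injuries=0, fatalities=1),
]
print(choose(options).description)  # -> swerve into the barrier
```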

The arising problem bears a great similarity to the trolley problem: a runaway trolley is heading towards five people tied to the track, and you can pull a lever to divert it onto a side track where it will kill one person instead. Do you pull the lever?

You can imagine that it is very difficult to “teach” the car what it should do, as people have different levels of morality and different views of what is morally right or wrong. Some may preach that killing five adults is better than killing one child; others will preach that killing one murderer is better than killing one doctor. Thus, teaching AI what it should prioritize creates a dilemma. Autonomous driving, and other AI that intersects with human lives, will definitely come. The question that arises is: how are we going to teach it whom to prioritize? Will the ones who program the machines also be the teachers? Maybe, or maybe not. Are we going to crowdsource a public opinion on morality and implement it in machines? But is the public opinion the correct one, or just the opinion of whoever screams the loudest?
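How much the choice of teacher matters can be shown with a toy example. The categories and weights below are invented for illustration, not any real standard: two different moral weight tables, applied to the very same scenario, produce opposite decisions:

```python
# Toy illustration of the teaching dilemma; all categories and numbers
# are assumptions, not any real standard.

scenario = {
    "stay on course": {"adult": 5},   # five adults in the car's path
    "swerve":         {"child": 1},   # one child on the side track
}

utilitarian = {"adult": 1.0, "child": 1.0}  # every life weighs the same
child_first = {"adult": 1.0, "child": 6.0}  # one child outweighs five adults

def harm(victims: dict[str, int], weights: dict[str, float]) -> float:
    """Total weighted harm of one option under a given moral weight table."""
    return sum(count * weights[kind] for kind, count in victims.items())

for name, weights in [("utilitarian", utilitarian), ("child-first", child_first)]:
    decision = min(scenario, key=lambda option: harm(scenario[option], weights))
    print(f"{name} teacher -> {decision}")
# utilitarian teacher -> swerve          (harm 1.0 beats 5.0)
# child-first teacher -> stay on course  (harm 5.0 beats 6.0)
```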

At the moment, we are at a crossroads. Technology has come a long way, and now we need to decide which way to go. Our next step will set the tone for the next generation, so what will it be?

What are your opinions on this dilemma? And how would you solve it? Join the discussion by posting your comment below.  


4 thoughts on “The problems of teaching autonomous cars (or AI) ethics”

  1. Hi!

    Interesting blog post! However, I was wondering why you think that accidents will still happen with autonomous vehicles. Do you mean an accident between an autonomous vehicle and a non-autonomous vehicle, or between two autonomous vehicles?

    Thanks!

    1. Thanks, Tara!

      I believe that in the ideal situation the technology will be so advanced that it can curb any accident. However, just like we learned in DBA, “any” and “always” are just not possible; maybe driverless cars can drive without accidents 99% of the time, but there will always be that 1%.
      I believe both can be the case. There can always be a miscalculation, or a learning path that has not been covered. And of course, there will possibly still be humans on the road, either as drivers or as passersby. The human factor is far from flawless, and people are unpredictable (even though AI is doing a great job at predicting them right now!).

  2. Besides the problem of teaching autonomous cars ethics, privacy issues are a critical problem. These include the prospect of being tracked everywhere, providing governments with an excellent big-data viewpoint from which to initiate totalitarian control. Autonomous cars also pose hacking problems: the CIA, for instance, has looked into hacking connected vehicles since 2014 (Felton, 2017). I think safeguards need to be introduced to secure privacy, and the decision of those who want to keep driving standard cars needs to be respected, to protect our rights to autonomy, privacy and liberty.

    Reference:
    Felton, R. (2017) The CIA Has Looked Into Hacking Connected Vehicles Since 2014: Wikileaks. [Online] Jalopnik. Available at: https://jalopnik.com/the-cia-has-looked-into-hacking-connected-vehicles-sinc-1793052458 [Accessed 20 Oct. 2017].

  3. Hi,

    Very interesting blog post! I hadn’t thought about this issue from an ethical point of view until now; it is indeed an impossible question to answer. What I had thought about before, regarding this problem, was the responsibility point of view. The rise of self-driving cars raises the question: “If you do get into an accident, whose fault is it really?” Is it the driver’s fault, because he/she has the duty to pay attention even if the car is self-driving? Or are the manufacturers to blame? If so, is it actually the car manufacturer’s fault, or that of the company that sold this technology to them? Maybe even the programmer’s? Surely the programmers will argue against this and blame someone else. As you can see, there is a lot of ambiguity around the issue of accountability, which is partly why fully autonomous cars are not being offered yet. This technology demands adaptations to the law, or else it will become one big mess once a devastating accident does occur…
