Licence to Kill, Autonomous Cars and Road Accidents

23 September 2017


Suppose it’s the year 2025 and you’re driving down the highway in your brand-new Tesla Model 7. Suddenly a dog steps in front of your car. Time stops, and you have to decide: swerve off the road into a tree and injure yourself, hit the brakes and risk a multiple-car pile-up, or run over Teddy and kill man’s best friend?

A recent publication by the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure features a set of 20 guidelines that could shape the future of automated driving systems. The commission, consisting of 14 academics and experts from a multitude of disciplines such as ethics, law, and technology, drew up these guidelines for automated transport systems. Aimed at policymakers and lawmakers, they set out requirements in terms of safety, human dignity, personal freedom of choice, and data autonomy.

Key elements of the report include the primary purpose of partly and fully automated transport systems: “to improve safety, increase mobility opportunities and to make further benefits possible”, for which individuals themselves are responsible. In addition, the commission concludes that “the protection of individuals takes precedence over all utilitarian considerations”, meaning that automated transport systems are only justifiable if they lead to a positive balance of risk compared with human drivers. In simpler terms, automated transport systems should only be used once they are proven to cause fewer accidents than human drivers (which does not mean they need to be perfect).
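To make this “positive balance of risk” concrete, here is a minimal sketch of how such a comparison might look. The accident rates are invented figures, purely for illustration:

    # Illustrative only: the accident rates below are made up, not real data.
    # A "positive balance of risk" means the automated system causes fewer
    # accidents per distance travelled than human drivers do.
    HUMAN_RATE = 4.0      # assumed human accidents per billion km
    AUTOMATED_RATE = 1.5  # assumed automated accidents per billion km

    def positive_balance_of_risk(automated_rate: float, human_rate: float) -> bool:
        """The system need not be perfect, only measurably safer."""
        return automated_rate < human_rate

    print(positive_balance_of_risk(AUTOMATED_RATE, HUMAN_RATE))  # True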

Most strikingly, the guidelines state that “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another.”
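Read in engineering terms (my own sketch, not anything from the report), one way to honour this guideline is for the decision logic simply not to take personal features as inputs at all:

    from dataclasses import dataclass

    # Sketch: the obstacle model deliberately carries no personal features
    # (age, gender, constitution), so the planner cannot distinguish
    # between people even if it wanted to.
    @dataclass(frozen=True)
    class Obstacle:
        position: tuple[float, float]   # metres, relative to the car
        velocity: tuple[float, float]   # metres per second
        is_person: bool                 # people are protected, but all equally

    def protection_priority(obstacle: Obstacle) -> int:
        # Every person gets the same, highest priority; offsetting one
        # victim against another is impossible by construction.
        return 2 if obstacle.is_person else 1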

Going back to the year 2025: suppose that instead of Teddy, a pregnant woman is crossing the street. What would you do? Would you distinguish between ‘personal features’? Should an automated car?

References:
Bmvi.de (2017). BMVI – Dobrindt: First guidelines in the world for self-driving computers. [online] Available at: https://www.bmvi.de/SharedDocs/EN/PressRelease/2017/084-ethic-commission-report-automated-driving.html [Accessed 23 Sep. 2017].
Federal Ministry of Transport and Digital Infrastructure (2017). Ethics Commission: Automated and Connected Driving. [pdf] Available at: http://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile [Accessed 23 Sep. 2017].
Bundesministerium für Verkehr und digitale Infrastruktur (2017). Ethik-Kommission: Automatisiertes und vernetztes Fahren [Ethics Commission: Automated and Connected Driving]. [pdf] Available at: http://www.bmvi.de/SharedDocs/DE/Publikationen/G/bericht-der-ethik-kommission.pdf?__blob=publicationFile [Accessed 23 Sep. 2017].


6 thoughts on “Licence to Kill, Autonomous Cars and Road Accidents”

  1. You write that “automated transport systems are only justifiable if they lead to a positive balance of risk compared with human drivers” – but that is only true in theory.

    Car crashes happen all the time (on average, more than 3000 humans die per day – yes, per day – from car crashes, I believe). Driverless cars – at least, those somewhat in use, such as Google’s – are already far safer than that. And yet there was an uproar half a year ago when one of Google’s cars crashed (for the first time, I believe).
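    For what it is worth, that figure roughly checks out against the WHO estimate of about 1.25 million road traffic deaths per year:

        # Back-of-the-envelope check of the "3,000+ deaths per day" claim,
        # based on the WHO estimate of ~1.25 million road deaths per year.
        annual_road_deaths = 1_250_000
        print(round(annual_road_deaths / 365))  # ~3425 per day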

    See, it is not that driverless cars must be safer than driven cars. It is that driverless cars must be perfect – an impossible standard to achieve, of course – in the eyes of many people. It makes no rational sense at all. But as they become more common, I guarantee you that there will be outrage over driverless cars causing accidents, with people claiming that humans would have never let that happen, and wholly ignoring statistics that show that driverless cars are an order of magnitude safer.

    As with all these new things, laws trail behind technology, so it will be interesting to see what eventually ends up happening. I predict driverless cars to become commonplace, serving as a new kind of public transportation, with fewer and fewer people rallying against them. But initially, there will be some interesting opinion pieces and legal battles to watch.

    As far as ethics go – I suppose calculating the expected life expectancy of a person and killing the one with the lower life expectancy is a brutal if logical calculation to make, but that relies on so many factors that a car couldn’t know (at least for now; Facebook and Google and all are well on their way to aggregating so much information).
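    For illustration only, a hypothetical version of that calculation – and exactly the kind of offsetting of victims against one another that the guidelines prohibit:

        # Hypothetical sketch of the "brutal if logical" calculation described
        # above: pick the victim with the lower remaining life expectancy.
        # This is precisely the offsetting the Ethics Commission forbids.
        def choose_victim(life_expectancy_years: dict[str, float]) -> str:
            return min(life_expectancy_years, key=life_expectancy_years.get)

        print(choose_victim({"person_a": 42.0, "person_b": 7.5}))  # person_b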

    1. Hi Martijn, thank you for your post. You raise some very interesting questions!

      As you and Roy already mentioned, expectations for self-driving cars are really high: they have to meet strict standards and are expected to be perfect. Even though traffic as a whole isn’t safe at all, with so many casualties every year, I think it’s good to have these extremely high standards.

      I think that in our society a human mistake is ‘acceptable’; accidents are simply a possibility. But if a machine or car makes that same mistake, it would never be acceptable: it would mean there was a flaw in the car’s design or software that could have been avoided. We are stricter with robots and artificial intelligence, but I think that is okay, because it puts pressure on the engineers to really perfect the cars. A mistake could be catastrophic for them, so they will make sure no accidents happen. In my opinion, self-driving cars should not be allowed back on the streets if they ever cause an accident. I also think this perfect self-driving car is possible, or will be very soon.

    2. You might be right when quoting statistics that show that autonomous cars are safer than normal cars. Other studies emphasize the opportunities autonomous cars offer when it comes to making driving and transportation more efficient. However, one serious issue is that laws and regulations in many countries, as well as human ethics, are not designed for autonomous cars.

      The underlying question is who is accountable in the case of an accident: who accepts liability? Human beings can be held responsible, which is what authorizes them to decide between right and wrong. An autonomous vehicle deprives the driver of this freedom to do the wrong thing, acting instead according to its programmed rules. So far, cars cannot be held accountable in a legal sense, i.e. they cannot be convicted or sentenced.

      In practice: who should the car owner’s insurance company hold accountable in the case of an accident? Even if car manufacturers are partly liable, how should it be determined which employee can be held accountable: the CEO, a manager, a programmer? And when it comes to the driver: is it ethical to simply hand over responsibility to a machine that decides over the life and death of a human being?

      1. Just to go off on a legal side track here: I think another aspect of this question, one that should not be overlooked, is the damage caused by faulty products, such as a self-driving car causing havoc due to a programming error.

        Luckily, this question regarding liability in the case of faulty products is already partly answered by EU Directive 85/374/EEC, put into place some 30 years ago. As is standard practice with EU Directives, these rules have been implemented in Dutch law as well (articles 6:185 – 6:193 BW).

        The directive states that the producer is (except in some specific cases) liable for damage caused by a faulty or defective product. The producer may even be liable without fault or negligence on his part!

        This doesn’t only apply to self-driving cars; it applies to damage caused by any product (with some specific exceptions), up to 10 years after the product was put on the market!

        It will surely be interesting to see how lawmakers will implement rules regarding ethical/moral decisions…


        References:
        – Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products
        – Dutch Civil Code, Book 6, Title 3, Part 3 (articles 6:185 – 6:193)

  2. Isn’t the risk of a human or animal crossing the street something we cannot control, as long as humans and animals don’t always act predictably? The main thing autonomous cars can avoid is collisions between vehicles. If a human or animal crosses the street unexpectedly and the car is physically unable to brake in that short period of time, in my opinion that is not a downside of an autonomous car. It might be a downside of vehicles going over 50 mph. I think the main task of the vehicle should be to keep the passengers safe, and therefore to run over Teddy rather than hit the tree.
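    That physical limit is easy to see with a rough stopping-distance estimate. The friction coefficient and reaction time below are assumed values, purely for illustration:

        # Rough stopping distance: reaction distance plus braking distance
        # v^2 / (2 * mu * g). Mu and reaction time are assumptions.
        MU = 0.7          # assumed tyre-road friction, dry asphalt
        G = 9.81          # gravitational acceleration, m/s^2
        REACTION_S = 0.5  # assumed reaction time of the automated system, s

        def stopping_distance_m(speed_mph: float) -> float:
            v = speed_mph * 0.44704  # mph -> m/s
            return v * REACTION_S + v ** 2 / (2 * MU * G)

        print(f"{stopping_distance_m(50):.0f} m")  # ~48 m at 50 mph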
