In my previous blog, I wrote about a practical application of Artificial Intelligence (AI): how it could substitute for personal trainers in the personal health industry. In this blog, I want to discuss AI again, this time in the context of the debate on accountability.
Uber had been testing self-driving cars and promised that they were the future of transportation. In March 2018, however, one of its cars failed to recognise a pedestrian crossing the street and killed her (Levin & Wong, 2018). The incident reignited the debate on accountability: who was responsible for the woman's death? Was it the engineer who wrote the algorithm the car operates on? The company that decided to produce and sell the cars? Or the person in the driver's seat, even though that person was not driving (Bogart, 2017)?
I would like to share my thoughts on the matter. In politics there is a concept called 'ministerial responsibility': even if ministers do not personally work on a project, they remain responsible and accountable for the performance of their civil servants (Hague & Harrop, 2013).
The same principle could be applied to self-driving cars. If a person decides to travel in a self-driving car, they accept full responsibility for what the car does. The car should not be viewed as an autonomous agent, but rather as a tool that assists with transportation. This is comparable to ministerial responsibility: even if you do not fully know the probability of something going wrong, the moment you decide to use the self-driving car is the moment you take on the responsibility. A minister could, in principle, handle everything in his portfolio himself. By delegating it to his apparatus, he entrusts others with carrying out the project, but he remains responsible and accountable himself (Hague & Harrop, 2013).
I am curious to hear what you think about this; let me know in the comments if I should elaborate on anything.
Sources:
Bogart, N. (2017). Who is responsible when a self-driving car crashes? Insurance companies aren't sure yet. [Online] Accessed through https://globalnews.ca/news/3270429/self-driving-cars-insurance-liability/ on 8 October 2018.
Hague, R. & Harrop, M. (2013). Comparative Government and Politics: An Introduction. Palgrave Macmillan.
Levin, S. & Wong, J.C. (2018). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. [Online] Accessed through https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe on 7 October 2018.
Very interesting! In my opinion, accountability is one of the most difficult debates in AI. At an AI summit by Microsoft that I attended last week, a lot of time was likewise spent on the debate over accountability and responsibility. I can see your point about 'ministerial responsibility' in the case of a self-driving car. However, what happens if someone is killed by an autonomous weapon, another AI-driven development? In my opinion, if you apply 'ministerial responsibility' there, it should be the CIO who is ultimately responsible for mistakes in the AI context, not the user. As the autonomous weapon example shows, it is not always the user who ultimately decides on fate. I'm interested to hear what you think of this perspective though!