With Uber, Tesla, Google, and other mobility companies investing in autonomous cars that don’t need anyone behind the wheel to get from A to B, it is inevitable that accidents will happen with these systems (Interesting Engineering, 2019). But what do we need to keep in mind when we read the news about (semi-)autonomous vehicles being involved in accidents?
Having someone, or something, literally take the wheel usually leaves us humans feeling helpless. Helplessness is definitely not part of feeling safe, and it does even less to push people to accept new technology, especially technology that can easily kill them or other humans.
So, whenever news arises that, for instance, Tesla’s Autopilot was involved in an accident, news channel features, online forums and many other media outlets erupt with questions about autonomous vehicles and whether they should be banned (Forbes, 2019). These articles and forums show that people are scared, and maybe even rightly so: people are dying because of autonomous cars… right?
However, we must consider several things before drawing conclusions. First, the bias that can easily arise with news of this nature. Second, whether autonomous cars are by definition ‘at fault’ or ‘unsuited’ when they are involved in accidents.
Bias: One should always be cautious when drawing conclusions from data presented by others. This is also the case for autonomous cars. Because the safety of autonomous cars only becomes a topic of conversation when something goes wrong (e.g. a car crash), we are biased towards thinking they are dangerous; this is the availability heuristic at work (Wikipedia, 2019). It is simply how the news works: it is far less visible, and far less newsworthy, when an autonomous car is NOT involved in an accident or narrowly avoids one.
Second, even if autonomous cars are sometimes involved in accidents, this does not immediately mean that they are unsuited to replace human drivers. The reason is that they don’t have to be perfect. They don’t need a 0% crash rate; they only need a lower crash rate than humans in order to be ‘worthy’ of taking the wheel.
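To make that argument concrete, here is a minimal sketch of the kind of comparison I mean. The crash rates and mileage below are purely hypothetical placeholders I made up for illustration, not real statistics; the only point is that if the autonomous rate per kilometre is genuinely lower, the expected number of crashes over the same driving goes down.

```python
# Toy comparison of expected crashes for the same total mileage.
# All numbers are hypothetical placeholders, NOT real statistics.
human_crashes_per_million_km = 4.2        # assumed, for illustration only
autonomous_crashes_per_million_km = 3.1   # assumed, for illustration only
km_driven_per_year = 500_000_000          # hypothetical fleet mileage

expected_human = human_crashes_per_million_km * km_driven_per_year / 1_000_000
expected_autonomous = autonomous_crashes_per_million_km * km_driven_per_year / 1_000_000

print(f"Expected crashes with human drivers:      {expected_human:.0f}")
print(f"Expected crashes with autonomous drivers: {expected_autonomous:.0f}")
print(f"Crashes avoided if the lower rate holds:  {expected_human - expected_autonomous:.0f}")
```

Of course, the hard part in reality is measuring those rates fairly (comparable roads, weather, disengagements, and so on), which is exactly why the raw headline numbers need scrutiny.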
In conclusion, I am definitely not saying that autonomous driving is the answer and safe enough to implement. However, I do recommend that we don’t take in numbers thoughtlessly and judge quickly, but instead analyse what the numbers mean and what they represent.
Interesting Engineering (2019): https://interestingengineering.com/are-we-programming-killer-cars-the-ethics-of-autonomous-vehicles
Wikipedia (2019), Availability heuristic: https://en.wikipedia.org/wiki/Availability_heuristic
Hi Willem!
I certainly think that this topic is worth discussing! Indeed, what makes an accident involving a self-driving car any different from the accidents happening on the roads every day, right? However, given its novelty, anything related to autonomous driving is under a spotlight, especially when an incident as noteworthy as the loss of a human life happens. And I certainly agree that self-driving cars cannot, and maybe even should not, need a 0% crash rate to be implemented. What type of transportation – car, plane, train or bike – never gets involved in an accident?
So we should accept a very hard truth: self-driving cars will kill people! However, as you pointed out, we should not consider this fact in isolation. Alongside it, we have to realize that humans are very bad at controlling their own vehicles, to the point that a car driven by a human is arguably the most dangerous means of transportation. Even so, our laws for driving and qualifying for a driving licence are very lax. So we have to face a moral question: how safe should self-driving cars be, and who should be held liable when accidents do happen?