According to the World Health Organization’s report on road traffic injuries, approximately 1.35 million deaths are caused by road crashes every year, mostly due to human error (Singh & Saini, 2021). Car manufacturers are therefore calling for the implementation of self-driving technology in vehicles to massively improve their safety and reliability. Of course, the reasons moving most players in the automotive industry towards autonomous vehicles (AVs) are manifold: they are not limited to societal benefits, but also closely tied to firms’ economic returns. Indeed, car companies are turning their cars into software-based products, setting the scene for data-based business models that lower entry barriers to individual mobility while offering very attractive service packages to customers (Aschhoff & Roggenbuck, 2021).
Besides, driverless cars need actual guidelines for their autonomous decision-making and for resolving the related moral and ethical questions. So far, few governments and institutions have recognized the need for clear rules and requirements for AVs. Although car manufacturers keep developing and testing self-driving cars to eventually standardize such a disruptive technology, one that will widely impact our society, the lack of a proper legal framework might have the unintended consequence of slowing down their innovation process.
Moreover, consumers’ concerns do exist, and clear regulations must also address their fears. Since driverless cars would be allowed to make critical decisions on public roads, they will potentially have to choose among types of damage or injury to different individuals. An exemplary case happened in the US, where a Tesla Model S in “self-driving mode” was involved in a fatal accident, crashing into an 18-wheeler that turned in front of the car; to avoid the accident, should the AV system have taken “illegal” actions instead of sticking to the rules and ending up in the crash? Also, who must be held responsible in the event of an emergency or collision? The owner of an autonomous vehicle might incur personal liability for events no longer under their control; hence, it is debatable whether the fault should be attributed to the car manufacturer or to its software.
However, reaching justified conclusions in such tricky circumstances can be a daunting task, and these ethical and legal questions must have exhaustive answers in order to ensure a smooth development and adoption process for AVs, whose growing market is projected to reach a sales volume of 21 million vehicles by 2035 (Sergeenkov, 2019).
Sources
Aschhoff, R., & Roggenbuck, J. (2021). Volkswagen is accelerating transformation into software-driven mobility provider. Volkswagen Newsroom. https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-is-accelerating-transformation-into-software-driven-mobility-provider-6878
Sergeenkov, A. (2019). Competition in the autonomous vehicle industry is heating up. HackerNoon. https://hackernoon.com/competition-in-the-autonomous-vehicle-industry-is-heating-up-22524d71ca5
Singh, S., & Saini, B. S. (2021). Autonomous cars: Recent developments, challenges, and possible solutions. IOP Conference Series: Materials Science and Engineering, 1022(1), 012028. https://doi.org/10.1088/1757-899x/1022/1/012028
Interesting post! The idea of autonomous vehicles has been around for quite some time now; however, it has not really taken off yet, even though it could clearly provide benefits like easing congestion and helping with public transport. Which sparks the question: why has it not taken off? As you describe, clear regulations are needed, but personally I think cultural differences between or within countries can make it hard to implement clear universal regulations for autonomous cars. Given your example of the fatal accident in the US, how would different cultures react to this (ethical) dilemma? I am looking forward to what the future brings for autonomous vehicles and whether they could ever reach their full potential.
Hey Cesare, loved your piece!
I have previously read about this problem, and had some thoughts to add.
Essentially, the issue with achieving trustworthy AI for AVs is that we would need to solve real-world AI at the same time. This means we need to teach an AI to understand the world the same way we do. What does this imply for cars? For example, we expect a person who disappears behind a car to reappear again at the point where the car ends (see the sketch below). Solving essentially all such specific use cases is necessary if we want our AVs to be as trustworthy on the road as human drivers.
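To make the object-permanence example concrete, here is a minimal toy sketch in Python (all names, numbers, and the constant-velocity motion model are my own hypothetical choices, not any real AV system’s code): when the pedestrian detection drops out behind an occluder, the tracker coasts on the last known velocity instead of forgetting the person exists.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Track:
    """A tracked pedestrian: position (m) and velocity (m/s) in road coordinates."""
    x: float
    y: float
    vx: float
    vy: float
    occluded_frames: int = 0

MAX_OCCLUDED_FRAMES = 30  # hypothetical: ~3 s of patience at 10 Hz before dropping a track

def update_track(track: Track, detection: Optional[Tuple[float, float]],
                 dt: float = 0.1) -> Optional[Track]:
    """One tracker step. With a detection, snap to the measurement; without one,
    predict forward: the 'the person will reappear where the car ends'
    expectation a human driver applies automatically."""
    if detection is not None:
        new_x, new_y = detection
        # Re-estimate velocity from consecutive positions.
        return Track(new_x, new_y,
                     (new_x - track.x) / dt, (new_y - track.y) / dt,
                     occluded_frames=0)
    if track.occluded_frames >= MAX_OCCLUDED_FRAMES:
        return None  # occluded for too long; give up on this track
    # No detection: keep the track alive and coast on the last known velocity.
    return Track(track.x + track.vx * dt, track.y + track.vy * dt,
                 track.vx, track.vy, track.occluded_frames + 1)
```

Even this toy version hides hard judgment calls (how long to coast, which motion model to assume), and a real perception stack must get thousands of such human expectations right.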
Among the firms pursuing this problem, Tesla is leading: with thousands of test users who can flag mistakes the AV functionality makes, the system can learn more of such use cases over time. But as the company has learned, the number of such use cases is far larger than anticipated. Further, even once they are identified, we face a moral dilemma about what to tell the AI to do.
As such, I anticipate we won’t see fully autonomous vehicles on the streets anytime soon.