When Cars Drive Themselves: Technology, Trust and Network Effects

19 September 2025


During my trip to the West Coast of the USA this July, I saw self-driving taxis for the first time. The number of them on the streets of San Francisco was astonishing. On one hand, I was very curious and would have liked to take a ride; on the other hand, I knew I would feel uncomfortable knowing the car was driving itself. The thought of not knowing how it would behave in the event of an accident made me really uneasy. Who would be liable if an accident occurred? And what about potential software errors?

Although interest in autonomous vehicles (AVs) is rising among companies such as Tesla and Waymo (the one I saw in San Francisco), a recent Financial Times article reports that David Li, co-founder of Hesai, the world’s largest maker of sensors for self-driving cars, is conservative about the pace at which fully autonomous vehicles can scale up (Financial Times, 2025). On the other hand, research suggests that AVs could significantly reduce accidents compared to conventional cars: the WHO found that over 90% of traffic crashes worldwide are caused by human error, while an IIHS study estimated that autonomous vehicles could prevent around 33% of crashes simply by eliminating errors such as reacting too late (SharpDrive, 2025).

In the article, Li notes that although approximately one million people are killed in car accidents every year, a single death caused by an AV, just one-millionth of that toll, could destroy a company’s reputation and make its survival very difficult. Personally, I think that since AVs have been shown to be the safer option, research and adoption shouldn’t be slowed down. However, Li’s point is very valid: society tolerates a million human-caused deaths because they are considered “normal,” but a single AV-caused death is highly visible and can completely destroy trust in the company, even though AVs are the “safer” option. My question for discussion is: what do you think about this?

Waymo provides a robotaxi service through its app in cities like Phoenix, San Francisco and Los Angeles, allowing users to hail a self-driving vehicle without a human driver (Waymo, n.d.). I think it’s a great example of technological disruption. First, traditional taxis were threatened by Uber and similar ride-hailing services. Now, Uber itself faces potential disruption from Waymo. This also links nicely to the content from our lectures. In terms of network effects, Uber demonstrates a classic two-sided (cross-side) network effect: more drivers mean more riders can be served, and more riders attract more drivers. However, drivers are human and limited by exhaustion and availability. In Waymo’s case, the network effect is different (indirect): value grows as complementary assets (data and AI) improve. More vehicles generate more data, which improves the AI, leading to safer and more efficient rides, which attract more customers, which generates more revenue, which funds more AVs, which in turn produce more data, and the loop goes on.
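The compounding nature of that loop can be sketched with a toy simulation. Every number below (starting fleet size, data yield, growth rate, the square-root diminishing-returns term) is a made-up illustrative assumption, not a real Waymo figure; the point is only to show how the feedback loop compounds over time.

```python
# Toy model of the data-driven feedback loop: more vehicles -> more data
# -> better AI -> more riders and revenue -> more vehicles.
# All parameters are hypothetical, chosen purely for illustration.

def flywheel(vehicles: float, quarters: int,
             data_per_vehicle: float = 1.0,
             adoption_gain: float = 0.02) -> list[float]:
    """Return the fleet size at the end of each quarter."""
    fleet = [vehicles]
    data = 0.0
    for _ in range(quarters):
        data += vehicles * data_per_vehicle        # fleet generates data
        improvement = adoption_gain * data ** 0.5  # diminishing returns on data
        vehicles *= 1 + improvement                # revenue funds fleet growth
        fleet.append(vehicles)
    return fleet

sizes = flywheel(vehicles=100, quarters=8)
print(f"fleet grows from {sizes[0]:.0f} to {sizes[-1]:.0f} vehicles")
```

Even with diminishing returns on data, each quarter’s growth feeds the next, which is exactly why the loop is self-reinforcing rather than limited by driver supply.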

Have you ever used a self-driving taxi? If so, how was your experience? If not, would you try one? Also, who in your opinion should be held responsible in the event of an AV accident? The manufacturer? The software developer?

References:

Financial Times. (2025). Top sensor maker Hesai warns world not ready for fully driverless cars. https://www.ft.com/content/1cea9526-17a8-4554-a660-1c1e6d69676b

SharpDrive. (2025). Are self-driving cars safer than human drivers? https://www.sharpdrive.co/post/self-driving-cars-vs-human-drivers-safety

Waymo. (n.d.). The World’s Most Experienced Driver. https://waymo.com
