Self-driving trouble?

8 September 2016


Google recently revealed that their self-driving cars were involved in 11 minor crashes over the past six years. On top of that, the first fatal accident has already occurred now that Tesla’s self-driving cars are on the road. This has raised questions about when such autonomous vehicles will be ready for the real world. “If you want to get to the level where you could put the elementary school kid into the car and it would take the kid to school with no parent there, or the one that’s going to take a blind person to their medical appointment, that’s many decades away,” Shladover told Live Science. Agree? Below are a couple of concerns I think have to be taken into account before putting those cars on the road. Maybe you already have a solution in mind?

Let’s start with something you would not think of right away: potholes. The radars on these self-driving cars can scan almost everything happening on the road surface, from a dark patch in the road to an oil spot, but they cannot (yet) detect these nasty holes, which lie below the road surface and can be dangerous.

The second problem could be the digital mapping. Self-driving cars drive on roads for which highly detailed 3D maps exist and combine these maps with the readings of their sensors to find their way. We have to keep in mind that only a few roads in the world are mapped in such detail, and what about construction sites or detours?
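To make the map-plus-sensors idea a bit more concrete, here is a toy sketch in Python of how a car might match a range reading against a stored map to estimate where it is. Everything in it (the landmark positions, the tolerance, the one-dimensional road) is an invented simplification, not any manufacturer’s actual method:

    # Toy 1D localization: match a sensor reading against a stored map.
    # All numbers are illustrative; real cars fuse LIDAR, GPS and 3D maps.

    LANDMARKS = [12.0, 55.0, 130.0, 178.0]  # mapped landmark positions (m)

    def expected_reading(position):
        # What a perfect range sensor would report at this position:
        # the distance to the nearest mapped landmark.
        return min(abs(position - lm) for lm in LANDMARKS)

    def candidate_positions(sensor_reading, road_length=200.0, step=0.5,
                            tolerance=1.0):
        # Every spot on the road whose expected reading matches the
        # actual reading is still a plausible location for the car.
        candidates = []
        position = 0.0
        while position <= road_length:
            if abs(expected_reading(position) - sensor_reading) <= tolerance:
                candidates.append(position)
            position += step
        return candidates

    # The sensor says the nearest landmark is 8 m away. Several places on
    # the map are consistent with that, so the car must keep fusing new
    # readings over time to narrow its position down.
    print(candidate_positions(sensor_reading=8.0))

The ambiguity in the output is exactly why an unmapped detour is so disruptive: the moment the real road no longer matches the stored map, this matching scheme has nothing reliable to anchor to.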

And what to think of unpredictable humans, something technology nowadays cannot control? Autonomous vehicles would have to deal with drivers who overtake where it is not allowed or who drive the wrong way down a one-way street.

Something else that is quite unpredictable: the weather. Falling snow or rain can make it difficult for driverless cars to stay in their lane, as a coating of snow hides the lane markings. It can also make it difficult for the sensors to identify important obstacles.

Last but not least: ethics. This is one of the biggest issues that companies working on these self-driving cars face. How do you decide with algorithms, in a split second, which life is worth the most?
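To see why this is so uncomfortable, consider a deliberately naive sketch in Python. Any planner that scores collision outcomes has to put numbers on harms somewhere, and choosing those numbers is itself the ethical decision. The options, risks and weights below are all invented for illustration, not how any real system works:

    # Deliberately naive illustration, not a real decision system.
    # Hypothetical outcomes of two evasive manoeuvres:
    OPTIONS = {
        "swerve_left":  {"occupant_risk": 0.9, "pedestrian_risk": 0.0},
        "stay_in_lane": {"occupant_risk": 0.1, "pedestrian_risk": 0.9},
    }

    # These weights are the crux: is the passenger worth more, less or
    # the same as the pedestrian? No amount of code can answer that.
    OCCUPANT_WEIGHT = 1.0
    PEDESTRIAN_WEIGHT = 1.0

    def cost(outcome):
        return (OCCUPANT_WEIGHT * outcome["occupant_risk"]
                + PEDESTRIAN_WEIGHT * outcome["pedestrian_risk"])

    # The "decision" flips entirely depending on the chosen weights.
    print(min(OPTIONS, key=lambda name: cost(OPTIONS[name])))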

I would like to know your thoughts about this article.


3 thoughts on “Self-driving trouble?”

  1. Thanks for this interesting blog post. I’d like to share my two cents on the things you mentioned.
    When you mentioned the potholes, a certain scenario came to my mind. Assume you’re driving 130 kph on the highway and you see a pothole at the last moment. You want to avoid it, jerk the wheel and lose control of your car, causing a crash with casualties. The autonomously driven car would simply not see the pothole and drive over it, probably causing only some damage to the car’s underside. What I’m trying to point out here is that humans tend to always avoid crashes or damage to their car, but in doing so they actually increase the probability of more severe crashes, because they react in a way that makes them lose control of their car. I don’t think this would be the case with self-driving cars.
    For the mapping problem, I think that a responsible government that allows autonomous driving should implement a program that sends every construction site and detour to the map services used in self-driving cars. It should also equip things like traffic pylons with small signal transmitters that can be received by self-driving vehicles, so they know exactly where the obstacles are; a toy sketch below illustrates the idea.
    I realize that this topic causes a lot of discussion and skepticism, but when Gottlieb Daimler built his first car (top speed: 16 kph) 130 years ago, people said it was too fast and too dangerous. So I’m pretty sure the opinions on self-driving cars will change in the coming years.
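    Here is a hypothetical Python sketch of the pylon-transmitter idea. The message format, field names and coordinates are all invented for illustration; no real vehicle-to-infrastructure standard is implied:

        import json
        import time

        # Hypothetical beacon a road-work pylon might broadcast.
        def make_beacon_message(pylon_id, lat, lon, obstacle="roadworks"):
            return json.dumps({
                "pylon_id": pylon_id,
                "lat": lat,
                "lon": lon,
                "obstacle": obstacle,
                "timestamp": time.time(),
            })

        class BeaconListener:
            # Keeps a table of recently heard pylons so the route planner
            # can avoid them; entries expire when a pylon goes silent.
            def __init__(self, ttl_seconds=30.0):
                self.ttl = ttl_seconds
                self.active = {}  # pylon_id -> decoded message

            def receive(self, raw_message):
                msg = json.loads(raw_message)
                self.active[msg["pylon_id"]] = msg

            def current_obstacles(self):
                now = time.time()
                return [m for m in self.active.values()
                        if now - m["timestamp"] < self.ttl]

        listener = BeaconListener()
        listener.receive(make_beacon_message("P-17", 52.0907, 5.1214))
        print(listener.current_obstacles())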

  2. Thank you for this post on self-driving cars. An interesting topic, as self-driving cars are on the frontier of many aspects (such as ethics and technological possibilities). I would like to comment on your article and share my view on this topic.

    Although the automated Google cars were involved in several accidents, none of those was caused by the self-driving car itself (1). So maybe that is even an extra reason in favour of self-driving cars(!). As far as I know, there has ‘only’ been one fatal accident involving a Tesla with AutoPilot mode active. But since the Tesla system is still in beta (2) and the driver is warned multiple times not to use AutoPilot mode without supervision, the question arises whether the self-driving car is really to blame.

    And that brings me to a more pressing question regarding self-driving vehicles. Most of the (technical) issues the article raises will probably be overcome in the next few years. For example, Ford, Volvo and Google are already testing self-driving cars in snow and rain (3). So the ethical side of the self-driving car seems more pressing to me. Who should be responsible when something does go wrong? Should a self-driving car be supervised by a driver with a driver’s license at all times? That would spoil the purpose of a self-driving car, in my view. So should the manufacturer be responsible? Or even the government authority that approves the car for sale in a given nation? And how should the government test whether the software is ‘safe’ for the real world?

    Interesting times lie ahead.

    (1) https://www.theguardian.com/technology/2015/may/12/google-acknowledges-its-self-driving-cars-had-11-minor-accidents
    (2) http://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html
    (3) http://qz.com/637509/driverless-cars-have-a-new-way-to-navigate-in-rain-or-snow/

    1. Building on what Hendrik and Jeroen have said:
      The technical capabilities for fully autonomous driving are here already, and the few systems that still lack the required precision will most likely see the light of day in the coming two years. One example is fully stereoscopic 360° LIDAR, which enables vehicles to create a fully synthetic map of their surroundings and react accordingly. The technical capability has existed for a while, but it’s not ready for cheap, consumer-grade adoption just yet. It’ll come quickly enough, though. So the tech is here.
      What consumers need to realise and accept is that fully autonomous systems are orders of magnitude safer and more reactive than the human mind and body can ever hope to be. Man as a species, in the evolutionary sense of things, was never made to sit at the wheel of one and a half tons of aluminium and whizz along at 130 km/h. Autonomous systems react much, much quicker than our minds do, and can see and process millions more datapoints a second than we can, too. Present equipment can recognise and point out pedestrians in complete darkness up to 160 m in front of the car, keep the vehicle neatly in line with the road and shift gears more efficiently based on GPS positioning, road conditions and the elevation profile ahead. It’s pretty crazy, if you think about it. (1)
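      As a toy illustration of that “synthetic map” idea: a 360° LIDAR reports a distance at each bearing, and those returns can be dropped into a grid of cells around the car. The Python sketch below is a bare-bones invented example, nowhere near a production pipeline (which uses dense point clouds, probabilistic updates and sensor fusion):

          import math

          GRID_SIZE = 21        # 21 x 21 cells, car at the centre
          MAX_RANGE = 10.0      # sensor range in metres, one cell per metre

          def occupancy_grid(ranges_by_degree):
              # ranges_by_degree: {bearing_in_degrees: measured_distance_m}.
              # Marks a '#' in every cell where a LIDAR return landed.
              grid = [["." for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]
              centre = GRID_SIZE // 2
              grid[centre][centre] = "C"  # the car itself
              for angle, dist in ranges_by_degree.items():
                  if dist >= MAX_RANGE:
                      continue  # no return at this bearing
                  x = centre + int(round(dist * math.cos(math.radians(angle))))
                  y = centre + int(round(dist * math.sin(math.radians(angle))))
                  if 0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE:
                      grid[y][x] = "#"
              return grid

          # Two fake returns: a wall 4 m ahead (0°) and a post 6 m to the side.
          scan = {0: 4.0, 90: 6.0, 180: MAX_RANGE, 270: MAX_RANGE}
          for row in occupancy_grid(scan):
              print(" ".join(row))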

      We’re stuck in the belief that machines like this need to be operated by people who have been declared “competent” enough to drive them. Yet the test required to prove competence hardly takes into account the numerous factors that come into play during real-world driving, such as fatigue, distractions and changing road conditions. All factors that can (or will) be alleviated by letting fully autonomous systems take the wheel.

      The problem is that the switch to a fully autonomous vehicle is akin to jumping into cold water. There’s no halfway point, as Jeroen puts it. Fully autonomous driving requires removing the impression that the “driver” needs to be able to take control of the vehicle, and that’s something most people today aren’t willing to accept, due to wrong impressions about computer systems and “robots” taking control and failing. In the grand scheme of things, though, computer systems fail extremely rarely. It’s just that we put so many responsibilities on them that the consequences of a failure are often exacerbated (think of road signage failing, power-plant control systems locking up and aeroplane autopilots experiencing shutdowns – they’re flukes, yet that’s exactly why they stand out so much). Autonomous driving is simply still very new to us. I can attest to that from experience: being shuttled along the motorway by a car that’s doing mostly everything on its own is very, very strange at first. It takes getting used to, but after a while the potential starts to shine through.

      On to responsibilities. Who’s held responsible in case of a crash? The catch-22 at this point in time is that there is currently no legal framework ready to accept fully autonomous driving and all of its scenarios – and as such, car manufacturers are hesitant to push the rollout of these systems, as it would mean entering grey legal territory. Volvo, as the first to do so, announced a while back that it would take full liability for all crashes that happen under the use of its future Intellisafe Autopilot system. Combined with its promise to entirely eradicate fatal accidents in any of its recent cars by 2020, this is a promising outlook (2). We’ll likely see more manufacturers take the matter into their own hands and do the same in the near future.

      I realise this is but a succinct thought on the matter – there are entire treatises to be written on the mix of autonomous and non-autonomous cars on the roads, on failing systems and on algorithms “trading” one wrong for another.

      I am a firm believer that these cars are much, much closer than we often think. Yes, there will be failures, fatal crashes by the dozen and cars refusing to start. There will be legal troubles and lawsuits that lead to massive settlements. Yet that’s exactly how we learn. SpaceX just had a booster blow up on its launch pad (3). It’s not the first failure they’ve had, and it certainly won’t be the last. But they continue on their quest of getting us to Mars and making space more affordable than it’s ever been. This is what engineering is all about: failing and learning in the quest to unlock the future.

      And if the future holds virtually no accidents on the roads, resting in the car and arriving fresh at your destination, sign me up. I’ll be in the back seat finishing level 9 of Monument Valley while the car zooms along the A1.

      (1) https://en.wikipedia.org/wiki/Mercedes-Benz_S-Class_(W222)#Equipment
      (2) http://www.volvocars.com/intl/about/our-innovation-brands/intellisafe/intellisafe-autopilot/this-is-autopilot/autonomous-drive-in-detail
      (3) http://www.bbc.com/news/business-37316836
