Can we trust the Dark Black Box?

11 October 2017


The car industry is moving fast, and companies such as Google, Tesla and General Motors are experimenting with self-driving cars. But chipmaker Nvidia is taking it to the next level: unlike other models, its car does not rely on instructions given by its programmers but is 100% self-taught by watching humans drive (Neiger, 2017). How is that possible, you might ask? Well, it is hard to say, because even its own programmers do not fully understand its decision-making. All decisions emerge from a complex web of connections inside the vehicle, making it almost impossible for developers to follow the line of thought. And this is alarming. Until now the car has behaved consistently, but what happens if it makes an irrational decision? Should we trust a self-operating system which we as humans do not completely understand? (Knight, 2017b)

This underlying AI technology, also referred to as deep learning, has already proven very useful and is deployed for tasks like image captioning and voice recognition. However, as developers experiment with more advanced applications, the risk increases. Because of the complex and opaque nature of deep learning, biases can easily become trapped inside, essentially creating a “Dark Black Box” (Knight, 2017a). Systems are often so complicated that even the engineers designing them struggle to explain the reasoning behind their actions. This might not be a big deal when it comes to image recognition on Facebook, but it is alarming for a future in which AI is deployed for other practices. There is now hope that these same techniques will be used for tasks such as diagnosing deadly diseases and detecting million-dollar trading opportunities (Knight, 2017c): decisions which can have major impacts on societal, business and personal levels. This should not happen unless we find a way to make deep learning more understandable and visible to its designers and users. Until then, we should probably keep driving our own cars.
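To make the opacity problem a little more concrete, here is a minimal, purely illustrative numpy sketch (not based on Nvidia's system or any real model) of one common probe, gradient-based saliency: it asks which input features a trained network's decision is most sensitive to, giving a partial, local explanation rather than a full account of the model's reasoning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a tiny stand-in for the millions of learned
# parameters that no human can read off directly.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 1))   # hidden -> single decision score

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h, (h @ W2).item()  # scalar "decision" score

x = rng.normal(size=(1, 4))    # one made-up input (e.g. four sensor readings)
h, score = forward(x)

# Saliency: gradient of the score with respect to the input. Large values
# mean the decision hinges strongly on that feature.
grad_hidden = W2.T * (1 - h**2)   # backpropagate through the tanh layer
saliency = grad_hidden @ W1.T     # gradient w.r.t. the 4 input features

print("decision score:", round(score, 3))
print("input saliency per feature:", np.round(saliency, 3))
```

Tools like this only scratch the surface: they hint at what a network attends to, not why it decides as it does, which is exactly the gap the black-box debate is about.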

Knight, W. (2017a). Forget Killer Robots—Bias Is the Real AI Danger, [online] Technology Review. Available at: https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ [Accessed 09 October 2017]
Knight, W. (2017b). The Dark Secret at the Heart of AI, [online] Technology Review. Available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [Accessed 09 October 2017]
Knight, W. (2017c). Biased Algorithms Are Everywhere and No One Seems to Care, [online] Technology Review. Available at: https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/ [Accessed 09 October 2017]
Neiger, C. (2017). Nvidia is distancing itself from its driverless car tech competition, [online] Business Insider. Available at: http://www.businessinsider.com/nvidia-stock-price-driverless-car-tech-competition-2017-7?international=true&r=US&IR=T [Accessed 09 October 2017]


1 thought on “Can we trust the Dark Black Box?”

  1. Dear Marilou,
    First of all, thank you for a very interesting article. I wholeheartedly agree with your conclusion that systems which have such a high impact on the societal, business and personal levels of human existence should be at least somewhat understandable to the side most affected, i.e., people. One of the questions asked most often in the context of autonomous cars is: who would be to blame in case of an unfortunate accident? Of course, the car itself, or the AI responsible for driving it, cannot be held liable in a court of law. So who is to blame? The manufacturer? The programmers behind the AI? In the case of deep learning algorithms, even they do not understand the decision-making process guiding the AI’s actions.
    In my opinion, deep learning offers incredible possibilities in many aspects of human life. AI can help detect cancer cells faster, recognize symptoms of diseases, provide valuable business insights and much, much more. However, as people do not fully understand the logic of the AI’s decision-making, a human factor should always be included in the equation: doctors should supervise all medical diagnoses suggested by machine learning algorithms, managers should verify the soundness of the insights provided, and so on. In the case of autonomous cars, deep learning shows incredible performance when it comes to image analysis, for example detecting a pedestrian on the road. A more conscientious approach is the one followed by Drive.ai, a startup developing autonomous cars. They use artificial intelligence for image processing and decision-making, but they also implement some rules and human knowledge to make it safer (Ackerman, 2017). Furthermore, they devised a couple of tests to make sure they are confident about how the system works. A minimal sketch of this “human in the loop” idea follows after the references below.
    Summing up, I think that deep learning and artificial intelligence are incredible technologies which should be further investigated and developed. But people cannot forget to include the human factor in the decision-making process, as the technology’s ultimate aim is to guide us, humans.
    References:
    Ackerman, E. (2017). How Drive.ai Is Mastering Autonomous Driving With Deep Learning, [online] IEEE Spectrum. Available at: https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-driveai-is-mastering-autonomous-driving-with-deep-learning
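
    A minimal, hypothetical sketch of that “human in the loop” idea (the classify() stub, the threshold and the labels are all assumptions for illustration, not Drive.ai’s actual design): the model only acts on its own above a confidence cut-off, and every other case is deferred to a person.

    ```python
    # Hypothetical human-in-the-loop gate: confident predictions are acted on
    # automatically, uncertain ones are escalated to a human reviewer.
    CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off, tuned per application

    def classify(case):
        """Stand-in for a deep learning model: returns (label, confidence)."""
        return "pedestrian", 0.81  # hypothetical output

    def decide(case):
        label, confidence = classify(case)
        if confidence >= CONFIDENCE_THRESHOLD:
            return label, "acted on automatically"
        # Low confidence: defer to a doctor, manager or safety driver.
        return label, "escalated to a human reviewer"

    print(decide({"frame": "camera_001.png"}))
    ```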
