Can We Trust Deep Learning AI? Or is it a Black Box?

27 September 2018

A highly interesting article, “Deep Learning Will Radically Change the Ways We Interact With Technology”, published by the Harvard Business Review, illustrates how deep learning works: through layers upon layers of neural networks that can quickly recognize patterns and ultimately make decisions, much like the neocortex of our own brain. Since we do not know exactly how our own brain works, I was intrigued to find out whether we know more about how deep learning algorithms arrive at their own decisions and patterns. Apparently not: ‘no one really knows how the most advanced algorithms do what they do. That could be a problem’, writes Knight, senior AI editor at MIT Technology Review. The question thus becomes: should we take a leap of faith and use deep learning algorithms in our daily lives (both at home and at work) without understanding their process? Google, for example, has tried to shed light on the decision-making process through its DeepDream project. Unfortunately, whilst some of these experiments have produced interesting results, they all remain very superficial, because a lot of context is lost along the way and not all factors are taken into account.
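To make the “layers and layers” idea concrete, here is a minimal sketch of a tiny feedforward network in plain Python. It is purely illustrative (the weights are random and untrained, nothing like a real production system): each layer transforms the output of the previous one, and the final “decision” is just the index of the highest output score. The black-box problem is visible even at this toy scale: the intermediate values h1 and h2 carry no human-readable meaning.

```python
import random

random.seed(0)

def relu(x):
    # Standard non-linearity: negative values are clipped to zero.
    return [max(0.0, v) for v in x]

def dense(x, weights):
    # One fully connected layer: each output neuron is a weighted
    # sum of all inputs. `weights` is one weight vector per neuron.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def rand_layer(n_in, n_out):
    # Random, untrained weights, purely for illustration.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

# 4 inputs -> 8 hidden -> 8 hidden -> 2 output scores
W1, W2, W3 = rand_layer(4, 8), rand_layer(8, 8), rand_layer(8, 2)

def forward(x):
    h1 = relu(dense(x, W1))   # layer 1: low-level patterns
    h2 = relu(dense(h1, W2))  # layer 2: combinations of those patterns
    return dense(h2, W3)      # output layer: two decision scores

scores = forward([0.5, -0.2, 0.1, 0.9])
decision = scores.index(max(scores))  # the "decision" is the top score
print(decision)
```

Real systems stack dozens of such layers with millions of learned weights, which is exactly why tracing how any single decision was reached is so hard.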

For me it’s hard to decide whether we should take that leap of faith and put our trust in deep learning AI. Take, for example, an AI named Deep Patient, developed in 2015 at a hospital in New York, which was fed over 700,000 patient records for the purpose of producing accurate diagnoses of certain diseases, including liver cirrhosis and various cancers. To date, Deep Patient has become by far the best predictor. To the researchers’ surprise, however, it was also able to identify, with great accuracy, the onset of schizophrenia (and other psychiatric disorders), which was never the intention. This is mind-boggling, since the onset of schizophrenia is notoriously difficult for the medical community to predict. Whilst putting our faith in Deep Patient can be very valuable, since preemptive treatment can begin right before the onset, blindly trusting a decision-making process we cannot comprehend can lead to high-stakes errors, including the prescription of antipsychotics to patients who would never have developed a psychotic disorder. As cognitive scientist Dennett puts it, “as long as the AI cannot explain what it is doing better than we can, we should not trust it.” I believe it’s an important topic to reflect on, since for us to benefit from deep learning we might need to hand over some of our power by fully trusting something we don’t understand.

References
Singh, A., 2017. Deep Learning Will Radically Change the Ways We Interact With Technology. Harvard Business Review. Retrieved from https://hbr.org/2017/01/deep-learning-will-radically-change-the-ways-we-interact-with-technology
Knight, W., 2017. The Dark Secret at the Heart of AI. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/


1 thought on “Can We Trust Deep Learning AI? Or is it a Black Box?”

  1. Thank you very much for sharing your thoughts on the trustworthiness of deep learning.

    It is indeed interesting to assess how deep learning works. Especially the quote from Dennett (“as long as the AI cannot explain what it is doing better than we can, we should not trust it”) motivates critical thinking.

    In my opinion, deep learning is semi-trustworthy. My reasoning goes back to an old saying: “A student can only be as good as his teacher.” The deep learning knowledge-acquisition process is as follows: rather than following the old teacher-student schema of classical machine learning, deep learning observes many versions of one case and identifies patterns itself. Take image recognition, for example: these patterns can be based on so-called voxels. Simply put, a voxel is a three-dimensional version of a pixel. When analysing an image not only pixel by pixel but voxel by voxel, a machine is able to identify far more specific patterns than human observation can, patterns which might not even be visible to humans. Moreover, deep learning is nourished by several thousand versions of one case; it draws on a background that exceeds the usual sourcing capabilities of a human. In this regard, one can conclude that deep learning is indeed able to produce very concrete and trustworthy pattern recognition.
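    The pixel-versus-voxel distinction above can be sketched in a few lines of Python. This is a toy illustration only (real volumetric data uses array libraries, not nested lists): a pixel is addressed by two coordinates, a voxel by three, so a volume of the same side length holds far more measurement points than a flat image.

```python
# A 4x4 grayscale image: one intensity value per (row, col) pixel.
pixels = [[0] * 4 for _ in range(4)]
pixels[1][2] = 255  # one bright pixel, located in the 2D plane

# A 4x4x4 volume: one intensity value per (x, y, z) voxel,
# adding a depth dimension to the same idea.
voxels = [[[0] * 4 for _ in range(4)] for _ in range(4)]
voxels[1][2][3] = 255  # one bright voxel, now localized in depth too

n_pixels = len(pixels) * len(pixels[0])
n_voxels = len(voxels) * len(voxels[0]) * len(voxels[0][0])
print(n_pixels)  # 16
print(n_voxels)  # 64
```

    The extra dimension is what lets a model pick up patterns that a flat, pixel-by-pixel view would miss.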

    However, one major pitfall lies in the very nature of humans: mistrust, though a healthy mistrust in this case. Humans are not yet able to implement equally good control mechanisms, which results in us relying on technologies that are too complex and not fully understood. Hence, whenever mistakes occur, there is a great risk that they are taken as false positives or even false negatives. Given the nature of the pattern recognition required, this might carry severe consequences.

    Thus, in my opinion, deep learning is semi-trustworthy, with an urgent need for checking mechanisms.
