‘Dueling AI’: The Path to Unsupervised Learning?

14 October 2018

“What an AI cannot create, it does not understand”, states Ian Goodfellow, a top AI researcher currently working at Google. He presently studies ‘generative models’: AIs that can create or generate real-world data such as sounds and images. He explains the intent of his research by saying, “If an AI can imagine the world in realistic detail—learn how to imagine realistic images and realistic sounds—this encourages the AI to learn about the structure of the world that actually exists”. Ultimately, he means to build an AI that understands what it sees. To reach this goal, Goodfellow aims to make AI smarter through the use of AI itself. How? Through a technique he calls “Generative Adversarial Networks” (GANs), or what we may call ‘dueling AI’. The idea is actually quite straightforward: imagine two AIs, where one (the creator) generates sounds or images that look completely realistic, at least to the human eye, and a second AI (the evaluator) judges whether the generated sound or image is fake. The two form a feedback loop: the evaluator initially flags the creator’s output as fake, which pushes the creator to learn which features gave it away and to try again, until its creations are judged real by the evaluator.
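To make the feedback loop concrete, here is a minimal sketch of the idea in PyTorch (my own toy construction, not Goodfellow’s actual setup): the creator learns to mimic samples from a simple one-dimensional target distribution, while the evaluator learns to separate real samples from generated ones.

```python
import torch
import torch.nn as nn

# Creator: maps random noise (8 numbers) to a single "sample".
creator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Evaluator: maps a sample to a probability that it is real.
evaluator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

c_opt = torch.optim.Adam(creator.parameters(), lr=1e-3)
e_opt = torch.optim.Adam(evaluator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0              # "real" data: N(3, 1), our toy target
    fake = creator(torch.randn(64, 8)).detach()  # creator's current attempts

    # Train the evaluator: label real samples 1 and generated samples 0.
    e_loss = bce(evaluator(real), torch.ones(64, 1)) + \
             bce(evaluator(fake), torch.zeros(64, 1))
    e_opt.zero_grad(); e_loss.backward(); e_opt.step()

    # Train the creator: try to make the evaluator say "real" (1) on fakes.
    fake = creator(torch.randn(64, 8))
    c_loss = bce(evaluator(fake), torch.ones(64, 1))
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

# After training, the creator's samples should cluster near the real mean (3.0).
print(creator(torch.randn(1000, 8)).mean().item())
```

In the full-scale versions Goodfellow works on, the samples are images or sounds rather than single numbers, but the duel between creator and evaluator is the same.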

Essentially, what Goodfellow may have initiated is the evolution of machine learning (which we read about and discussed in the 2nd week of the course) from supervised learning towards unsupervised learning. It occurred to me: is it possible that this is the initial push towards the independence of artificial intelligence, in all its forms, from us? And what will the aftermath be?

In business terms it could clearly improve service and customer satisfaction, since it can increase the reliability of AI immensely. It might even help overcome issues of privacy and confidentiality. In healthcare, for example, an AI that incorporates GAN technology could construct ‘fake’ patient records that cannot be distinguished from real ones, which another AI could then use to improve the diagnosis and treatment of patients without ever accessing actual patient data.
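As a toy illustration of that privacy idea (my own construction with hypothetical values, not from the article), a downstream diagnostic model could be trained on synthetic records alone:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend a generative model, trained on real private records, learned these
# statistics; here we simply hard-code plausible blood-pressure values.
healthy = rng.normal(loc=[120, 80], scale=5, size=(500, 2))  # systolic, diastolic
sick = rng.normal(loc=[150, 95], scale=5, size=(500, 2))

X = np.vstack([healthy, sick])        # 1,000 synthetic patient records
y = np.array([0] * 500 + [1] * 500)   # 0 = healthy, 1 = sick

clf = LogisticRegression().fit(X, y)  # trained without touching a real record
print(clf.predict([[145, 93]]))       # diagnose a new, real patient
```

The point of the sketch is only the workflow: the real records stay private, and only the synthetic stand-ins ever leave the hospital.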

To me it’s quite an incredible concept with immense applicability in real-life business situations, and it could even ease existing flaws that currently limit the use of AI, such as privacy. I would be intrigued to hear your ideas on its application, or perhaps the potential pitfalls of its usefulness and reliability.

References:
Google’s Dueling Neural Networks Spar to Get Smarter, No Humans Required. Wired, 2017. https://www.wired.com/2017/04/googles-dueling-neural-networks-spar-get-smarter-no-humans-required/
Three Breakthroughs That Will Disrupt the Tech World in 2019. Forbes, 2018. https://www.forbes.com/sites/forbestechcouncil/2018/07/18/three-breakthroughs-that-will-disrupt-the-tech-world-in-2019/#67acadf31f87

Can We Trust Deep Learning AI? Or is it a Black Box?

27 September 2018

A highly interesting article, “Deep Learning Will Radically Change the Ways We Interact With Technology”, published by the Harvard Business Review, illustrates how deep learning works: through layers and layers of neural networks that can quickly recognize patterns and ultimately make decisions, much like the neocortex of our own brain. Since we do not know exactly how our own brain works, I was intrigued to see whether we know more about how deep learning algorithms make their decisions and find their patterns. Ultimately, ‘no one really knows how the most advanced algorithms do what they do. That could be a problem’, claims Will Knight, MIT Technology Review’s senior editor for AI. The question thus becomes: should we take a leap of faith and use deep learning algorithms in our daily lives (both at home and at work) without understanding their process? Google, for example, has tried to illuminate the decision-making process through its Deep Dream initiative. Unfortunately, whilst some of these tests have brought forth interesting data, they all remain very superficial, because a lot of context is lost along the way and not all factors are taken into account.
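To see why the process is so opaque, here is a small sketch (my own, assuming PyTorch, not tied to any system from the article): we can inspect every intermediate activation of a toy ‘deep’ network, yet the raw numbers still do not explain the decision.

```python
import torch
import torch.nn as nn

# A toy three-layer network: each layer transforms the previous layer's output.
layers = nn.ModuleList([
    nn.Sequential(nn.Linear(4, 8), nn.ReLU()),  # layer 1: low-level patterns
    nn.Sequential(nn.Linear(8, 8), nn.ReLU()),  # layer 2: combinations of those
    nn.Linear(8, 2),                            # layer 3: final decision scores
])

x = torch.randn(1, 4)  # one input with 4 features
for i, layer in enumerate(layers, start=1):
    x = layer(x)
    print(f"layer {i} activations:", x.detach().numpy().round(2))
# Every number above is visible, but none of them says which input
# features drove the final scores, or why: the black-box problem.
```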

For me it’s hard to decide whether we should or should not take that leap of faith and put our trust in deep learning AI. For example, an AI named Deep Patient, developed in 2015 at a hospital in New York, was fed over 700,000 patient records for the purpose of developing accurate diagnoses of certain diseases, including liver cirrhosis and various cancers. To date, Deep Patient has become by far the hospital’s best predictor. To the researchers’ surprise, however, Deep Patient was also able to identify, with great accuracy, the onset of schizophrenia (and other psychiatric disorders), which was never the intention. It’s mind-boggling, since the onset of schizophrenia is notoriously difficult for the medical community to predict. Whilst putting our faith in Deep Patient can be very valuable, since preemptive treatment can then commence before onset, blindly trusting a decision-making process we cannot comprehend can lead to high-stakes errors, including the prescription of antipsychotics to patients who would never have developed psychotic disorders. As the cognitive scientist Daniel Dennett puts it, as long as an AI cannot explain what it is doing better than we can, we should not trust it. I believe it’s an important topic to reflect on, since for us to benefit from this technology we might need to hand over some of our power by fully trusting something we do not understand.

References
Singh, A., 2017. Deep Learning Will Radically Change the Ways We Interact With Technology. Harvard Business Review. Retrieved from https://hbr.org/2017/01/deep-learning-will-radically-change-the-ways-we-interact-with-technology
Knight, W., 2017. The Dark Secret at the Heart of AI. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
