Explainable AI – Understanding how Artificial Intelligence “Thinks”

7 October 2018


AI is prone to bias in its data analysis, which can have potentially fatal consequences. An algorithm meant to predict the degree of care a hospital patient should receive decided that patients with asthma belonged in the lowest-risk group, because in the data set the computer was analysing, those patients had received better care (Feudtner et al., 2009).

 

Avoiding Mistakes

In most cases, AI operates as a black box: it is impossible for humans to follow the reasoning by which the system arrived at a particular outcome. The field of explainable AI is concerned with making AI give reasons for its decisions, turning the black box into a glass box. The main problem in this field is the trade-off between complexity, where we do not know what the AI is doing, and simplicity, which reduces the functionality of the AI (PricewaterhouseCoopers, 2018). Researchers are trying to overcome this by creating AI that reads AI: systems that take the long, machine-readable output of another model and translate it back into information that is understandable to a human (Gershgorn, 2016).
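To make that idea more concrete, one simple technique in this spirit is the global surrogate model: a small, human-readable model is trained to imitate a complex black-box model, so that its rules approximate what the black box is doing. The sketch below is illustrative only; the scikit-learn models and dataset are assumptions, not the systems discussed in this post.

```python
# A minimal sketch of a global surrogate, assuming scikit-learn is available.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# The "black box": an ensemble whose internal reasoning is hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The "glass box": a shallow tree fitted to the black box's *predictions*
# rather than the true labels, so it describes what the black box does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules that approximate the black box's decision logic.
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))
```

The depth of the surrogate tree is exactly the complexity-versus-simplicity trade-off described above: a deeper tree imitates the black box more faithfully but becomes harder for a human to read.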

[Figure: illustration of explainable AI]

Source: https://towardsdatascience.com/explainable-ai-the-data-scientists-new-challenge-f7cac935a5b4

 

What is it good for?

Apart from helping to make systems less prone to bias, understanding the reasoning of AI might also help to build better AI, since it is often very hard for programmers to figure out why an AI is not performing as intended (Gershgorn, 2017).
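As a small illustration of that kind of debugging, the sketch below uses permutation feature importance: shuffle one input feature at a time and measure how much held-out accuracy drops, which hints at which inputs the model actually relies on. The dataset and model are assumptions chosen only for the example.

```python
# A minimal sketch of permutation feature importance with scikit-learn.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Which features, when shuffled, hurt held-out accuracy the most?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
names = load_wine().feature_names
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```

If a feature the programmer expected to matter scores near zero, or a feature that should be irrelevant dominates, that is a concrete lead on why the model is misbehaving.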

Understanding why an AI makes its decisions would not only enable us to find logical mistakes in the “reasoning” of the AI, but also give us a tool to learn ourselves. Think about AI used to diagnose heart disease based on MRI scans. Understanding where the AI is “looking” in the scans might enable cardiologists and radiologists to find new manifestations of a disease and ultimately improve human knowledge.
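One family of tools for seeing where an image model is “looking” is the saliency map. The sketch below computes a simple gradient-based saliency map in PyTorch; the pretrained ResNet and the random placeholder image are assumptions standing in for a real diagnostic model and scan.

```python
# A minimal sketch of a gradient-based saliency map (assumes PyTorch and
# torchvision >= 0.13; the pretrained weights are downloaded on first use).
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

# Placeholder input standing in for a preprocessed scan.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Gradient of the top class score with respect to the input pixels.
score = model(image).max()
score.backward()

# High absolute gradient = pixels with the most influence on the decision.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape, saliency.max().item())
```

Overlaying such a map on the original scan is what would let a radiologist check whether the model attends to clinically meaningful regions or to artefacts.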

 

The next Holy Grail of AI

To me, developing explainable AI that works at large scale would mark the next Holy Grail in this area, and even beyond it. Being able to combine the analytical skills of AI with human reasoning and our capacity for cross-functional thinking, by “exchanging thoughts” with AI, would give mankind the most powerful tool of the 21st century.

 

References: 

Diop, M. D. (2018, June 14). Explainable AI: The data scientists’ new challenge – Towards Data Science. Retrieved October 7, 2018, from https://towardsdatascience.com/explainable-ai-the-data-scientists-new-challenge-f7cac935a5b4
Feudtner, C., Levin, J. E., Srivastava, R., Goodman, D. M., Slonim, A. D., Sharma, V., . . . Hall, M. (2009). How well can hospital readmission be predicted in a cohort of hospitalized children? A retrospective, multicenter study. Pediatrics, 123(1), 286-293. doi:10.1542/peds.2007-3395
Gershgorn, D. (2016, December 20). We don’t understand how AI make most decisions, so now algorithms are explaining themselves. Retrieved October 7, 2018, from https://qz.com/865357/we-dont-understand-how-ai-make-most-decisions-so-now-algorithms-are-explaining-themselves/
Gershgorn, D. (2017, December 18). AI is now so complex its creators can’t trust why it makes decisions. Retrieved October 7, 2018, from https://qz.com/1146753/ai-is-now-so-complex-its-creators-cant-trust-why-it-makes-decisions/
PricewaterhouseCoopers. (2018). Explainable AI. Retrieved October 7, 2018, from https://www.pwc.co.uk/services/audit-assurance/risk-assurance/services/technology-risk/technology-risk-insights/explainable-ai.html
Sample, I. (2017, November 05). Computer says no: Why making AIs fair, accountable and transparent is crucial. Retrieved October 7, 2018, from https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial

 
