Is Explainable AI (XAI) Needed?

14 October 2019


Imagine a self-driving car knocks down and kills a pedestrian. Who is to blame, and how can such an accident be prevented in the future?

Such questions require artificial intelligence (AI) models to be interpretable. However, when an AI makes a decision, we are usually unable to understand exactly how it arrived at that decision and why it chose that particular outcome (Schmelzer, 2019). This problem is known as the black box of AI and applies especially to the most popular algorithms in use today (Schmelzer, 2019). Because of this decision-making problem, a new field called Explainable AI (XAI) has emerged (Ditto, 2019). XAI aims to solve the black box problem by enabling humans to understand how an AI system came to a specific decision (Schmelzer, 2019). Specifically, it seeks to explain how the parties who design and use the system are affected, how data sources and results are used, and how inputs lead to outputs (Kizrak, 2019). However, there are different opinions regarding the necessity of XAI. So, what are the arguments for and against it?

One reason for the use of XAI is accountability. When it is known how and why an algorithm arrives at a specific decision, the algorithm becomes accountable for its actions and can be improved if, for example, it becomes too biased (van Rijmenam, 2019). A second reason is auditability: it becomes possible to test and refine the algorithms more accurately and to prevent future failures (Ditto, 2019).

On the other hand, researchers and technology companies have so far given XAI little attention, focusing on performance rather than interpretability (Kizrak, 2019). One argument against XAI is that the models are simply too complex to explain. For example, the most popular high-performing AI models have around 100 million parameters (Paudyal, 2019). How could such a model possibly be explained? Making AI explainable therefore means weighing a trade-off between performance and explainability. Furthermore, it can be argued that most human thinking and decision-making happens unconsciously (Paudyal, 2019), which would mean that we humans cannot explain all of our decisions either.
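To make that trade-off a little more concrete, here is a minimal sketch of my own (the dataset, models and settings are illustrative assumptions, not taken from the cited sources) comparing a small interpretable model with a more complex one on the same data:

```python
# Minimal sketch of the performance vs. interpretability trade-off:
# a fully interpretable model is compared against a more complex one.
# Dataset and model choices are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: each feature gets one coefficient a human can inspect.
simple = LogisticRegression(max_iter=5000)
# Complex: hundreds of trees, much harder to explain decision by decision.
complex_model = GradientBoostingClassifier(n_estimators=300, random_state=0)

for name, model in [("logistic regression", simple),
                    ("gradient boosting", complex_model)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```

On a small, well-behaved dataset like this the gap may be tiny; the point is only that the interpretable model exposes its reasoning (its coefficients), while the complex one does not.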

In my opinion, black box models cannot be avoided, because we want to tackle problems that are very complex and require huge amounts of data, and therefore models with millions of parameters. While I do see some problems with current AI systems (e.g. ethics, legal accountability), I would still rather trust an AI system that statistically performs nearly faultlessly than an interpretable AI system with severe performance issues.

What do you think about explainable AI? Is there a need for it? Is it possible to implement XAI without the performance vs. interpretability trade-off?

 

References

Ditto. (2019). The death of black box AI: Why we need to focus on Explainable AI instead. Retrieved from https://www.ditto.ai/blog/black-box-ai-vs-explainable-ai

Kizrak, M. A. (2019). What Is Explainable Artificial Intelligence and Is It Needed? Retrieved from https://interestingengineering.com/what-is-explainable-artificial-intelligence-and-is-it-needed

Paudyal, P. (2019). Should AI explain itself? or should we design Explainable AI so that it doesn’t have to. Retrieved from https://towardsdatascience.com/should-ai-explain-itself-or-should-we-design-explainable-ai-so-that-it-doesnt-have-to-90e75bb6089e

Schmelzer, R. (2019). Understanding Explainable AI. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#1872ea0c7c9e

Van Rijmenam, M. (2019). Algorithms are Black Boxes, That is Why We Need Explainable AI. Retrieved from https://medium.com/@markvanrijmenam/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438


Explainable AI – Understanding how Artificial Intelligence “Thinks”

7 October 2018


AI is prone to bias in its data analysis, which can have potentially fatal consequences. In 2017, an algorithm meant to predict the level of care a hospital patient should receive placed patients with asthma in the lowest-risk group, because the asthma patients in the data set the computer was analysing had received better care (Feudtner et al., 2009).

 

Avoiding Mistakes

In most cases, AI operates as a black box, making it impossible for humans to follow the reasoning by which the AI came to a particular outcome. The field of explainable AI is concerned with AI giving reasons for its decisions, and hence with moving from a black box to a glass box. The main problem in this field is the trade-off between complexity, i.e. not knowing what the AI is doing, and simplicity, i.e. reducing the functionality of the AI (PricewaterhouseCoopers, 2018). Researchers are trying to overcome this by creating AI that reads AI: looking at long, machine-readable data and translating it back into information that a human can understand (Gershgorn, 2016).

Figure source: https://towardsdatascience.com/explainable-ai-the-data-scientists-new-challenge-f7cac935a5b4
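One common way to approximate such a "glass box" view is to train a small surrogate model that imitates the predictions of a black-box model and can itself be read by a human. The sketch below is my own illustration under assumed data and model choices, not a method described in the cited sources:

```python
# Minimal "glass box" surrogate sketch: a small decision tree is trained to
# mimic a complex black-box model's predictions, so its rules can be read.
# Data, models and parameters are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "black box": a large ensemble whose internal reasoning is hard to follow.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true
# labels, so it approximates how the black box behaves, not the task itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the simple model imitate the complex one?
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable if/else rules
```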

 

What is it good for?

Apart from making systems less prone to bias, understanding the reasoning of AI might also help to build better AI, since it is often very hard for programmers to figure out why an AI is not performing as intended (Gershgorn, 2017).

Understanding why an AI makes its decisions would not only enable us to find logical mistakes in the AI's "reasoning", but would also give us a tool to learn ourselves. Think of AI used to diagnose heart disease from MRI scans: understanding where the AI is "looking" in the scans might enable cardiologists and radiologists to find new manifestations of a disease and ultimately improve human knowledge.
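As a rough illustration of how "where the AI is looking" can be made visible, here is a minimal occlusion-sensitivity sketch; the model, image and parameters below are placeholder assumptions of mine, not part of any cited work:

```python
# Occlusion sensitivity sketch: slide a blank patch over an image and record
# how much the predicted probability drops. Large drops mark regions the
# model relied on. Model and image below are toy placeholders.
import numpy as np

def occlusion_map(model_predict, image, patch=8, stride=8, baseline=0.0):
    """model_predict maps a 2D image (H, W) to a single probability-like score."""
    h, w = image.shape
    original = model_predict(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # hide this region
            # A large drop means the hidden region mattered for the prediction.
            heatmap[i, j] = original - model_predict(occluded)
    return heatmap

# Toy stand-in for a diagnostic model: its "probability" rises with the mean
# intensity in the image centre. A real use would pass a trained classifier.
def toy_model(img):
    return float(img[24:40, 24:40].mean())

scan = np.random.rand(64, 64)
print(occlusion_map(toy_model, scan, patch=16, stride=16))
```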

 

The next Holy Grail of AI

To me, developing explainable AI that works at large scale would mark the next Holy Grail in this area, and even beyond it. Being able to combine the analytical power of AI with human reasoning and our capacity for cross-functional thinking, effectively "exchanging thoughts" with AI, would give mankind the most powerful tool of the 21st century.

 

References: 

Diop, M. D. (2018, June 14). Explainable AI: The data scientists’ new challenge – Towards Data Science. Retrieved October 7, 2018, from https://towardsdatascience.com/explainable-ai-the-data-scientists-new-challenge-f7cac935a5b4
Feudtner, C., Levin, J. E., Srivastava, R., Goodman, D. M., Slonim, A. D., Sharma, V., . . . Hall, M. (2009). How Well Can Hospital Readmission Be Predicted in a Cohort of Hospitalized Children? A Retrospective, Multicenter Study. Pediatrics, 123(1), 286-293. doi:10.1542/peds.2007-3395
Gershgorn, D. (2016, December 20). We don’t understand how AI make most decisions, so now algorithms are explaining themselves. Retrieved October 7, 2018, from https://qz.com/865357/we-dont-understand-how-ai-make-most-decisions-so-now-algorithms-are-explaining-themselves/
Gershgorn, D. (2017, December 18). AI is now so complex its creators can’t trust why it makes decisions. Retrieved October 7, 2018, from https://qz.com/1146753/ai-is-now-so-complex-its-creators-cant-trust-why-it-makes-decisions/
PricewaterhouseCoopers. (2018). Explainable AI. Retrieved October 7, 2018, from https://www.pwc.co.uk/services/audit-assurance/risk-assurance/services/technology-risk/technology-risk-insights/explainable-ai.html
Sample, I. (2017, November 05). Computer says no: Why making AIs fair, accountable and transparent is crucial. Retrieved October 7, 2018, from https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial

 
