Is Explainable AI (XAI) Needed?

14 October 2019


Imagine a self-driving car knocks down and kills a pedestrian. Who is to blame, and how can such an accident be prevented in the future?

Such questions require artificial intelligence (AI) models to be interpretable. However, when an AI makes a decision, we are usually unable to understand exactly how it arrived at that decision and why it chose that particular outcome (Schmelzer, 2019). This problem is known as the black box of AI, and it applies especially to the most popular algorithms in use today (Schmelzer, 2019). In response to this problem, a new field called Explainable AI (XAI) has emerged (Ditto, 2019). XAI aims to solve the black box problem by enabling humans to understand how an AI system reached a specific decision (Schmelzer, 2019). Specifically, it seeks to explain how the parties who design and use the system are affected, how data sources and results are used, and how inputs lead to outputs (Kizrak, 2019). However, opinions differ on whether XAI is necessary. So, what are the arguments for and against it?
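To make "explaining how inputs lead to outputs" a little more concrete, here is a minimal sketch of one common model-agnostic technique, permutation feature importance. The dataset, model, and library choices below are my own illustrative assumptions and are not taken from any of the cited sources.

```python
# Minimal sketch: probing how a black-box model's inputs relate to its
# outputs via permutation feature importance (model-agnostic).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only the first 2 carry real signal.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the forest as the "black box" whose decisions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model's output depends heavily on that input.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

A sketch like this does not open the black box itself; it only describes which inputs drive the outputs, which is one of the more modest goals XAI methods pursue.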

One reason for the use of XAI is accountability. When it is known how and why an algorithm arrives at a specific decision, the algorithm can be held accountable for its actions and can be improved if, for example, it turns out to be too biased (van Rijmenam, 2019). A second reason is auditability: it becomes possible to test and refine the algorithms more accurately and to prevent future failures (Ditto, 2019).

On the other hand, many researchers and technology companies pay little attention to XAI and focus on performance rather than interpretability (Kizrak, 2019). One argument against XAI is that modern models are simply too complex to explain. For example, the most popular high-performing AI models have on the order of 100 million parameters (Paudyal, 2019). How could such a model possibly be explained? Making AIs explainable therefore means confronting a trade-off between performance and explainability. Furthermore, it can be argued that most human thinking and decision-making happens unconsciously (Paudyal, 2019), which would mean that we humans cannot explain all of our decisions either.
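The performance-versus-explainability trade-off can be illustrated with a small, hedged sketch: a shallow decision tree whose rules can be printed and audited versus a larger boosted ensemble that usually scores higher but is much harder to inspect. The data and models are assumptions chosen for illustration only; actual numbers will vary.

```python
# Minimal sketch of the performance vs. interpretability trade-off.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: every decision path can be printed and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print(export_text(tree))                       # human-readable rules
print("tree accuracy:", tree.score(X_test, y_test))

# Black-box model: typically more accurate, but built from hundreds of
# trees, so no single readable set of rules describes its behaviour.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_train, y_train)
print("boosting accuracy:", gbm.score(X_test, y_test))
```

On most datasets the ensemble wins on accuracy while the shallow tree wins on readability, which is exactly the tension the critics of XAI point to.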

In my opinion, black box models cannot be avoided, because the problems we want to tackle are very complex, require huge amounts of data, and involve models with millions of parameters. While I do see problems with current AI systems (e.g. ethics, legal accountability), I would still rather trust an AI system that statistically performs nearly faultlessly than an interpretable AI system with severe performance issues.

What do you think about explainable AI? Is there a need for it? Is it possible to implement XAI without the performance vs. interpretability trade-off?

 

References

Ditto. (2019). The death of black box AI: Why we need to focus on Explainable AI instead. Retrieved from https://www.ditto.ai/blog/black-box-ai-vs-explainable-ai

Kizrak, M. A. (2019). What Is Explainable Artificial Intelligence and Is It Needed? Retrieved from https://interestingengineering.com/what-is-explainable-artificial-intelligence-and-is-it-needed

Paudyal, P. (2019). Should AI explain itself? Or should we design Explainable AI so that it doesn't have to. Retrieved from https://towardsdatascience.com/should-ai-explain-itself-or-should-we-design-explainable-ai-so-that-it-doesnt-have-to-90e75bb6089e

Schmelzer, R. (2019). Understanding Explainable AI. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#1872ea0c7c9e

Van Rijmenam, M. (2019). Algorithms are Black Boxes, That is Why We Need Explainable AI. Retrieved from https://medium.com/@markvanrijmenam/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438
