2001: A Space Odyssey

8 October 2020


I’m Sorry, Dave. I’m Afraid I Can’t Do That.

2001: A Space Odyssey. Stanley Kubrick’s masterpiece is regarded as one of the most influential and important artworks of the 20th century, not just in cinema history but in art in general. 2001 explores mankind’s relationship to evolution, existentialism, space, technology, and artificial intelligence. Fifty-two years after the film’s release, some of its themes are more relevant than ever. Let’s explore.

Kubrick’s vision

Although science fiction films typically underestimate how long it will take for technology to catch up with their visions of the future, much of Kubrick’s 1968 vision has since come true. It was Kubrick’s goal to depict space travel, personal electronic devices, and computer technology in the most feasible, practical and believable way. He consulted NASA engineers involved with the Apollo program, which succeeded in putting a man on the moon just one year after the film’s release, when Neil Armstrong took the first steps on the lunar surface in 1969.

2001 is closer to you than you realise. Whether Kubrick succeeded in his quest to depict his 1968 technological vision in a feasible, practical and believable way becomes clearer when looking at just a few of the many examples:

iPad

More than 40 years before Steve Jobs launched the iPad, Kubrick already offered viewers a glimpse of such a device. He correctly envisioned how a sophisticated tablet would be seamlessly incorporated into day-to-day life. Samsung even invoked the film when defending itself in a 2011 lawsuit against Apple, arguing that its Galaxy tablet wasn’t a copy of the iPad, but rather inspired by 2001.

iPhone

The undeniable source of inspiration for Apple’s iPhone: the Monolith from 2001. The comparison is not only visually striking. Just like the introduction of the Monolith in the film, the iPhone allowed humankind to evolve to the next level in terms of connectivity and technology. Both items are used for telecommunications and are activated by human touch. Coincidence?

Artificial Intelligence and the Blackbox

Absolutely the star of the film: the HAL 9000. This super AI has never made a mistake and is therefore entrusted with controlling this important mission. During Wednesday’s open lecture, Ting Li touched on the “black box” problem in AI: when we cannot see how a system reaches its decisions, it may take unexpected actions. This is considered one of the biggest fears of companies using AI, and the challenge and its possible consequences are perfectly displayed in 2001: A Space Odyssey. Without giving away any spoilers, I hope that AI developers take a close look at the film and learn from what Kubrick envisioned.

Tip: ask your Siri or Alexa the following question: “Open the pod bay doors”…

Conclusion
It is safe to say 2001 is one of the most important films ever created. In my eyes it is an absolute masterpiece, and that is before even taking into account the philosophical aspects of the story or the innovative film techniques used; the envisioned technologies on display are impressive enough on their own. It is mind-blowing to realise that this film was made more than 50 years ago, before anyone had set foot on the moon, yet Kubrick envisioned a world filled with technologies that are now all around us. It is also hard to deny how much this film has influenced tech-world leaders, from Steve Jobs to Elon Musk. Hopefully those leaders use the film to improve their own products, both in design and in structure.

Have you seen 2001?
Did you manage to see beyond its slower pace?
Any technology-themed films you can recommend?
I am interested to hear your thoughts.




Is Explainable AI (XAI) Needed?

14 October 2019


Imagine a self-driving car knocks down and kills a pedestrian. Who is to blame, and how can such accidents be prevented in the future?

Such questions require artificial intelligence (AI) models to be interpretable. However, when an AI makes a decision, we are usually unable to understand exactly how it arrived at that decision and why it chose that particular outcome (Schmelzer, 2019). This problem is known as the black box of AI, and it applies especially to the most popular algorithms today (Schmelzer, 2019). In response to this opacity, a new field called Explainable AI (XAI) has emerged (Ditto, 2019). XAI aims to solve the black box problem by enabling humans to understand how an AI system came to a specific decision (Schmelzer, 2019). Specifically, it seeks to explain how the parties who design and use the system are affected, how data sources and results are used, and how inputs lead to outputs (Kizrak, 2019). However, opinions differ on the necessity of XAI. So, what are the arguments for and against it?

One reason for the use of XAI is accountability. When it is known how and why an algorithm arrives at a specific decision, the algorithm can be held accountable for its actions and improved if, for example, it becomes too biased (van Rijmenam, 2019). A second reason is auditability, meaning that the algorithms can be tested and refined more accurately, preventing future failures (Ditto, 2019).

On the other hand, many researchers and technology companies give XAI little attention, focusing on performance rather than interpretability (Kizrak, 2019). One argument against XAI is that the best models are simply too complex to explain: the most popular high-performing AI models have on the order of 100 million parameters (Paudyal, 2019). How could such a model possibly be explained? Making AI explainable therefore means confronting a trade-off between performance and explainability. Furthermore, it can be argued that most human thinking and decision-making happens unconsciously (Paudyal, 2019), which would mean that we humans cannot explain all of our decisions either.
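To make the trade-off concrete, here is a minimal sketch (my own illustration, not from any of the cited sources) contrasting an interpretable model, whose decision rules can be printed and read, with a black-box ensemble that we can only probe post hoc via permutation feature importance, one common XAI technique. It assumes scikit-learn is installed and uses its bundled breast cancer dataset purely as stand-in data:

```python
# A sketch of interpretable vs. black-box models, using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree whose full rule set
# can be printed and audited by a human.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Black-box model: a random forest. Its internals are not readable,
# but a post-hoc XAI technique (permutation importance) can still
# estimate which inputs drive its predictions.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=5, random_state=0)
top = X.columns[result.importances_mean.argsort()[::-1][:3]]
print("Most influential features for the black box:", list(top))
```

The shallow tree typically scores worse than the forest, which is the performance-versus-explainability trade-off in miniature; the permutation scores tell us *which* inputs matter to the forest, but still not *how* it combines them.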

In my opinion, black box models cannot be avoided, because the problems we want to tackle are very complex and require huge amounts of data, which leads to models with millions of parameters. While I do see problems with current AI systems (e.g. ethics, legal accountability), I would still rather trust an AI system that performs nearly faultlessly than an interpretable AI system with severe performance issues.

What do you think about explainable AI? Is there a need for it? Is it possible to implement XAI without running into the performance vs. interpretability trade-off?


References

Ditto. (2019). The death of black box AI: Why we need to focus on Explainable AI instead. Retrieved from https://www.ditto.ai/blog/black-box-ai-vs-explainable-ai

Kizrak, A. M. (2019). What Is Explainable Artificial Intelligence and Is It Needed? Retrieved from https://interestingengineering.com/what-is-explainable-artificial-intelligence-and-is-it-needed

Paudyal, P. (2019). Should AI explain itself? or should we design Explainable AI so that it doesn’t have to. Retrieved from https://towardsdatascience.com/should-ai-explain-itself-or-should-we-design-explainable-ai-so-that-it-doesnt-have-to-90e75bb6089e

Schmelzer, R. (2019). Understanding Explainable AI. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#1872ea0c7c9e

Van Rijmenam, M. (2019). Algorithms are Black Boxes, That is Why We Need Explainable AI. Retrieved from https://medium.com/@markvanrijmenam/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438
