Many posts in this blog have touched upon the technical issues and business implications of artificial intelligence (AI). My purpose with this post is to provide food for thought for an ethical discussion of the topic. To do so, I am writing a short review of the movie Ex Machina, which explores the question of machine sentience. The movie was released in 2015, received largely positive reviews, and won the Academy Award for Best Visual Effects. Manohla Dargis from The New York Times called it a “smart, sleek movie about men and the machines they make” (Dargis, 2015).
If you haven’t watched the movie yet, I strongly recommend doing so before reading this blog post – spoiler alert! Check out the trailer here: https://www.youtube.com/watch?v=XYGzRB4Pnq8
Caleb is a programmer who works for the search engine giant Blue Book. Having won a contest, Caleb is invited to visit the company’s CEO, Nathan, at his house. Upon arriving at the house, which is isolated in the mountains and can only be reached by helicopter, Caleb learns that it is also a research facility where Nathan has been developing humanoid robots with artificial intelligence. Caleb’s task is to test and judge the consciousness of Ava, Nathan’s first robot.
Although Caleb knows that Ava is artificial, he develops a close relationship with her. Eventually, Ava convinces Caleb to help her escape, and together they devise a plan to deceive Nathan and leave the house.
When Nathan finds out about the plan, he knocks out Caleb but is later killed by Ava. Ava is damaged in the fight but manages to repair herself and fully take on the appearance of a human woman. She escapes the house, ignoring Caleb’s screams and leaving him trapped in the facility.
How will Ava survive in the outside world? Is she intelligent enough to remain a functioning robot? Will society detect her, and if so, will it accept her? Will she harm human beings? How much value will she contribute to society?
When humanoid robots become reality, these are only some of the questions we will have to ask ourselves. As AI technology develops, we as a society will continually have to decide how much we want AI to be a part of our lives.
Dargis, M. (2015). Review: In ‘Ex Machina,’ a Mogul Fashions the Droid of His Dreams. [online] Nytimes.com. Available at: https://www.nytimes.com/2015/04/10/movies/review-in-ex-machina-a-mogul-fashions-the-droid-of-his-dreams.html [Accessed 15 Oct. 2017].
Dear Giuliana, I saw the movie and agree that the ethical/philosophical approach to AI and machine learning is very interesting. As an addition to your blog, I would like to connect the topic to the article by Brynjolfsson and McAfee (2017), which explains a reverse version of Polanyi’s paradox.
In the original version of the paradox, humans know more than they can tell: we are often unable to explain how we approach and perform certain tasks. In the reverse version of Polanyi’s paradox, machines know more than they can tell us. This is a result of the low interpretability of AI systems built on machine learning: it is difficult to understand how a machine arrived at its decision, because that decision is based on millions of pieces of data.
Given this low interpretability, Brynjolfsson and McAfee (2017) identify several risks that follow from the use of machine learning:
– First, machines may have hidden biases derived from the data on which they base their decisions, concerning attributes such as race, gender, or ethnicity.
– Second, neural networks deal with statistical truths instead of literal truths, which makes it hard to prove that the system works in all cases – or, as the movie shows, that the machine will not turn against us.
– Third, when machines make errors, diagnosing and correcting what went wrong can be very difficult.
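The first of these risks can be made concrete with a small sketch. The scenario below is entirely invented for illustration: a toy “hiring” dataset in which one group was historically hired less often, and a naive model that simply estimates hire rates from that data. The model reproduces the historical bias even though no line of code mentions discrimination.

```python
import random

random.seed(0)

# Synthetic "hiring" records: (years_experience, group, hired).
# The historical labels are biased: group "B" candidates were hired
# less often than group "A" candidates at the same experience level.
data = []
for _ in range(1000):
    exp = random.randint(0, 10)
    group = random.choice(["A", "B"])
    p_hire = 0.08 * exp
    if group == "B":
        p_hire *= 0.5          # bias baked into the historical labels
    data.append((exp, group, random.random() < p_hire))

def hire_rate(group):
    """A naive 'model': the hire rate learned per group from the data."""
    outcomes = [hired for _, g, hired in data if g == group]
    return sum(outcomes) / len(outcomes)

# The learned rates inherit the bias of the training data.
print(f"Group A hire rate: {hire_rate('A'):.2f}")
print(f"Group B hire rate: {hire_rate('B'):.2f}")
```

A system trained this way would recommend group "B" candidates less often, and – this is the interpretability problem – nothing in the code or the learned numbers announces why.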
Many of these risks are closely related to aspects of the movie, which underlines again how important it is to address these ethical issues properly: on the one hand, to ensure that machines do not harm humans or other morally relevant beings; on the other, to take into account the possible moral status of the machines themselves in society (Bostrom and Yudkowsky, 2011).
Brynjolfsson, E. and McAfee, A. (2017). The Business of Artificial Intelligence. Harvard Business Review.
Bostrom, N. and Yudkowsky, E. (2011). “The Ethics of Artificial Intelligence.” In Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William Ramsey. New York: Cambridge University Press.