“Artificial Intelligence is more dangerous than North Korea”

21 October 2017


One of the subjects covered in this course is the phenomenon of Artificial Intelligence (AI). When I discussed this topic with others, I noticed great excitement about it. They certainly have a point, because the possibilities of AI seem endless. During these conversations, however, I was reminded of a tweet by Elon Musk. On August 12th, 2017, he stated that AI poses “vastly more risk than North Korea” (Musk, 2017).

The founder of Tesla is not the only one worried about AI: Bill Gates and Stephen Hawking are ‘terrified’ of it as well. In interviews, Musk has stated that AI could be the cause of a third world war. He therefore co-founded OpenAI, a non-profit organization that focuses on building safe AI. In this blog I will discuss Musk’s statement and show why the future of AI is not just bright and glorious. The ideas presented here draw on the founding principles of OpenAI.

In another statement on AI, Musk holds that AI is a “fundamental existential risk for human civilization” (Sulleyman, 2017). In that interview, he explains that accidents, airplane crashes and bad drugs are harmful to individuals in society, but not to society as a whole. AI, according to Musk, is a potential threat to society as a whole, to all of human civilization.

First of all, it should be clear that Musk is not talking about the kind of AI used today by companies like Uber, Google and Microsoft. Musk focuses on artificial general intelligence: a conscious, extremely intelligent, recursively self-improving entity. That said, some of these existing narrow AI systems already outperform the human brain at specific tasks. Musk, and plenty of others, are concerned that such systems will eventually be able to improve themselves and become superintelligent entities with a free will of their own. In that scenario, mankind loses its power. This may sound like a futuristic nightmare or a science fiction movie, but the threat seems real.

Furthermore, Musk warns us about AI in another way. In 2017, he was one of more than 100 signatories of an open letter calling on the UN to ban lethal autonomous weapons. If such artificially intelligent weapons are developed, armed conflict can be fought at a greater scale than ever before. Besides that, such weapons can be used by terrorists against innocent populations. The consequences would be impossible to foresee. AI may therefore lead to far more violence.

The threat thus consists of two parts. First, an extremely intelligent, self-learning entity could develop a will of its own, causing mankind to lose its power. Second, AI can be used to build weapons capable of enormous violence, with even greater harm if the wrong people get hold of them. These threats are only real if the legal situation around AI does not change. Nowadays, anyone who possesses knowledge of AI can make new developments in this field relatively easily.

Elon Musk also offers a solution to this potential threat. With statements like the title of this blog, he tries to make us, and the legislature, aware of these developments. According to him, the solution is proactive regulation; today, regulation in this field is usually written only after something bad has happened. As mentioned earlier, AI becomes most dangerous when the wrong people are able to work with it. With his statement, Musk therefore tries to draw the attention of people like Trump to a potential threat that is bigger than the ones the U.S. president is dealing with right now. Do you also think that Trump focuses too little on the potential danger of AI? And are you still as positive about AI as you were before?

References:

Finlay, S. (2017, August 18). We Should Be as Scared of Artificial Intelligence as Elon Musk Is. Fortune.

Hern, A. (2017, September 4). Elon Musk says AI could lead to third world war. The Guardian.

Musk, E. (2017, August 12). AI is “vastly more risk than North Korea” [Tweet]. Twitter.

OpenAI. (2015, December 11). Introducing OpenAI. Retrieved from https://blog.openai.com/introducing-openai/

Sulleyman, A. (2017, July 15). Elon Musk: AI is a fundamental risk. Independent.

Vincent, J. (2017, July 17). Elon Musk says we need to regulate AI before it becomes a danger to humanity. The Verge.



1 thought on “Artificial Intelligence is more dangerous than North Korea”

  1. I certainly do agree with you that AI brings a lot of critical and ethical questions to the table. As AI is still a relatively unexplored area, we should start the dialogue about what we want to use AI for, and what we don’t want to use it for. As for your point about AI systems improving themselves and developing a free will, I am a little more sceptical. Of course, we don’t know how the technology will develop in the coming years, but at the moment we are not close to systems with these capabilities. For now, machines only have limited autonomy. An AI system is capable of making decisions on its own, but it is not able to execute actions that do not correspond with what it is programmed to do. As Brad Templeton once put it: ‘A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead’. To me, it does not seem like we will reach this point anytime soon.
