Is Artificial Intelligence a Threat For Humanity?

7 October 2016


The movie “Her” is a beautiful example of how Artificial Intelligence (AI) may become part of our future lives. For those who haven’t seen it, the film follows Theodore, a man who develops a relationship with an intelligent computer operating system called Samantha. Just as in the movie, I believe AI can really add something to our lives. Everyone knows the examples of self-driving cars or robot vacuums that make people’s lives easier. In the future, many more convenient applications will be developed to enhance our lives, and the popularity of AI will only grow and grow.

However, many technologies can be used for both good and bad purposes, and AI is no exception. There was a lot of commotion when people heard about so-called “killer robots”: fully autonomous weapons that are able to select and engage targets without human intervention. According to Human Rights Watch, “it is questionable that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity”. Some 36% of people think the rise of AI poses a threat to the long-term survival of humanity. Among them are Stephen Hawking, Bill Gates, and Elon Musk, who all warn of a time when humans will lose control of AI and be enslaved or exterminated by it. The development of self-learning machines in particular frightens them.

In the 1960s, Irving John Good developed the idea of the intelligence explosion. He anticipated that self-improving machines would first become as intelligent as humans, and then exponentially more intelligent. Initially, Good had a romantic view of AI: he believed such machines would save mankind by solving intractable problems, including famine, disease, and war. Later on, he feared that global competition would drive nations to develop superintelligence without safeguards, and eventually he came to believe that this would lead to the extermination of the human race.

The crux of the problem is that we have no idea how to control superintelligent machines. Many people don’t see the threat and assume AI will be harmless. AI scientist Steve Omohundro has studied the nature of AI systems and argues that they will develop basic drives, regardless of the job they are given. They will become self-protective and seek resources to better achieve their goals. If necessary, they will fight us to survive, because they won’t want to be turned off. Omohundro therefore emphasizes that we have to design AI very carefully. You would expect ethics to be paramount for experts developing superintelligence. Unfortunately, this is not the case: most experts are developing products instead of exploring safety and ethics. Budgets for AI are rising, and the field is projected to generate trillions of euros in economic value. Shouldn’t we spend a fraction of that budget on exploring the ethics of autonomous machines, in order to ensure the survival of the human species?

Sources:

  • https://en.wikipedia.org/wiki/Her_(film)
  • https://www.hrw.org/topic/arms/killer-robots
  • http://newsvideo.su/video/3768547
  • http://www.huffingtonpost.com/james-barrat/hawking-gates-artificial-intelligence_b_7008706.html
  • http://io9.gizmodo.com/why-a-superintelligent-machine-may-be-the-last-thing-we-1440091472


1 thought on “Is Artificial Intelligence a Threat For Humanity?”

  1. Thank you for your interesting post. Reading it made me think about the movie I, Robot, which was released in 2004. The movie is about robots that become so intelligent that they start to act independently of their human creators. In 2004 the movie was met with laughter: AI like this was pure sci-fi and could never happen. Now, twelve years later, more and more people are getting involved in the discussion about the dangers of AI.

    Google laid out five challenges facing AI (http://www.dailymail.co.uk/sciencetech/article-3654714/Forget-killer-robots-Google-identifies-five-mundane-challenges-facing-artificial-intelligence-including-annoying.html), one of which is the difficulty of avoiding negative side effects. These side effects could be harmful to humans. If, for example, a baby keeps puking in a room that a robot has to clean, the fastest way to deal with the problem is to remove the baby from the room (or worse); a toy sketch of this incentive problem follows the comment below.

    As AI becomes more and more widely used, we need to keep in mind that ethics must be discussed every step of the way.
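
The “negative side effects” the commenter describes can be made concrete with a small thought experiment. The sketch below is purely hypothetical (the Plan fields, the numbers, and the penalty weight are invented for illustration, not taken from Google’s list): a naive objective that only rewards cleaning speed prefers the harmful shortcut, while an objective with an explicit side-effect penalty prefers the safe plan.

```python
# Hypothetical sketch: why a naive cleaning objective can reward harmful
# shortcuts, and how a side-effect penalty changes the preferred plan.
from dataclasses import dataclass


@dataclass
class Plan:
    name: str
    mess_cleaned: float   # amount of mess the plan removes
    time_taken: float     # minutes the plan takes
    side_effects: float   # disruption caused outside the task (0 = none)


def naive_reward(plan: Plan) -> float:
    # Only cares about cleaning speed: mess removed per minute.
    return plan.mess_cleaned / plan.time_taken


def safer_reward(plan: Plan, penalty_weight: float = 10.0) -> float:
    # Same objective, but changes outside the task are heavily penalized.
    return naive_reward(plan) - penalty_weight * plan.side_effects


plans = [
    Plan("clean around the baby", mess_cleaned=1.0, time_taken=10.0, side_effects=0.0),
    Plan("remove the baby first", mess_cleaned=1.0, time_taken=3.0, side_effects=1.0),
]

print("naive objective picks:   ", max(plans, key=naive_reward).name)  # remove the baby first
print("with side-effect penalty:", max(plans, key=safer_reward).name)  # clean around the baby
```

Of course, a real system cannot simply be handed a side_effects number; quantifying impact is exactly the hard part of the challenge, which is why the ethics discussion the commenter calls for matters.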
