The Future of Artificial Intelligence

3 October 2017


Today, artificial intelligence (AI) is receiving more and more attention, both from a company's perspective, because it helps organizations do things more efficiently, and from a customer's perspective, because people worry about the capabilities of intelligent machines. This fear can be traced back to the old cultural memory of Frankenstein's monster. People fear ultra-intelligence: a machine that can far exceed every human intellectual activity whatsoever. As Floridi (2017) states, "because the design of machines is one of these intellectual activities an ultra-intelligent machine could design an even better machine".

Because of the threat of machines taking over human activities and becoming "evil", it is very important that, when developing artificial intelligence or machine learning, the objective X is very well defined and includes everything we care about. Nick Bostrom gives a good example of this in one of his TED talks, where he brings up the old myth of King Midas (see figure 2). The king wishes that everything he touches turns into gold. However, as Bostrom says in the talk, "he touches his daughter, she turns into gold. He touches his food, it turns into gold". This is not just a metaphor for greed; it also shows what happens if you create a powerful optimization process with an ill-thought-out or badly specified goal (Bostrom, 2015).


Figure 2: King Midas and his daughter

In my opinion, we should definitely be careful about artificial intelligence and the way we control it. The main focus should be on defining the goal of what a machine should do and how it should do it, without putting our own values at risk. In addition, we should find a way of controlling AI and always understand the motivations behind its outcomes.

References

Bostrom, N. (2015) What happens when our computers get smarter than we are? https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are, accessed 2 October 2017.

Floridi, L. (2017) Should we be worried about AI? http://www.sciencefocus.com/article/future/should-we-be-worried-about-ai, accessed 2 October 2017.


1 thought on “The Future of Artificial Intelligence”

  1. Hi Rutger,

    Nice post with some interesting points! Bostrom is great on this topic. I have written about goal congruence and AI as well, and after your Midas example I was left wondering: Should we integrate ethics into future AI, or should we trust the deductive powers of emotionless algorithms to do the right thing?

    A lot of decisions in the human brain are made very rapidly, often within a couple of seconds. Kahneman's book 'Thinking, Fast and Slow' (2011) goes into detail about this. Framing, and even a bit of a stomach ache, can influence our ethical decisions daily. Kahneman even mentions judges giving more severe sentences to criminals right before lunchtime, when they are a bit cranky.

    In my opinion AI is the opportunity to create more ethical consistency than a human could produce, so we should NOT pursue an AI copy of the human brain.

    Would you agree with this or do you think there is merit in modelling our logical flaws in an AI?

    Kind regards,
    Bastiaan
