Are we over exaggerating AI?

22 October 2017

People nowadays love to discuss the endless possibilities offered to us by artificial intelligence (AI) and the moral and ethical dilemmas it presents us with. Once seen simply as an interesting topic for science fiction movies (e.g. Blade Runner, Ex Machina), recent developments in AI have led to the belief that superintelligence may not be as far away as we thought. According to an article by the Future of Life Institute (a research and outreach organization working to ‘mitigate existential risks facing humanity’), most AI researchers at the 2015 AI Safety conference in Puerto Rico expected that general AI – the kind that can learn to outperform humans at every cognitive task – could be achieved before 2060 (Future of Life Institute, 2017). Essentially, the conflicting views and the ever-present uncertainty indicate that we never really know what AI will become.

A recent article published by the MIT Technology Review offers an interesting take on the subject of AI. Popular belief is that AI is developing so quickly that ‘robots will take half of today’s jobs in 10 or 20 years’ (Brooks, 2017). The author, Rodney Brooks, believes such claims are ludicrous and sees the hysteria surrounding artificial intelligence as grossly exaggerated. His article, The Seven Deadly Sins of AI Predictions, instead outlines the negative influence that the predictions and discussions around AI could have on our future. He makes the following points:

  1. Overestimating and underestimating

Brooks introduces Amara’s Law, which states, “We tend to overestimate the effect of a technology in the short run, and underestimate the effect in the long run” (Brooks, 2017). Big promises of breakthroughs that fail to deliver in the desired timeframe lead to more hysteria over AI. In the long run, we often say that general AI is centuries away. However, this could be an underestimation, as strides are being made towards general AI regardless of the failure of short-term goals. Brooks uses the next points to describe this worry further.

  2. Imagining magic

The author states that there is a certain problem with the technology we imagine: if it is too far removed from what we understand today, then we cannot know its limitations. People often see future technology as ‘magical’. Of course, as Brooks explains, nothing in the universe is without limit. Certain developments may be very far away, but that does not mean we will never achieve them. In imagining AI as something omniscient and powerful beyond comprehension, we simply add to the exaggerated claims about AI’s potential.

  3. Performance versus competence

Today’s AI systems are still very narrow. When we see a system perform one task well – say, labelling objects in photos – we tend to assume it has the broader competence a human with that skill would have. It does not: performance on a single task tells us little about a system’s general understanding of context.

  4. Suitcase words

Suitcase words are words that carry a variety of meanings. When we describe AI systems as having a ‘learning’ capability, the description can signify very different experiences. Brooks gives the example that learning to write code is significantly different from learning to navigate a city, and that learning the tune of a song is different from learning to eat with chopsticks.

These suitcase words lead people to believe that AI systems can absorb knowledge the way humans do. This warps our understanding of the state of AI and makes for (currently) unrealistic expectations.

  5. Exponentials

Moore’s Law suggests that computing power grows exponentially ‘on a clockwork-like schedule’ (Brooks, 2017), with chip performance doubling at a regular interval. We have come to expect the same from AI systems. Because of the success of deep learning (itself the product of some thirty years of research), people believe that AI system performance will continue to increase exponentially. However, the deep learning breakthrough was an isolated event, so there is no evidence that such development should be expected to continue on schedule.
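To see why exponential expectations are so seductive, here is a minimal back-of-the-envelope sketch (my own illustration, not from Brooks’s article), assuming a hypothetical two-year doubling period:

```python
# Back-of-the-envelope illustration of exponential growth under Moore's Law.
# The two-year doubling period is an assumption for illustration; Brooks's
# article does not give these specific numbers.

def moores_law_factor(years, doubling_period=2):
    """Performance multiplier after `years` of steady exponential doubling."""
    return 2 ** (years / doubling_period)

# Ten years of steady doubling -> 32x; twenty years -> 1024x.
for years in (10, 20):
    print(f"{years} years: {moores_law_factor(years):.0f}x")
```

Even a modest doubling period compounds to a thousandfold gain within two decades, which is why a single breakthrough such as deep learning is so easily mistaken for the start of such a curve.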


  6. Hollywood scenarios

People love to imagine AI systems terrorizing humankind as in sci-fi movies. Superintelligence, however, will not suddenly arrive and attack. Machine development is an iterative process that will evolve slowly over time.

  7. Speed of deployment

The marginal cost of deploying a new set of code is next to zero, which is why software developments are so rapid. This is not, however, applicable to hardware, which requires significant capital investment. For this reason, Brooks states that the hardware side of AI will take far longer than we expect to become embedded in daily life.

Rodney Brooks raises interesting arguments against the popular idea that we should be wary or afraid of AI developments. His article brings to light reasons to be skeptical of the many statistics regarding the disappearance of jobs or the substitution of daily processes. Personally, I lean towards siding with Brooks. I am confident that AI will become an integral part of our lives, but I doubt it will happen as quickly, or to the extent, that people expect.

So, what do you think? Do you agree with Rodney Brooks that the hysteria surrounding AI is greatly exaggerated, or do you believe AI systems will evolve to the point of trying to kill us in the near future?

References:

Brooks, R. (2017). The Seven Deadly Sins of AI Predictions. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/ [Accessed 22 Oct. 2017].

Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. [online] Available at: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/ [Accessed 22 Oct. 2017].


4 thoughts on “Are we over exaggerating AI?”

  1. Recently, an artificial intelligence project at Facebook was shut down after researchers discovered that two chatbots communicating with each other had developed their own variation of the English language. The problem was that only the two bots could understand it. I think this example shows that we have to be aware of the risks involved when using artificial intelligence.

    http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

  2. Hi Floris,

    Interesting post about AI. I agree with you that AI is somewhat exaggerated. As you stated, we shouldn’t assume that Moore’s Law applies to AI; it will still take a lot of time before we can effectively use AI on a large scale in daily life. This is also because, as you noted, hardware costs are still very high and require a big investment before everything is set up for AI. If advancements keep being made, though, we need to look at all the opportunities AI gives us. I don’t think AI will take over all our jobs, as many jobs still rely on human judgement and insight, and people like to interact with an actual human instead of a machine. I do believe that AI has the potential to make processes in daily life much more efficient, so it should be regarded as a helping hand in making the world more efficient. Next to this, I think AI still has a long way to go on the regulatory side. I don’t think people nowadays understand what AI can do, or how to formulate laws and rules around it.

  3. Yes, I agree. AI is progressing rapidly; according to recent news, Google achieved a milestone in which AlphaGo Zero beat AlphaGo. Also, AI has always been a concern because of its potential danger to humanity and civilization. That’s why Elon Musk has called for the regulation of AI development.
    As we discussed in class, the development of AI and ML is still at an early phase, and the applicability of AI is quite narrow. ML can only be trained to do specific tasks, and its knowledge does not generalize. This means that today’s AI can outperform humans in certain areas, but a self-aware AI that can reason, learn, feel, and express and understand emotion is still in our imagination.
    Facing the future benefits and uncertainties AI would bring, I think what we can do now is improve the chance of reaping the benefits and avoiding the risks.

  4. Very interesting article Floris! I also agree with Brooks. He brings forward a lot of sound arguments that we should not be wary of AI. I can especially relate to the suitcase argument because often in the media AI is indeed described as if AI systems can absorb knowledge like humans do.

    I do, however, understand the sentiment of people who find AI overwhelming. The thing that impresses me most about AI is that it is integrated into so many systems around us without us noticing or realizing it. It is, for example, integrated into Facebook so that Facebook can suggest which friends should be tagged in the picture you uploaded. Apps like Uber are also based on an AI system that matches you with other passengers to minimize the detour. Another device we use daily is our phone: I have a phone that can be unlocked by fingerprint scanning, but Apple is already working on an ML system with which you can unlock your phone through Face ID.

    If some people find Face ID recognition quite futuristic already, I can imagine that Heart ID may sound even scarier to them. Scientists have created a new system that can recognize people by the shape of their hearts. Some speculate that this could replace the fingerprint scan currently used to unlock phones. (Gerards 2017)

    The researchers use a radar system that measures the shape and size of the heart. This is very hard to fool, because every person has a unique heart that does not change over time, with the exception of certain diseases. One could argue that this will be more reliable than face recognition, because facial features may change, for example as a result of aging or plastic surgery. (Potter 2017)

    The system, called Cardiac Scan, can scan hearts at airports from up to 30 meters away and takes only eight seconds! (Potter 2017) This means that someone could scan your heart from a distance to retrieve your identity without you even knowing it.

    I was wondering whether you would feel comfortable with scientists or a future phone for that matter scanning your body to identify the shape of your heart?

    Gerards (2017). Biometrisch inlogsysteem herkent mensen aan vorm van hun hart | NU – Het laatste nieuws het eerst op NU.nl. [online] Nu.nl. Available at: https://www.nu.nl/gadgets/4938032/biometrisch-inlogsysteem-herkent-mensen-vorm-van-hart.html [Accessed 23 Oct. 2017].

    Potter (2017). Goodbye, login. Hello, heart scan. – University at Buffalo. [online] Buffalo.edu. Available at: http://www.buffalo.edu/news/releases/2017/09/034.html [Accessed 23 Oct. 2017].
