People nowadays love to discuss the endless possibilities offered by artificial intelligence (AI), along with the moral and ethical dilemmas it presents. Once seen simply as an interesting premise for science fiction films (e.g. Blade Runner, Ex Machina), recent developments in AI have led to the belief that superintelligence may not be as far away as we thought. In an article by the Future of Life Institute (a research and outreach organization working to ‘mitigate existential risks facing humanity’), it is stated that most AI researchers at the 2015 AI Safety conference in Puerto Rico were convinced that general AI – the kind that can learn to outperform humans at every cognitive task – could be achieved before 2060 (Future of Life Institute, 2017). The conflicting views and persistent uncertainty suggest that no one really knows what AI will become.
A recent article published in the MIT Technology Review offers an interesting take on the subject. The popular belief is that AI is developing so quickly that ‘robots will take half of today’s jobs in 10 or 20 years’ (Brooks, 2017). The author, Rodney Brooks, considers such claims ludicrous and sees the hysteria surrounding artificial intelligence as grossly exaggerated. His article, The Seven Deadly Sins of AI Predictions, instead outlines how these predictions and discussions could negatively influence our future. He raises the following points:
- Overestimating and underestimating
Brooks introduces Amara’s Law, which states, “We tend to overestimate the effect of a technology in the short run, and underestimate the effect in the long run” (Brooks, 2017). Grand promises of breakthroughs that fail to arrive on schedule feed the hysteria over AI. In the long run, however, we often claim that general AI is centuries away, which may be an underestimation: progress toward it continues regardless of the failure of short-term goals. Brooks uses the following points to develop this worry further.
- Imagining magic
The author argues that there is a problem with the technology we imagine: if it is too far removed from what we understand today, we cannot judge its limitations, and people come to see future technology as ‘magical’. Of course, as Brooks explains, nothing in the universe is without limits. Certain developments may be very far away, but that does not mean we will never achieve them. By imagining AI as something omniscient and powerful beyond comprehension, we simply add to the exaggerated claims about its potential.
- Performance versus competence
Today’s AI systems are still very narrow. When a system performs well at one task, we tend to assume it has the broad, contextual competence a human with that skill would have – but it does not.
- Suitcase words
Suitcase words are words that carry a variety of meanings. When we describe an AI system as having a ‘learning’ capability, the word can signify very different things: as Brooks notes, learning to write code is significantly different from learning to navigate a city, and learning the tune of a song is different from learning to eat with chopsticks.
Such suitcase words lead people to believe that AI systems can absorb knowledge as humans do. This warps our understanding of the current state of AI and creates (for now) unrealistic expectations.
- Exponentials
Moore’s Law observes that computing power grows exponentially ‘on a clockwork-like schedule’ (Brooks, 2017), with the number of transistors on a chip doubling roughly every two years. We have come to expect the same from AI systems: because of the success of deep learning (which itself took 30 years to mature), people assume AI performance will keep improving exponentially. However, deep learning’s success was an isolated event, so there is no evidence that we should expect such a trend.
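To see why a clockwork doubling schedule is such a strong assumption, here is a toy calculation of what it implies (a minimal sketch; the two-year period and 30-year horizon are illustrative choices, not figures from Brooks’s article):

```python
# Toy illustration of Moore's-Law-style exponential growth:
# capability doubles once every `period` years.

def doublings(years: int, period: int = 2) -> int:
    """Number of complete doublings over a span of years."""
    return years // period

def growth_factor(years: int, period: int = 2) -> int:
    """Total multiplicative growth after `years` years of doubling."""
    return 2 ** doublings(years, period)

# Over 30 years with a 2-year doubling period, that is 15 doublings,
# i.e. a growth factor of 2**15 = 32768 -- the kind of compounding
# people (mistakenly, Brooks argues) project onto AI progress.
print(doublings(30))      # 15
print(growth_factor(30))  # 32768
```

The point of the arithmetic is that extrapolating one success (deep learning) into an exponential curve assumes thousands-fold compounding that nothing in the evidence guarantees.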
- Hollywood scenarios
People love to imagine AI systems terrorizing humankind as in sci-fi movies. Superintelligence, however, will not suddenly come to attack. Machine development is an iterative process that will slowly evolve over time.
- Speed of deployment
The marginal cost of deploying a new set of code is next to zero, which is why software developments are so rapid. This is not, however, applicable to hardware, which requires significant capital investments. For this reason, Brooks states that the hardware aspect of AI will take far longer than we expect to be embedded in daily life.
Rodney Brooks raises interesting arguments against the popular idea that we should be wary or afraid of AI developments. His article gives us reason to be skeptical of the many statistics about disappearing jobs and the automation of daily processes. Personally, I lean towards siding with Brooks: I am confident that AI will become an integral part of our lives, but I doubt it will happen at the speed or to the extent that many people expect.
So, what do you think? Do you agree with Rodney Brooks that the hysteria surrounding AI is severely exaggerated, or do you believe AI systems will evolve to the point of trying to kill us in the near future?
References:
Brooks, R. (2017). The Seven Deadly Sins of AI Predictions. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/ [Accessed 22 Oct. 2017].
Future of Life Institute. (2017). Benefits & Risks of Artificial Intelligence. [online] Available at: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/ [Accessed 22 Oct. 2017].