“The development of full artificial intelligence could spell the end of the human race”
– Stephen Hawking
Five of the world’s biggest technology companies announced this week that they will work together to accelerate the development of artificial intelligence (AI) systems. Microsoft, Facebook, Google’s DeepMind, IBM, and Amazon will tackle issues surrounding safety, privacy, and collaboration between humans and AI. Moreover, Microsoft revealed yesterday that it has formed a 5,000-person engineering and research division focused on AI.
The idea of living in a world alongside artificial intelligence once seemed far removed from our lifetime, something belonging to science fiction. Today, however, AI has already become part of our daily lives.
For those unfamiliar with the term “artificial intelligence”, it is defined as the theory and development of computer systems able to perform tasks that normally require human intelligence. Basically, it’s smart software that enables machines to mimic human behavior.
Much like in the movies “Terminator”, “A.I. Artificial Intelligence”, and “I, Robot”, AI is usually portrayed as robots with human-like characteristics, but in fact it encompasses far more than robots. Apple’s Siri and self-driving cars are already examples of current artificial intelligence, though they are properly referred to as weak AI (or narrow AI): AI designed to perform only narrow tasks (e.g. internet searches or voice recognition). The long-term goal, however, is to create strong AI (or artificial general intelligence, AGI) that could outperform humans in almost every cognitive task.
This unsurprisingly raises many concerns, which is why these five companies hope to maximize AI’s potential and to ensure its benefits to the world by conducting research in the following areas:
- Ethics, fairness and inclusivity
- Transparency
- Privacy and interoperability (how AI works with people)
- Trustworthiness, reliability and robustness
Despite the obvious advantages that AGI would offer, from facilitating daily human tasks to overcoming human limitations (e.g. in space exploration), perhaps the biggest debate still revolves around its ethical and moral values. In sci-fi movies, AI systems usually end up trying to dominate humans. Although these portrayals are exaggerated, Tom Dietterich (president of the Association for the Advancement of Artificial Intelligence) explains that computers won’t simply take over the world someday, unless we design them to. Which begs the question: what if one day someone does design them to do so? Additionally, Dietterich stresses that we should never make AI systems fully autonomous (fully independent of human control) in order to avoid such problems.
Whether you are for or against AI, its rapid development is inevitable. However, will its benefits truly outweigh its consequences? Personally, I do not believe that we can live in a world where human intelligence is outperformed by superior intelligent systems without unforeseen risks. No matter how safely designed these machines are, they could always end up in the wrong hands. It may seem a minor problem if your laptop crashes or gets hacked, but it’s far more serious if an attacker takes over an AI system controlling your car, airplane, home, or trading systems. Or worse, imagine losing control of lethal autonomous weapons, which can select and engage targets without human intervention.
“[AI] is potentially more dangerous than nukes” – Elon Musk
Related Articles:
http://www.bbc.com/news/technology-37494863
http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
http://www.techinsider.io/autonomous-artificial-intelligence-is-the-real-threat-2015-9
http://www.toptechnews.com/article/index.php?story_id=00100015QK07
http://www.buzzle.com/articles/pros-and-cons-of-artificial-intelligence.html
http://content.wisestep.com/advantages-disadvantages-artificial-intelligence/
Wow! Very strong article.
I was not aware of the increase in investment in AI, nor of the distinction between weak AI and strong AI such as AGI!
Thank you for this interesting article.
In terms of the possible “negative” outcomes that could follow, I believe that the antagonistic AI depicted in I, Robot is still far from current research. Nonetheless, it should remain a priority to keep AI systems from becoming fully autonomous of human control.
I have been reading and watching a lot about this topic so I can recommend 3 amazing things to view:
1. The movie Ex Machina – a really nice one about artificial intelligence, robots, and the interaction between robots and humans.
2. The documentary The Choice is Ours (2016) – it can be found on YouTube and presents an ideal world where technology plays an important role in how humans will live in the future. Jacque Fresco, a sharp-minded futurist, came up with the Venus Project and tries to reshape humanity and our dependency on resources through technology.
3. Google DeepMind – https://www.youtube.com/watch?v=TnUYcTuZJpM shows us what AI is capable of and what machine learning can do.
Nice viewings! I can also recommend the movie Her (2013) for a different, more positive perspective on AI. In this movie, the AIs are not really interested in humans, or in violence for that matter. I find this an interesting and plausible take on how AIs might behave: as a developed mind, why would an AI be interested in ‘destroying’ humankind? It has no use for material goods like money or land.
In response to the post: I think that the benefits of AI will definitely outweigh its consequences. As I see it, it is simply the next step after the industrial and digital revolutions. Accenture estimates that, thanks to AI, labor productivity in the Netherlands could be 27% higher in 2035 than in a scenario without AI. By replacing traditional automation with intelligent automation and augmenting labor and capital with AI, humans can focus on creativity, imagination, and innovation.
Source and interesting read:
https://www.accenture.com/nl-en/insight-artificial-intelligence-future-growth