Artificial Intelligence has become a big part of our lives, and rightfully so. It is being deployed in health care to help doctors diagnose more efficiently and reduce errors; it helps people make music and write books; and every time you look at your phone, you are enjoying the benefits of AI. This is only a short list of the ways it improves, and in some cases even saves, lives. However, as with most emerging technologies, there are also potential risks associated with the advance of AI.
While we haven’t reached the point of superintelligent machines yet, there is no denying that, given the speed of current developments in AI, they will come in the future. Because the political, societal, legal, financial, and regulatory issues involved are so complex and wide-reaching, they should be examined now, so that we are prepared to safely navigate them when the time comes. But these issues are problems for the future; even today there are already risks associated with AI.
One risk that is already present is autonomous weapons: a machine programmed to kill. Is that really what we want? What if an adversary is able to feed disinformation to a military AI program? The consequences could be dire. Vladimir Putin said: “Artificial Intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes leader in this sphere, becomes ruler of the world”. Other current risks include social manipulation; consider the accusations against Cambridge Analytica of trying to sway the 2016 U.S. presidential election using data from 50 million Facebook profiles. Even though the accusations were never proven, the mere possibility is a scary one. Another major risk is misalignment: it is very hard to align our goals with those of a machine, and when they aren’t aligned, the consequences can be severe. Even a simple task such as “bring me to work”, without the rules of the road having been specified, can turn into a disaster.
As mentioned at the beginning of this article, AI brings a lot of positive things to our lives, and in my opinion the benefits currently outweigh the risks. However, I do think the focus is too much on “can we, can we, can we” and not enough attention is paid to “should we, should we, should we”.
Very interesting topic, Koert! I particularly agree with your last sentence. The pace of progress in AI is super fast, and I think it’s important that a debate is started on how we can mitigate AI’s negative (possibly destructive) potential while allowing it to develop in a positive manner. Personally, I think the biggest risk arises when an AI is in control of, e.g., autonomous weapons or public transport and encounters a scenario that’s outside its training data. This could potentially put actual people in danger. Do you think it’s doable to train an AI for every scenario it could possibly face? Let me know your opinion!