Kill or be killed? Artificial Intelligence in military

19 September 2018

The Defense Advanced Research Projects Agency (DARPA) announced in September 2018 a 2 billion dollar investment in Artificial Intelligence, called the “AI Next” campaign (Nu.nl, 2018). DARPA is an agency of the United States Department of Defense, responsible for the development of emerging technologies for military use (Wikipedia, n.d.). The purpose of this investment is to investigate how a machine could learn human skills, such as communication and problem-solving abilities. Nowadays, Artificial Intelligence is used for speech recognition, autonomously operating cars, and intelligent routing in content delivery networks, but also for military simulations (Wikipedia, n.d.). The latter, however, will now be expanded with machines that enhance human warfighting capabilities.
Although AI technology has the potential to bring positive effects to society, the application of AI in the army could also trigger further development of lethal autonomous weapon systems. Moreover, AI could be the enabler of a war in which no human command is needed. In other words, greater levels of intelligence could change the way we fight wars forever. The movie ‘Eye in the Sky’ is one example of how the use of technology, in this case armed drones used as weapons of war, could raise a range of moral dilemmas with regard to remote warfare (Baker, 2016). While the technology of a drone is not comparable to artificial intelligence, it makes you think about the possible consequences of the increasing use of advanced technology in the army.
The chosen article is interesting not only because of the application of AI for military purposes, but also because it concerns a serious risk to humanity. In the future, the power of government bodies will become dependent on the state of the technology used in their militaries. Elon Musk has openly spoken about his concern that military development of AI could trigger a World War III. In order to prevent an arms race, 26 countries of the United Nations have explicitly endorsed the call for a ban on lethal autonomous weapon systems. Several leaders and researchers in the field of technology have also signed this pledge (Nu.nl, 2018).
Despite the existence of this agreement, one could argue that the discussion involves a double effect for governments with regard to military decisions. On the one hand, you do not want to be part of a development that could potentially contribute to tremendous conflicts. On the other hand, if you do not invest in AI and other advanced technology, you risk losing power relative to other parties.

What are your thoughts regarding the use of AI in the military?

Do you agree with Elon Musk that AI could trigger a World War III?

References:
Baker, D.-P. (2016, March 31). Eye in the Sky and the moral dilemmas of modern warfare. Retrieved from The Conversation: https://theconversation.com/eye-in-the-sky-and-the-moral-dilemmas-of-modern-warfare-56989
Nu.nl. (2018, September 07). Defensie VS investeert 2 miljard dollar in kunstmatige intelligentie [US Department of Defense invests 2 billion dollars in artificial intelligence]. Retrieved from Nu.nl: https://www.nu.nl/internet/5452514/defensie-vs-investeert-2-miljard-dollar-in-kunstmatige-intelligentie.html
Nu.nl. (2018, July 18). Techleiders beloven geen wapens met kunstmatige intelligentie te ontwikkelen [Tech leaders pledge not to develop weapons with artificial intelligence]. Retrieved from Nu.nl: https://www.nu.nl/internet/5370248/techleiders-beloven-geen-wapens-met-kunstmatige-intelligentie-ontwikkelen.html
Wikipedia. (n.d.). Artificial Intelligence. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Artificial_intelligence
Wikipedia. (n.d.). DARPA. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/DARPA

2 thoughts on “Kill or be killed? Artificial Intelligence in military”

  1. Interesting topic! I think that the development of autonomous weapons systems is a very dangerous activity that should be monitored carefully. Also, I agree with Elon Musk and I think that we should consider the effects that a WW3 with AI weapons will have on the world order as we know it.

  2. Catchy title right there 🙂
    With AI in the military, the spectrum of concerns is not only about the technology itself; it is also a problem within the realm of IR. The paradox posed here is a classical debate in international relations, defense vs. offense, assuming politics is a zero-sum game.
    I do not think we should overreact to AI in the military; like other cutting-edge technologies before it, it has the capability of destruction on a large scale. But again, there are other aspects to consider, such as the due process of states, international communities, and treaties.
    Don’t forget that we developed nuclear weapons a long time ago. WW III has not taken place, yet.
