The Future of Warfare: Robot Killers?

28 September 2017


For some years now, pre-programmed defence systems have been widely used in warfare. These defence systems are programmed to work only in certain, predetermined conditions (NRC, 2017). As opposed to defence systems that merely react to threats, more recently developed autonomous weapons are able to identify and eliminate (human) targets. They can be sent into unknown territory and learn while on the battlefield, without any form of human intervention.

Critics claim that the use of autonomous weapons would escalate the scale of conflicts. The weapons could also be hacked, for example by terrorists, to make them behave undesirably. Moreover, if the software accidentally hits a civilian, it is unclear who should be held responsible for the mistake (NRC, 2017). Lastly, human rights organisations think that the barrier to starting a war will be lower for countries with autonomous weapons, since they do not have to fear human losses on their side. Proponents, however, claim that the technology could reduce battlefield casualties and discriminate more effectively between civilians and combatants. At the moment, no country seems willing to halt development of the technology, since being overtaken by other countries would expose it to risks.

In August 2017, leaders in the field of AI and robotics signed a letter urging the United Nations to outlaw the use of lethal autonomous weapons in warfare, claiming that these weapons would cause a ‘third revolution in warfare’, which would ‘equal the invention of gunpowder’ (The Verge, 2017). The petition was signed by 116 leaders of companies in the field of AI and robotics. However, producers of autonomous weapons are not considering ceasing production. Especially in the US, which currently has a competitive advantage in the market, a ban on these weapons does not seem realistic in the near future (NRC, 2017).

Let me know your thoughts on this matter below. Do you think that it is important to invest in autonomous weapons for defence purposes and keep up with other nations, or do you think we should cease production?

References
NRC. (2017, August 22). Niet elke killer robot is een bedreiging. Retrieved September 28, 2017, from https://www.nrc.nl/nieuws/2017/08/22/niet-elke-killer-robot-is-een-bedreiging-12615666-a1570642
NRC. (2017, August 21). Landen en wapenproducenten negeren roep om verzet ‘killer robots’. Retrieved September 28, 2017, from https://www.nrc.nl/nieuws/2017/08/21/roep-om-verzet-tegen-killer-robots-bestaat-al-langer-maar-landen-en-wapenproducenten-geven-geen-gehoor-12615880-a1570592?utm_source=NRC&utm_medium=related&utm_campaign=related2
Vincent, J. (2017, August 21). Elon Musk and AI leaders call for a ban on killer robots. The Verge. Retrieved September 28, 2017, from https://www.theverge.com/2017/8/21/16177828/killer-robots-ban-elon-musk-un-petition


12 thoughts on “The Future of Warfare: Robot Killers?”

  1. I didn’t know artificial intelligence advancements were this far along in the weapons industry, but it is not surprising.
    I think lethal autonomous weapons should be banned. For the Netherlands, I think investing in artificial intelligence defense systems could be good for the safety of the country. But I think there is a difference between using robots and artificial intelligence for defense purposes and sending robots into the field to actively attack the opponent during a war. As you said, this lowers the barrier to starting a war, since there is no need to fear casualties on your side. And a war with robots that have artificial intelligence can have disastrous consequences.

    I think that defending yourself with artificial intelligence systems that can detect cyber attacks and terrorism, for example, is worth investing in. But autonomous weapons that can attack, and can also be hacked, are a threat to mankind.

    1. Hi Urscha,

      I couldn’t agree with you more. However, I think there could be a real danger in letting people defend, for example, their country’s borders with AI systems. Allowing countries to do so blurs the boundaries, as it may be unclear when something should be categorised as ‘defending’ or ‘retaliating’. You could thereby give countries the opportunity to start a political game, in which they claim to be defending while actually provoking. This is also one of the reasons why people think it’s important to act now: once you start allowing countries to use killer robots, it will be difficult to turn back.

  2. Dear Anna,
    Very interesting article, especially considering the timing, given the political situation in both the US and South Korea. I saw that some of the articles are from August; do you know whether there is an update regarding the status of the ban on lethal autonomous weapons?
    I’d be very interested in hearing your response!
    Kind regards,
    Shila

    1. Dear Shila,

      I’m afraid there is no actual update. A ban on lethal autonomous weapons is still high on the agenda of, for example, the UN, but no concrete action has been taken. In the meantime, however, I did find out that the letter sent this August was actually a follow-up to the 2015 ‘anti killer robots’ UN letter. This second letter tries to push the UN to act on the issue and highlights two things: first, it demonstrates that the industry putting AI and robotics into our lives supports the concerns of the research community (which signed the first letter), and second, it aims to fuel the talks at the UN.

      Shortly before this letter was sent, the UN was due to meet about the issue; however, the meeting was postponed. The letter was hence also meant to inform the public of this postponement.

      I hope this answers your questions!

      Best,
      Anna

      Source: Ackerman, E. (2017). Industry Urges United Nations to Ban Lethal Autonomous Weapons in New Open Letter [blog post]. Retrieved from https://spectrum.ieee.org/automaton/robotics/military-robots/industry-urges-united-nations-to-ban-lethal-autonomous-weapons-in-new-open-letter

  3. Hi Anna,

    I saw your post and it reminded me of a TED Talk I watched a while ago. What you talk about is essentially the question of where we should limit robots’ power to make human decisions. The video highlights some very important points about general AI development on this matter, which in my opinion can be connected to robot killers. It talks about the power of AI to destroy civilisation as we know it. If you think about it, human development has been about improving year after year after year. Stopping these improvements is not within our nature. However, if we keep following this track, we will get to a point where machines are smarter than we are, at which point the machines will start to improve themselves (also called an intelligence explosion). Think about the catastrophic effect this could have…

    I think you’ll definitely find the video interesting!

    (video link: https://www.youtube.com/watch?v=8nt3edWLgIg)

    Kind regards,
    Sebastiaan

    1. Dear Sebastiaan,

      Great video! I actually recognize some of the things he mentions in the TED Talk: being super excited about certain developments while, if you think about them in human terms and in terms of their possible impact, there is actually quite little to be excited about.

      Thanks for sharing! 🙂

      Best,
      Anna

  4. Hi Anna,

    To be honest, I am a bit scared by these developments. With artificial intelligence and machine learning in particular, robots will constantly be optimizing their decisions. However, as pointed out in the paper by Brynjolfsson and McAfee, there are a few gigantic risks that have to be taken into account. The biggest risk, in my opinion, is the fact that the neural network systems of these killer robots will make decisions based on statistical truths rather than logical rules. They will thus optimize their tasks without assessing the rationale behind them. They do not have a conscience. Furthermore, the killer robots are black boxes: we do not know on what information the robots will base a decision. As a result, the robots could be biased (for instance racist) without us knowing it. The best example of humans not understanding what is going on inside robots’ brains is this example from Facebook (https://www.ad.nl/wetenschap/robots-alice-en-bob-beginnen-geheimtaal-experiment-stilgelegd~ac939eec/). Their robots started communicating without humans being able to understand them. What would we do if this also happened with killer robots? Because of this, I think we should refrain from using killer robots in the future.

    1. Dear Marvin,

      Very thoughtful comment, thanks! I hadn’t actually heard of this happening before. I can definitely imagine the consequences if something like this were to happen with autonomous weapons. Luckily, for now, communication between robots is only possible when initiated by humans, but imagine if, in a couple of years’ time, they could start communicating without humans knowing…

  5. Hi Anna,

    I agree with Marvin’s comment; these developments also scare me. As we learned in the first lecture, machines now also know more than they can tell. Apart from the risk of hacking, which you already mentioned, what if they develop hidden biases? As noted, it is difficult to correct errors in these AI-powered machines, and we also cannot verify all their actions as they become smarter and smarter.

    Since I do agree that this technology could lead to a third world war (https://www.theguardian.com/technology/2017/sep/04/elon-musk-ai-third-world-war-vladimir-putin), I am against using autonomous weapons and certainly hope such a war will never become reality.

    1. Hi Danielle,
      I completely agree. There’s actually an example of robots (in a very different context) and their hidden biases, for example not knowing about social norms and values: within 24 hours of the launch of a Microsoft chatbot on Twitter, it started making racist comments, as it was simply repeating and agreeing with the general sentiment of the humans it was communicating with.

      https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

  6. Hello Anna,

    I saw your post and thought you might find this website interesting. It tracks updates on the campaign against killer robots: https://www.stopkillerrobots.org/ . I think it would be useful for you to check this website regularly if you are considering extending your investigation, since it also includes an exact timeline.

    I wish you the best of luck!

    Regards,

    Max

    1. Hi Max,

      I’m sorry for the late reply (the website was offline), but thanks for the link you provided! I’ll have a look at the website.

      Kind regards,
      Anna
