An Army That Never Sleeps

13 September 2018


In this digital age, humans hand over more and more manual tasks. We program computers to execute certain duties automatically because they offer greater efficiency and accuracy. But how far should we allow artificial intelligence to take over human tasks?

A year and a half ago, a bricklaying robot called SAM, developed by a New York construction company, was reported to lay 300 to 400 bricks per hour, whereas a stone mason lays 60 to 75 bricks per hour (Javelosa & Reedy, 2017). In the future our houses may actually be built by robots, since they perform the task roughly five times faster (about 350 versus 70 bricks per hour). This is a great illustration of how robots can provide goods and services faster than humans in this fast-paced world. However, several years ago people wanted to develop something more advanced than drones. They might have drawn some inspiration from movies like Terminator or games like Detroit: Become Human (which, on a side note, is a really good PlayStation game), and came up with killer robots. Killer robots are autonomous weapon systems that can target and attack an object or person without any human control (BBC, 2018). According to the Cambridge definition, 'autonomous' means having the power to make decisions independently (Cambridge Dictionary, n.d.). This implies that killer robots could actually kill a person without a command. Luckily, these robots do not exist yet. But their precursors do: semi-autonomous robots that can be switched to autonomous mode are already patrolling the border between North and South Korea (Wakefield, 2018).

This week the European Parliament held a meeting about these killer robots (Nu.nl, 2018). While there is a lot of opposition within the EU and many supporters of a ban, several countries, including South Korea, China, the United Kingdom, the United States and Russia, are interested in the idea of killer robots and want to explore the possibilities of autonomous weapons. Imagine that these killer robots actually existed: wouldn't it be complicated to determine whose responsibility it is when an incident occurs? And can we really say that such robots would be able to distinguish soldiers from civilians during a war? Lastly, shouldn't we simply ban these killer robots completely, because humans should never hand the decision about life and death over to an emotionless robot?

 

References:
BBC.com (12 September 2018). MEPs vote to ban killer robots on battlefield. Retrieved on 13-09-2018 from https://www.bbc.com/news/technology-45497617.

Cambridge Dictionary (n.d.). autonomous. Retrieved on 13-09-2018 from https://dictionary.cambridge.org/dictionary/english/autonomous.

Javelosa, J. & Reedy, C. (7 April 2017). Brick By Brick. Retrieved on 13-09-2018 from https://futurism.com/this-robot-works-500-faster-than-humans-and-it-puts-thousands-of-jobs-at-risk/.

Nu.nl (12 September 2018). Europees parlement wil killer robots verbieden [European Parliament wants to ban killer robots]. Retrieved on 13-09-2018 from https://www.nu.nl/tech/5459102/europees-parlement-wil-killer-robots-verbieden.html.

Wakefield, J. (5 April 2018). South Korean university boycotted over ‘killer robots’. Retrieved on 13-09-2018 from https://www.bbc.co.uk/news/technology-43653648.


2 thoughts on “An Army That Never Sleeps”

  1. One potential benefit of these killer robots that I see is that they could decrease the number of war casualties. The way these robots are programmed could mean that innocent civilians are spared, since robots can be programmed to distinguish civilians from fighters. However, this is also where I see a big problem and danger. What if such a robot were programmed to identify the enemy based on the uniforms they are wearing? Could such a robot distinguish an enemy who is still fighting from an enemy walking with their hands up to surrender? If not, people will again be killed unnecessarily. But if the robot is programmed not to shoot an enemy walking with their hands up, then the enemy could walk towards the robot with their hands up, with all the consequences that entails.
    In my opinion, the ethical question is not just whether robots should be allowed to make the decision about life or death. Humans should also be part of the discussion, for instance in the scenario where these robots are out in the field but humans still have to decide whether or not to pull the trigger. You could argue that watching a screen from a safe distance creates a sort of virtual war which, somewhat like a video game, could make the decision to pull the trigger easier than if one were standing in the field.

  2. Thank you, Kelly, for your interesting blog. I agree with you that machines cannot decide about life or death, because that kind of decision requires human feelings. In my opinion, "killer" robots can be used for defence, for example for border surveillance or protection against rocket attacks (Hijink, 2017). Nowadays (nuclear) weapons are more dangerous than ever, and I think robots can help or even outperform humans at protecting countries.

    https://www.nrc.nl/nieuws/2017/08/22/niet-elke-killer-robot-is-een-bedreiging-12615666-a1570642
