A (super)human in the loop

10 September 2019


Iron Man and Vasili Arkhipov. Only one of them is world-famous. Also, only one of them is a brave man with nerves of steel who saved the world from, arguably, the most devastating war in human history. Vasili Arkhipov was the only Soviet officer, out of three, who refused to launch a nuclear torpedo at the U.S. Navy (for a short backstory, see the end of this post). Many people believe that this single decision prevented World War III in 1962. What makes it even more frightening is that the launch would have been based on wrong assumptions. In short, it would have been a catastrophic mistake.

Making mistakes is part of being human. Throughout history, we have relied on trial and error as our primary development mechanism. Without it, we might never have reached our current level of technology. But as our technologies become more powerful and autonomous, so does the need for them to be error-free. So far, technology has caused so few accidents that its benefits have outweighed them. However, as we continue to develop more powerful and autonomous technologies, we are approaching the point where a single accident can outweigh the benefits.

The rapid progress in the AI field makes thinking about the possible consequences even more important. Computer systems are known to crash and have bugs. AI systems, however, have to be robust for us to integrate them further into our world. Although making mistakes is part of our human DNA, making brave choices is also within all of us. What if the Soviet submarine in 1962 had been autonomous, with no human in the loop? What if an independent AI system acts on a wrong assumption and crashes the stock market? As our generation is laying the foundation for the answers to these questions, it is essential to have conversations about these issues.

I believe that AI will become a crucial part of our future lives, and I hope that we as humans will always stay in the loop, able to make brave choices such as Vasili Arkhipov's.

Background story:
During the Cuban Missile Crisis in 1962, a Soviet submarine entered international waters just outside the U.S. quarantine area. Because the Americans were unable to make contact with the submarine, they decided to use small depth charges to force it to surface. However, the U.S. Navy was not aware of two crucial details. Firstly, the submarine was experiencing technical issues that had driven the onboard temperature past 45 °C. Secondly, the crew was authorized to launch the nuclear torpedo they carried without permission from headquarters in Moscow. As the crew did not know whether World War III had already begun, the majority of them assumed the depth charges were a declaration of war. Launching the nuclear torpedo required the consent of all three onboard officers. The only officer to refuse was Vasili Arkhipov.


Sources:

Ortega, P. (2019). Building safe artificial intelligence: specification, robustness, and assurance. [online] Medium. Available at: https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1 [Accessed 10 Sep. 2019].

Tegmark, M. (2018). Life 3.0. Amsterdam: Maven Publishing, pp. 290–323.

YouTube. (2019). Secrets of the Dead: The Man Who Saved the World. [online] Available at: https://www.youtube.com/watch?v=4VPY2SgyG5w [Accessed 10 Sep. 2019].


1 thought on “A (super)human in the loop”

  1. Super interesting article. Ethical questions concerning AI are profound. I think in these cases people generally assume the worst from the start. Some might even go as far as to hypothesise that in order to keep up with AI in the future we need to become AI, because it will want to wipe us out. Sadly, governments are not really keeping up with the work on legislation to set some sort of frame for developing AI. And this will likely continue until it's too late. But even if there is some law prohibiting the development and use of AI for certain use cases, I think it's unrealistic to expect that nobody will do it. It's too easy to just do it; it's not like a nuclear bomb, for which you need special materials. You could basically just program it on your laptop.
    Maybe connecting us with an AI is the only choice after all? If you can't beat 'em, join 'em?
