Can Ethics Catch Up To The Onward March Of Artificial Intelligence?

11 September 2018


Artificial intelligence is advancing at a breathtaking pace, but can the field of ethics keep up before it’s too late? Can we prevent disaster and enter a new golden age?

 

Dear god, I desperately hope so!

 

Isaac Asimov devised the Three Laws of Robotics in his 1942 short story “Runaround”: laws that govern robot behavior as a safety feature for mankind. Much of his subsequent work on robots was about testing the boundaries of the three laws to see where they would break down or create unanticipated behavior. His work implies that there is no set of rules that can account for every possible circumstance.1

1942 was a long time ago, when artificial intelligence was but a twinkle in the eyes of computer scientists, programmers, and nerds. While we still have a ways to go before we reach the singularity,2 the point where AI achieves greater general intelligence than humans, we can’t deny that AI research and its applications have come a long way. Systems like IBM Watson, which diagnosed a patient’s leukemia when doctors were stumped3 and beat human champions on the game show Jeopardy!,4 along with the arrival of self-driving cars, reinforce that fact.

However, Nick Bostrom argues in his paper “Ethical Issues in Advanced Artificial Intelligence” that artificial intelligence has the capability to bring about human extinction. He claims that a general superintelligence would be capable of independent initiative as an autonomous agent, and that it would be up to the designers of that superintelligence to code in ethical and moral motivations to prevent unintended consequences.5 Sadly, the sheer complexity and variety of human beliefs and values make it very difficult to make an AI’s motivations human-friendly.6

Unless we can come up with a near-perfect ethical theory before AIs reach the singularity, an AI’s decisions could allow for any number of harmful scenarios that technically adhere to the given ethical framework but disregard common sense.
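To make that failure mode concrete, here’s a deliberately silly toy sketch in Python (mine, not drawn from Bostrom or anything else cited here). The agent below obeys every rule we thought to write down and still does something no human would sign off on; every action name and utility number is made up for illustration.

```python
# Toy sketch (hypothetical): an agent that maximizes a stated objective while
# checking only an explicit, incomplete list of forbidden actions.

FORBIDDEN = {"harm_human", "lie_to_human"}  # the "ethical framework" we managed to write down

def allowed(action: str) -> bool:
    """An action passes if it isn't on the explicit forbidden list."""
    return action not in FORBIDDEN

def choose_action(candidates, utility):
    """Pick the highest-utility action that technically passes the rules."""
    permitted = [a for a in candidates if allowed(a)]
    return max(permitted, key=utility) if permitted else None

# The goal is "maximize output". Nothing on our list forbids disabling the
# factory's safety interlocks, so the rule check happily approves it.
actions = ["run_normal_shift", "disable_safety_interlocks", "harm_human"]
utility = {"run_normal_shift": 10, "disable_safety_interlocks": 50, "harm_human": 100}

print(choose_action(actions, utility.get))  # -> disable_safety_interlocks
```

The point isn’t the code; it’s the gap. The rule list encodes what we remembered to forbid, not what we actually care about.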

Many of the large tech companies have teamed up to address the issue, working with academia on research and open discussion, but it is still uncertain whether they’ll get there before somebody lets the genie out of the bottle. I remain hopeful, but just in case:

 

I, for one, welcome our new robot overlords.


1 Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 0-553-38256-X.

2 Markoff, John (26 July 2009). “Scientists Worry Machines May Outsmart Man”. The New York Times.

3 Ng, Alfred (7 August 2016). “IBM’s Watson gives proper diagnosis after doctors were stumped”. NY Daily News. Archived from the original on 22 September 2017.

4 Markoff, John (16 February 2011). “On ‘Jeopardy!’ Watson Win Is All but Trivial”. The New York Times. Archived from the original on 22 September 2017.

5 Bostrom, Nick. 2003. “Ethical Issues in Advanced Artificial Intelligence”. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.

6 Muehlhauser, Luke, and Louie Helm. 2012. “Intelligence Explosion and Machine Ethics”. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
