Can Morality Be Programmed Into AI Systems?

18 October 2019


For many years, experts have been warning about the unanticipated effects of general artificial intelligence (AI). Elon Musk, for example, believes that AI may pose a fundamental risk to the existence of human civilization, and Ray Kurzweil predicts that by 2029 AI will be able to outsmart humans. [1]

Such scenarios have prompted calls to equip AI systems with a sense of ethics and morality. While general AI is still far off, morality in AI is already a widely discussed topic today, for example in the debate about the trolley problem in autonomous cars. [2] [3]

So, where would we need to start in order to give machines a sense of ethics? According to Polonski, there are three ways to begin designing more ethical machines [1]:

  1. Explicitly defining ethical behavior: AI researchers and ethicists should formulate ethical values as quantifiable parameters and come up with explicit answers and decision rules for ethical dilemmas (a toy sketch of this idea follows the list).
  2. Crowdsourcing human morality: Engineers should collect data on human moral judgments through large-scale experiments such as MIT's Moral Machine (http://moralmachine.mit.edu/) [4] and use this data to train AI systems. Obtaining such data, however, can be challenging because ethical norms cannot always be standardized.
  3. Making AI systems more transparent: Full algorithmic transparency is not feasible, but there should be more transparency about how engineers quantified ethical values before programming them, and about the outcomes the AI produces as a result of these choices. Guidelines set by policymakers could help here.
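To make the first approach a bit more tangible, here is a minimal, purely illustrative Python sketch of "ethical values as quantifiable parameters" combined with an explicit decision rule. Everything in it — the factor names, the weights, and the scoring scheme — is an invented assumption for illustration, not a real moral framework from Polonski or anyone else; it also logs its quantified values, in the spirit of the transparency point above.

```python
# Hypothetical sketch: ethical values as quantifiable parameters plus an
# explicit decision rule. All names, weights, and the scoring scheme are
# invented for illustration and are NOT a real or endorsed moral framework.

from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    """One possible action an autonomous system could take."""
    name: str
    harm_risk: float       # assumed estimated probability of causing harm (0..1)
    rule_violations: int   # assumed count of explicit rules (e.g. traffic laws) broken


# "Ethical values as parameters": how strongly each factor counts against an
# outcome. In practice these numbers would have to come from ethicists or
# crowdsourced data -- which is exactly the hard part discussed in the post.
HARM_WEIGHT = 10.0
VIOLATION_WEIGHT = 1.0


def ethical_score(outcome: Outcome) -> float:
    """Lower is better: a simple weighted penalty as a stand-in decision rule."""
    return HARM_WEIGHT * outcome.harm_risk + VIOLATION_WEIGHT * outcome.rule_violations


def choose(outcomes: List[Outcome]) -> Outcome:
    # Transparency (approach 3): log the quantified values behind the choice.
    for o in outcomes:
        print(f"{o.name}: harm_risk={o.harm_risk}, "
              f"violations={o.rule_violations}, score={ethical_score(o):.2f}")
    return min(outcomes, key=ethical_score)


if __name__ == "__main__":
    # Toy trolley-style dilemma: swerve (break a rule, lower harm risk)
    # versus stay on course (no rule broken, higher harm risk).
    options = [
        Outcome("swerve", harm_risk=0.1, rule_violations=1),
        Outcome("stay_on_course", harm_risk=0.6, rule_violations=0),
    ]
    print("chosen:", choose(options).name)
```

Even this toy example exposes the core difficulty: someone has to pick the weights, and that choice is itself a moral judgment.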

However, in my opinion, it is very hard to build ethical guidelines into AI systems. Since we humans usually rely on gut feeling, I am not sure we would even be capable of expressing morality and ethics in measurable metrics. And do we really know what morality is? Isn't it subjective? What is considered morally right here in Western Europe might not be considered morally right in other countries. I therefore remain curious whether morality and ethics will ever be explicitly programmed into AI systems. What do you think? Is it even necessary to program morality into AI systems?

References

[1]: Polonski, V. (2017). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Retrieved from https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3

[2]: Hornigold, T. (2018). Building a Moral Machine: Who Decides the Ethics of Self-Driving Cars? Retrieved from https://singularityhub.com/2018/10/31/can-we-program-ethics-into-self-driving-cars/

[3]: Nalini, B. (2019). The Hitchhiker’s Guide to AI Ethics. Retrieved from https://towardsdatascience.com/ethics-of-ai-a-comprehensive-primer-1bfd039124b0

[4]: Hao, K. (2018). Should a self-driving car kill the baby or the grandma? Depends on where you're from. Retrieved from https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/

