Can Morality Be Programmed Into AI Systems?

18 October 2019


For many years, experts have been warning about the unanticipated effects of general artificial intelligence (AI). Elon Musk, for example, believes that AI may constitute a fundamental risk to the existence of human civilization, and Ray Kurzweil predicts that by 2029 AIs will be able to outsmart humans. [1]

Such scenarios have prompted calls to equip AI systems with a sense of ethics and morality. While general AI is still far away, morality in AI is already a widely discussed topic today (for example, the trolley problem for autonomous cars). [2] [3]

So, where would we need to start in order to give machines a sense of ethics? According to Polonski, there are three ways to start designing more ethical machines [1]:

  1. Explicitly defining ethical behavior: AI researchers and ethicists should start formulating ethical values as quantifiable parameters and come up with explicit answers and decision rules for ethical dilemmas (a toy sketch of such a decision rule follows below the list).
  2. Crowdsourcing human morality: Engineers should collect data on ethical measures by using ethical experiments (for example see http://moralmachine.mit.edu/) [4]. This data should then be used to train AI systems appropriately. Getting such data, however, might be challenging because ethical norms cannot always be standardized.
  3. Making AI systems more transparent: While we know that full algorithmic transparency is not feasible, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. Here, guidelines implemented by policymakers could help.
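
To make the first approach a bit more tangible, here is a purely hypothetical toy sketch in Python of what "ethical values as quantifiable parameters plus an explicit decision rule" could look like. The weights, options, and numbers are all invented for illustration and are not taken from Polonski [1]:

```python
# Purely hypothetical toy example of Polonski's first approach:
# "ethical values" encoded as quantifiable parameters plus an explicit
# decision rule. All weights, options, and numbers are invented.
from dataclasses import dataclass

# Hypothetical weights expressing how strongly each value counts.
ETHICAL_WEIGHTS = {
    "expected_harm": -1.0,     # harm is penalised
    "rule_violations": -0.5,   # e.g. breaking a traffic rule
    "lives_protected": 2.0,    # protecting people is rewarded
}

@dataclass
class Option:
    name: str
    expected_harm: float
    rule_violations: float
    lives_protected: float

def ethical_score(option: Option) -> float:
    """Combine the quantified values into a single score."""
    return sum(weight * getattr(option, value)
               for value, weight in ETHICAL_WEIGHTS.items())

def choose(options):
    """Explicit decision rule: pick the option with the highest score."""
    return max(options, key=ethical_score)

# Toy dilemma for an autonomous vehicle (values made up).
options = [
    Option("brake_hard", expected_harm=0.2, rule_violations=0.0, lives_protected=1.0),
    Option("swerve_onto_sidewalk", expected_harm=0.8, rule_violations=1.0, lives_protected=1.0),
]
print(choose(options).name)  # -> brake_hard
```

Of course, real dilemmas are nowhere near this clean; agreeing on such weights in the first place is exactly the hard part.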

However, in my opinion, it is very hard to implement ethical guidelines in AI systems. Since we humans usually rely on gut feeling, I am not sure whether we would even be capable of expressing morality and ethics as measurable metrics. Also, do we really know what morality is? Isn't it subjective? What is considered morally right here in Western Europe might not be considered morally right in other countries. Therefore, I remain curious whether morality and ethics will in the future be explicitly programmed into AI systems. What do you think? Is it even necessary to program morality into AI systems?


References

[1]: Polonski, V. (2017). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Retrieved from https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3

[2]: Hornigold, T. (2018). Building a Moral Machine: Who Decides the Ethics of Self-Driving Cars?. Retrieved from https://singularityhub.com/2018/10/31/can-we-program-ethics-into-self-driving-cars/

[3]: Nalini, B. (2019). The Hitchhiker’s Guide to AI Ethics. Retrieved from https://towardsdatascience.com/ethics-of-ai-a-comprehensive-primer-1bfd039124b0

[4]: Hao, K. (2018). Should a self-driving car kill the baby or the grandma? Depends on where you’re from. Retrieved from https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/


Cloud-Based Immortality

16 October 2019


Achieving immortality has long been a vision that seemed out of reach. However, some futurists believe that by 2045 humans will achieve digital immortality by uploading their minds to computers [1]. Today, the startup Nectome is working on exactly that. Nectome wants to preserve people’s minds by conserving the brain with a revolutionary technique and then uploading the information to the cloud [2][3]. However, this comes with a twist: the brain has to be fresh, which means you have to be euthanized first [2]. Nectome’s mind-uploading service is therefore aimed at people with terminal illnesses. Also, Nectome says it does not plan to attempt this feat in the foreseeable future [4].

Nectome has been part of the prestigious startup accelerator Y Combinator and has already raised $1 million in funding [2]. Moreover, it has found a way to test its market by inviting prospective customers to join a waiting list for $10,000, which is refunded if they change their mind. So far, 25 people have subscribed to the waiting list, including Sam Altman, the president of Y Combinator [2]. But Nectome is not the only organization working on mind uploading. In 2011, the 2045 Initiative was founded, an organization that wants to help humanity achieve immortality by 2045 by transferring a person’s personality into a new body [3].

Still, one big question arises: is mind uploading even possible? Some argue that it should be possible in theory: if we found a way to map the brain’s activity, scan the brain in detail, and run gigantic simulations, it should be possible to recreate a person’s mind in a computer [3]. Others, however, argue that it is not possible at all [5].

Uploading our minds will certainly not be possible in the near future, and it remains open whether it will be possible at all. However, it is still an interesting topic to debate. Assuming mind uploading worked and we could attain immortality by transferring our minds to another body, it would raise ethical and philosophical questions, such as:

  • Should we allow it? Do we want it?
  • What do we do with “homeless” minds?
  • Can we lose our right to have a body? Or can we sell or rent it?
  • Should we attempt to reprogram brains?

I am curious about your opinion. Could you imagine such a thing becoming a reality? What do you think its implications would be? What aspects would we need to consider?


References

[1]: Lewis, T. (2013). The Singularity Is Near: Mind Uploading by 2045?. Retrieved from: https://www.livescience.com/37499-immortality-by-2045-conference.html

[2]: Regalado, A. (2018). A startup is pitching a mind-uploading service that is “100 percent fatal”. Retrieved from: https://www.technologyreview.com/s/610456/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/

[3]: Van Hooijdonk, R. (2018). In a future of mind uploading, will you still be you, and who will own your mind?. Retrieved from: https://richardvanhooijdonk.com/blog/en/in-a-future-of-mind-uploading-will-you-still-be-you-and-who-will-own-your-mind/

[4]: Letzter, R. (2018). Brain-Uploading Company Has No Immediate Plans to Upload Brains. Retrieved from: https://www.livescience.com/62212-nectome-grant-mit-founder.html

[5]: Elderkin, B. (2018). Will We Ever Be Able to Upload a Mind to a New Body?. Retrieved from: https://gizmodo.com/will-we-ever-be-able-to-upload-a-mind-to-a-new-body-1822622161


Is Explainable AI (XAI) Needed?

14 October 2019


Imagine a self-driving car knocks down and kills a pedestrian. Who is to blame, and how can this be prevented in the future?

Such questions require artificial intelligence (AI) models to be interpretable. However, when an AI makes a decision, we are usually unable to understand how exactly it arrived at that decision and why it chose that particular option (Schmelzer, 2019). This problem is known as the black box of AI and is especially pronounced in the most popular algorithms used today (Schmelzer, 2019). Because of this, a new field called Explainable AI (XAI) has emerged (Ditto, 2019). XAI aims to solve the black box problem by enabling humans to understand how an AI system came to a specific decision (Schmelzer, 2019). Specifically, it wants to explain how the parties who design and use the system are affected, how data sources and results are used, and how inputs lead to outputs (Kizrak, 2019). However, there are different opinions on whether XAI is needed. So, what are the arguments for and against it?
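
Before turning to those arguments, here is a minimal sketch of what "explaining how inputs lead to outputs" can look like in practice, using one common post-hoc XAI technique: permutation importance. The choice of Python, scikit-learn, and the example dataset is mine, not something taken from the cited articles:

```python
# Minimal sketch of a post-hoc explanation for a "black box" model:
# shuffle each input feature on held-out data and measure how much the
# model's accuracy drops, which indicates how strongly the output
# depends on that input.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of hundreds of trees is hard to interpret directly.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

# Permutation importance breaks one input-output link at a time.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)

top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, drop in top_features:
    print(f"{name}: mean accuracy drop = {drop:.3f}")
```

An explanation like this says nothing about the model’s inner workings; it only summarises which inputs its decisions depend on, which may already help with the accountability and auditability arguments discussed next.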

One reason for the use of XAI is accountability. When it is known how and why the algorithm arrives at a specific decision, the algorithm can be held accountable for its actions and can be improved if it, for example, becomes too biased (van Rijmenam, 2019). A second reason is auditability, meaning that the algorithms can be tested and refined more accurately and future failures prevented (Ditto, 2019).

On the other hand, researchers and technology companies pay relatively little attention to XAI and focus on performance rather than interpretability (Kizrak, 2019). One argument against XAI is that modern models are too complex to explain. For example, the most popular high-performing AI models have around 100 million parameters (Paudyal, 2019). How could such a model possibly be explained? This means that in order to make AIs explainable, one needs to consider the trade-off between performance and explainability. Furthermore, it can be argued that most human thinking and decision-making happens unconsciously (Paudyal, 2019). This would mean that we humans cannot explain all our decisions either.
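
To make the performance-versus-interpretability trade-off more concrete, here is a small sketch (again in Python with scikit-learn, my choice of tooling rather than something from the cited articles) contrasting a directly interpretable model with an opaque ensemble trained on the same data:

```python
# Illustrative sketch of the performance-vs-interpretability trade-off.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Glass box": one readable coefficient per (scaled) input feature.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
glass_box.fit(X_train, y_train)

# "Black box": hundreds of trees, no single human-readable rule.
black_box = GradientBoostingClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

# On this small, clean dataset the scores may be close; on large,
# messy, high-dimensional problems the opaque model typically pulls
# ahead, which is where the trade-off bites.
print("interpretable accuracy:", glass_box.score(X_test, y_test))
print("black-box accuracy:    ", black_box.score(X_test, y_test))
print("first 5 readable coefficients:", glass_box[-1].coef_[0][:5])
```

The black box offers no per-feature coefficients to read off; any explanation has to be reconstructed after the fact, as in the permutation-importance sketch above.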

In my opinion, black box models cannot be avoided, because we want to tackle problems that are very complex, require huge amounts of data, and involve analysing millions of parameters. While I do see some problems with current AI systems (e.g. ethics, legal accountability), I would still rather trust an AI system that statistically performs nearly faultlessly than an interpretable AI system with severe performance issues.

What do you think about explainable AI? Is there a need for it? Is it possible to implement XAI without running into the performance vs. interpretability trade-off?


References

Ditto. (2019). The death of black box AI: Why we need to focus on Explainable AI instead. Retrieved from https://www.ditto.ai/blog/black-box-ai-vs-explainable-ai

Kizrak, M. A. (2019). What Is Explainable Artificial Intelligence and Is It Needed?. Retrieved from https://interestingengineering.com/what-is-explainable-artificial-intelligence-and-is-it-needed

Paudyal, P. (2019). Should AI explain itself? or should we design Explainable AI so that it doesn’t have to. Retrieved from https://towardsdatascience.com/should-ai-explain-itself-or-should-we-design-explainable-ai-so-that-it-doesnt-have-to-90e75bb6089e

Schmelzer, R. (2019). Understanding Explainable AI. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#1872ea0c7c9e

Van Rijmenam, M. (2019). Algorithms are Black Boxes, That is Why We Need Explainable AI. Retrieved from https://medium.com/@markvanrijmenam/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438


When AI decodes life’s biggest mysteries

12 October 2019


Go, a highly strategic board game that originated over 3,000 years ago in China, is known as one of the most challenging classical games for artificial intelligence (AI) (DeepMind, n.d.a). Still, in 2015 the company DeepMind released an algorithm called AlphaGo, which has since beaten professional Go players and world champions on a consistent basis (DeepMind, n.d.a). Today, AlphaGo is considered an AI breakthrough and the strongest Go player in the world (DeepMind, n.d.a; Meyer, 2017).

The company behind AlphaGo, DeepMind, is an AI research and development company that was acquired by Google in 2014 (DeepMind, n.d.b). While many might have heard of DeepMind and AlphaGo before, fewer people have heard of AlphaFold. AlphaFold is another DeepMind algorithm, presented to the public at the end of 2018 (DeepMind, n.d.c). Some say that AlphaFold is DeepMind’s biggest strike yet and could have a strong impact on our lives in the future (Zonnev, 2019). It is an algorithm that predicts a protein’s 3D structure based on its genetic sequence (DeepMind, n.d.c). Scientists have spent a lot of time understanding our DNA, but still struggle with a basic question of protein folding: how must a protein be folded so that it works the way it is supposed to (Zonnev, 2019)? If scientists understood the process of protein folding better, they could find out exactly what a protein does and how it might cause harm (Sample, 2018). This could help in designing new proteins to fight diseases or to take on other tasks, such as breaking down plastic in the environment (Sample, 2018).

DeepMind’s AlphaFold participated in last year’s CASP competition, a scientific contest that tests participants’ ability to model protein structures (Koonce, 2019). AlphaFold took first place, correctly predicting the folding of 25 out of 43 proteins, while the second-placed team managed only 3 (Wiggers, 2018). Nevertheless, some argue that AlphaFold is not yet as big a breakthrough as it may seem (see, for example, Al Quraishi, 2018; Service, 2018).

In conclusion, protein folding is a very complex topic, and I need to understand it better before I can fully grasp the implications of AlphaFold. Nonetheless, I personally find it very interesting when AI enters the space of biology and could help us solve some of life’s biggest mysteries, which could significantly improve our lives in the future.


References

Al Quraishi, M. (2018). AlphaFold @ CASP13: “What just happened?”. Retrieved from https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/

DeepMind. (n.d.a). AlphaGo. Retrieved from https://deepmind.com/research/case-studies/alphago-the-story-so-far

DeepMind. (n.d.b). About. Retrieved from https://deepmind.com/about

DeepMind. (n.d.c). AlphaFold: Using AI for scientific discovery. Retrieved from https://deepmind.com/blog/article/alphafold

Koonce, B. (2019). An Introduction to AlphaFold and Protein Modeling. Retrieved from https://medium.com/quark-works/an-introduction-to-alphafold-and-protein-modeling-b83edadcff2b

Meyer, D. (2017). Google’s New AlphaGo Breakthrough Could Take Algorithms Where No Humans Have Gone. Retrieved from https://fortune.com/2017/10/19/google-alphago-zero-deepmind-artificial-intelligence/

Sample, I. (2018). Google’s DeepMind predicts 3D shapes of proteins. Retrieved from https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins

Service, R. F. (2018). Google’s DeepMind aces protein folding. Retrieved from https://www.sciencemag.org/news/2018/12/google-s-deepmind-aces-protein-folding

Wiggers, K. (2018). Deepmind’s AlphaFold wins CASP13 protein-folding competition. Retrieved from https://venturebeat.com/2018/12/03/deepminds-alphafold-wins-casp13-protein-folding-competition/

Zonnev, C. (2019). How Google Is Decoding Nature’s Formula Of Life — Using AI — This is Their Biggest Strike Yet. Retrieved from https://towardsdatascience.com/https-medium-com-decoding-natures-formula-of-life-using-ai-this-is-google-deepmind-biggest-strike-yet-2da4a5992729
