Artificial Intelligence and the future of the human condition

16 September 2021


The rise of artificial intelligence (hereafter AI) has been observed with both excitement and anxiety. There appears to be a strong divide between hardcore proponents of AI and those who are seriously worried about developing forms of AI capable of eradicating humanity. This blog post takes the latter stance and aims to shed light on the inherent dangers of AI and on our misguided intuitions about it.

Let us start by outlining a major concern regarding AI: one day we will build intelligent machines that are smarter than us humans (Bostrom, 2017). A number of objections are typically raised against this statement. Some say that such machines are unlikely ever to be built, yet this claim is patently, and dangerously, false. Only three assumptions are needed to arrive at the conclusion that superhuman forms of AI will exist in the future (Harris, 2016).

  1. Intelligence occurs as a result of information-processing in a physical system (e.g. in a computer);
  2. The improvements we make to our machines will continue, for example because the incentive to create better machines is extraordinary (consider how much more value can be extracted from data sets with current computers compared to computers ten years ago; the difference is enormous);
  3. Humanity has not yet reached the peak of intelligence (e.g. there are problems that are in desperate need of being solved, such as climate change or diseases).

Unless one can find a problem with any of these statements, the inevitable conclusion is that there will come a day when our intelligent machines outsmart us. Importantly, no serious scientist has been identified who disagrees with one or more of these assumptions (Harris, 2016). Another common response is a dismissive “So what?”. Such objections attest to an alarming degree of naivety and a failure to seriously contemplate the possible outcomes.

A variety of problems have been anticipated in this scenario (for a dense yet comprehensive treatment of these problems, consider picking up Nick Bostrom’s book Superintelligence). For example, there is the well-known alignment problem: the goals, values, and motivations of a superintelligent machine may not be aligned with those of us earthlings (Bostrom, 2017). Consider giving the AI a task like “Solve the climate change problem”. How can we know that the AI will not decide to destroy humanity, given that our species heavily contributes to this very problem?

Common sense cannot be assumed; a sense of right and wrong cannot be assumed; alignment between the AI’s strategy and the best and highest interests of humanity cannot be blindly expected (Harris, 2016). How will we program human values into the AI? And which values, exactly? Can we trust that benign motivations, goals, and values will be installed into the AI when a foreign power such as Russia or Afghanistan programs them?

These are deep questions, and the answers are neither straightforward nor exhaustive. It is time for us to wake up and open our eyes both to the avoidable dangers and to the potentialities of such God-like machines.

References

Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

Harris, S. (2016). Can we build AI without losing control over it? [Video]. YouTube. Available at: https://www.youtube.com/watch?v=8nt3edWLgIg [Accessed 16 Sep. 2021].


1 thought on “Artificial Intelligence and the future of the human condition”

  1. Very interesting topic. It really appeals to me, and I actually did my thesis last year on AI and its impact on decision making. Like you said, on the one hand I find it very exciting what this technology can do and what we can do with it, but on the other hand it also scares me a little to think it may outsmart us at some point.

    I think that AI has helped us do our work more efficiently, since fewer people are now needed thanks to bots and virtual assistants. The research I conducted also yielded important insights into how AI has been able to take over heavy workloads for us (such as machinery that operates automatically) and how it can assist us in making faster decisions.

    Obviously, AI has a lot more to offer in the future and will evolve in even more bizarre ways than we can now imagine. I agree that humanity needs to draw a line and say when enough is enough. However, we humans tend never to be completely satisfied, so we will always look for improvements and for ways to make things easier and better for ourselves. Because of this, I think that even when some of us are aware of AI becoming a threat, we might react too late or even neglect the risk, blinded by the successes it has brought. I also feel we are intertwined in a world where AI is becoming the norm.

    So, not to be pessimistic or anything, but I certainly hope we do not wait until it is too late.
