Technological singularity: Why we should be mindful of mankind’s last invention

16 September 2019


It is commonly believed that human technological progress has been accelerating at an ever-increasing rate and that, as a result, there will inevitably come a point where we reach an intelligence explosion that fully evades human control mechanisms. The truth, however, is that technological progress has recently been slowing down. Despite considerable strides in many scientific fields, most notably biochemistry and nanobiology (CRISPR, guided tissue regeneration, nanobots), machine learning (dueling neural networks) and space engineering (reusable rockets), the rate of technological progress is actually decreasing if judged by the occurrence of genuinely transformative landmarks such as the steam engine or the transistor, which have largely been absent from the technological landscape in recent years; our progress has become more incremental than revolutionary. The major exception is machine learning: it is frequently cited as the next revolutionary general-purpose technology, which may make it the most remarkable innovation of the century. This blog post addresses the question of whether we should be mindful of rapid progress in the machine learning domain, and why it is so difficult to conceive of any way to put bounds on the expansion of artificial intelligence.

It is important to understand that, no matter how quickly we progress in our research and development of artificial intelligence, it is highly unlikely, virtually impossible, that we will cease to progress at all. There is no need for exponential growth to eventually reach the singularity: any growth at all suffices, and, at least hypothetically, the eventual step to superintelligent AI then seems inevitable. Given the long historical record of continuous technological progress, the only scenario that appears able to keep humanity at a total standstill indefinitely is a natural catastrophe that renders us unable to make use of our resources, or eradicates us altogether. Barring such a disaster, sooner or later we will have optimised our machines to the point where they no longer require human input and can take charge of their own improvement, asking their own questions and answering them. This of course holds true for any technology, if we just extend the time horizon far enough into the future. There is reason to assume, however, that superintelligent AI might arrive sooner than many people think.

The ancient Chinese board game “Go” is, like chess, a two-player strategy game, but it allows for vastly more possible moves and strategies because its 19×19 board offers 361 playing points. With three possible states per point (empty, black or white), there are 3^361 ≈ 10^172 possible board configurations, an upper bound on the number of legal positions, and the number of distinct possible game sequences is commonly estimated at around 10^800. With roughly 10^80 atoms in the known universe, that amounts to about 10^720 game sequences for every atom in existence.
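As a quick back-of-the-envelope check of these figures, the short Python snippet below recomputes the orders of magnitude; the 10^800 game-sequence estimate and the 10^80 atom count are the commonly cited round figures from the sources, not exact values.

```python
# Back-of-the-envelope check of the Go numbers quoted above (orders of magnitude only).
from math import log10

board_states = 3 ** 361                       # every intersection empty, black, or white
print(f"board configurations ~ 10^{log10(board_states):.0f}")        # ~10^172

game_sequences_exp = 800                      # commonly cited estimate: ~10^800 sequences
atoms_in_universe_exp = 80                    # commonly cited estimate: ~10^80 atoms
print(f"sequences per atom ~ 10^{game_sequences_exp - atoms_in_universe_exp}")  # 10^720
```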

DeepMind Technologies, a subsidiary of Alphabet Inc., developed AlphaGo, which in 2015 became the first program ever to beat a professional Go player without handicap, and in 2016 defeated world champion Lee Sedol 4–1 in a five-game match. At the time this was a significant success, given the remarkable complexity of the game. Merely two years later, the same company published game records for its new AI, AlphaGo Zero, which beat the version of AlphaGo that had defeated Lee Sedol 100–0 without ever having played against a human: it learned entirely by playing against itself, and within 40 days of self-play it had surpassed every previous version. AlphaGo Zero was one of the first AIs to develop its own game strategies, patterns never before seen from a human player. This makes it one of the first AIs on the planet to have achieved non-human knowledge within a specific domain, effectively becoming its own entity in that domain. If we spin this idea further, it does not seem unlikely that we will soon build machines whose knowledge is increasingly difficult for us to comprehend or interpret. This only goes to show that, while we are still far from creating intelligence capable of relating to emotions, understanding social cues, or possessing anything comparable to consciousness, we may underestimate how far we have already come.
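To make the idea of learning purely from self-play a little more concrete, here is a minimal, purely illustrative sketch in Python: a tabular agent that learns tic-tac-toe position values by playing against itself. It is not DeepMind’s method (AlphaGo Zero combines deep neural networks with Monte Carlo tree search); every name and parameter below is an assumption chosen for brevity.

```python
# Minimal self-play sketch (illustrative only, NOT AlphaGo Zero's algorithm):
# one tabular agent plays both sides of tic-tac-toe and learns position values.
import random
from collections import defaultdict

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

def play_one_game(values, epsilon=0.1, alpha=0.2):
    """Both players share the same value table: pure self-play."""
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None:
        moves = [i for i, s in enumerate(board) if s == "."]
        if random.random() < epsilon:          # explore occasionally
            move = random.choice(moves)
        else:                                   # otherwise pick the best-looking move
            def score(m):
                nxt = board.copy(); nxt[m] = player
                v = values["".join(nxt)]
                return v if player == "X" else -v
            move = max(moves, key=score)
        board[move] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    reward = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    # Backup: nudge every visited state toward the final outcome.
    for state in reversed(history):
        values[state] += alpha * (reward - values[state])
        reward = values[state]

values = defaultdict(float)
for _ in range(20000):
    play_one_game(values)
print("learned value estimates for", len(values), "positions")
```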

Another radical step forward in machine learning are dueling neural networks, a form of unsupervised deep learning in which, as the name suggests, two networks train each other. Picture a generator and a judge continually feeding data to one another, improving both the generation and the evaluation side of the system. The best-known example is the generative adversarial network (GAN), where one network (the generator) creates images that resemble familiar visual material, such as celebrity faces, and a second network (the discriminator) has to decide whether each image is real or fake. The point of this adversarial set-up is to learn faster and more deeply, producing fakes so close to reality that they can be used to build simulations which previously had to be hand-crafted by developers. The concept of communicating neural networks can also be compared to the popular “wisdom of crowds”: by pooling their judgments on cognitive tasks, humans have shown the ability to average out the idiosyncratic noise of each individual estimate and produce answers that are far more accurate, and often better suited to a specific problem, than any individual could manage alone. The classic illustration is a large group of people answering the same guessing question (for example, the capacity of a building or vehicle): the average of all answers often lands remarkably close to the true value. If humans can become so much more capable by combining their cognitive efforts, what will neural networks achieve under similar circumstances, communicating in ever greater numbers? No matter the pace of progress, AI is already becoming more capable than many would have imagined just a few years ago, and there is much more potential to be realized.
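For readers who want to see the “duel” in code, below is a minimal GAN sketch in PyTorch that learns to imitate a one-dimensional Gaussian instead of celebrity faces. The network sizes, learning rates and step counts are illustrative assumptions, not taken from the cited paper.

```python
# Minimal GAN sketch on toy 1-D data: a generator and a discriminator train each other.
import torch
import torch.nn as nn

def real_dist(n):
    return torch.randn(n, 1) * 1.5 + 4.0       # "real" data: samples from N(4, 1.5^2)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real, noise = real_dist(64), torch.randn(64, 8)
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())  # drifts toward ~4
```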

Now, why does all this mean that we might face a threat from artificial intelligence? For one, humans have shown a historical tendency to underestimate the negative externalities of their technological innovations. Secondly, and more importantly, it is extraordinarily difficult to conceive of a scenario in which we create an entity that learns several orders of magnitude faster than a human brain without losing control over it. The scale of the gap becomes clear once we quantify it: a signal in a biological neuron propagates at no more than about 100 meters per second even along the fastest nerve fibres, whereas signals in computers travel at close to the speed of light. Electronic circuits can therefore operate roughly a million times faster than biochemical ones, implying that a machine capable of improving itself without human direction could complete about 20,000 years’ worth of human intellectual work within a single week. While this may sound far-fetched, it is simple arithmetic resting only on the assumption that we will eventually create superintelligent entities, which, as argued above, seems almost inevitable.
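The arithmetic behind the “20,000 years in a week” figure is easy to verify; the snippet below simply assumes the million-fold speed advantage mentioned above.

```python
# Bostrom-style thought experiment: if an electronic mind worked a million times faster
# than a biological one, how many years of human-level work fit into one machine-week?
speedup = 1_000_000            # assumed electronic-vs-biochemical speed advantage
weeks_per_year = 52.18         # average number of weeks in a year
print(f"{speedup / weeks_per_year:,.0f} years of work per machine-week")  # ~19,165 years
```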

Now, what is often imagined as the worst-case scenario is that superintelligent entities will be spontaneously malevolent toward us, as if they were evil by their very nature. That is a caricature. What we should really worry about is that our machines will be so much more competent than us that even the slightest divergence between their goals and ours could lead to serious conflict. Consider, too, how humans take their intelligence alone to mark them as superior to other creatures. It is not the sheer difference in physical size that makes us so indifferent to the wellbeing of houseflies; it is our conception of them as beings less capable of perception and emotion. We translate our knowledge of their limited intellect into the assumption that they do not experience and value life the way we do, and therefore that we need not be overly bothered by their existence. We systematically subordinate every creature that cannot match our level of deliberation, and nothing has ever felt more natural. It is not that we are inherently evil, but that we lack empathy for creatures that have diverged too far from our intellectual standards to seem relatable. This disregard, which appears to come naturally with a divergence in intelligence, is the implied threat of an intelligence explosion to humankind.

The question becomes: What can we do to engineer a situation in which we can safely progress toward superintelligent AI without risking a loss of control? This is the logical conundrum of the technological singularity. We cannot possibly devise an adequate failsafe for a technology so cognitively powerful that, within a few days, it may already have learned so much that its ways of deliberating and decision-making entirely escape our comprehension. A truly superintelligent AI could turn into a quasi-god in such a short time that the world might change in unfathomable ways before word of its deployment has even reached every continent. The speed at which this quasi-god could improve itself and reshape its environment (potentially through the rapidly expanding internet of things) defies description and any simple attempt at quantification.

This brings about yet another difficulty: even if humanity is lucky enough to find a way to coexist peacefully with a superintelligent entity, it must also do so in a manner that does not upset the established economic order in a hugely detrimental way. If, plausibly, our new quasi-god takes charge of any and all labour, whether intellectual or physical, where will our place be in that new ecosystem? What will happen to employment, value creation, value exchange and, most of all, self-actualization in a world that no longer requires us to do intellectual work to get ahead? Would we ultimately face a choice between destroying our new, all-powerful tool and submitting to its powers entirely? It seems that before we can make any long-term use of such advanced AI, we will first have to understand a great deal more about the human condition, which turns the challenge of achieving superintelligence into the challenge of finding our place in the world before we ever deploy it. We might hope to find answers in the quasi-god that will be the first superintelligent AI, but perhaps we should first ask ourselves whether we can live with the unpredictably broad spectrum of answers it might give us about the world and about ourselves.

The conclusion of this blog post is the one Elon Musk, Sam Harris and Nick Bostrom all agree on: there is a troubling imbalance between how soon we might arrive at highly competent AI (think within the next 50 to 100 years) and how quickly we could lose all control over it shortly after its genesis. Although this may still sound like fantasy, scientists should start putting their heads together here and now to ensure we are at least moderately prepared for the range of possible outcomes of an intelligence explosion that we are no longer in charge of.

 

Sources:

Weinberger, M. (2019). Silicon Valley is stalling out as the pace of innovation slows down — and it could be a good thing for humanity. [online] Business Insider Nederland. Available at: https://www.businessinsider.nl/facebook-apple-microsoft-google-slowing-down-2018-6?international=true&r=US [Accessed 28 Sep. 2019].

Senseis.xmp.net. (2019). Number of Possible Go Games at Sensei’s Library. [online] Available at: https://senseis.xmp.net/?NumberOfPossibleGoGames [Accessed 28 Sep. 2019].

Deepmind. (2019). AlphaGo: The story so far. [online] Available at: https://deepmind.com/research/case-studies/alphago-the-story-so-far [Accessed 28 Sep. 2019].

Deepmind. (2019). AlphaGo Zero: Starting from scratch. [online] Available at: https://deepmind.com/blog/alphago-zero-learning-scratch/ [Accessed 28 Sep. 2019].

Lu, P., Morris, M., Brazell, S., Comiskey, C. and Xiao, Y. (2018). Using generative adversarial networks to improve deep-learning fault interpretation networks. The Leading Edge, 37(8), pp.578-583.

Investopedia. (2019). Wisdom of Crowds Definition. [online] Available at: https://www.investopedia.com/terms/w/wisdom-crowds.asp [Accessed 28 Sep. 2019].

YouTube. (2019). What happens when our computers get smarter than we are? | Nick Bostrom. [online] Available at: https://www.youtube.com/watch?v=MnT1xgZgkpk&t=756s [Accessed 28 Sep. 2019].

YouTube. (2019). Can we build AI without losing control over it? | Sam Harris. [online] Available at: https://www.youtube.com/watch?v=8nt3edWLgIg [Accessed 28 Sep. 2019].
