The one who knows more feels less: How rational thinking demystifies life and blinds us to the beauty of the unknown

12 October 2019


In 1979, a scientific paper by Daniel Kahneman and Amos Tversky introduced a previously untapped paradigm into economic theory: The idea of irrationality in decision-making, formalised as Prospect Theory. It was this theory that, in turn, gave rise to the field of Behavioural Economics, now broadly regarded as a cornerstone of any account of human decision-making.

In their research, the two authors identified a range of cognitive biases that seem to systematically lead humans to make suboptimal decisions, going against the apparent logic of a given situation. These biases were soon established as a threat to economic productivity and profitability, and, ever since, there has been a noticeable movement within management science toward rationalizing decisions based on reliable statistics and the most “objective” facts available.

This development is of course strongly reflected in our most recent technological advancements: With the unprecedented use of sophisticated computing, analytics, and artificial intelligence across a variety of industries and complex processes, it appears as though we are quickly and relentlessly moving toward a world dominated by rational decision-makers, favoured over the fallible human, who is simply too prone to the cognitive pitfalls of his own thinking to compete with our intelligent machines. After all, anything digital revolves around learning and optimization, in an eternal loop of creating ever more accurate representations of the reality of a given situation.

But, as with every great invention, there are downsides to our craze for optimization. There are, in fact, two tremendous curses that come with increased rationality and knowledge, and they are often misunderstood or ignored.

The first of these is the curse of demystification. I call it that because any increase in knowledge about a thing simultaneously erodes part of our hopes and imaginings about that very thing. Sometimes the new knowledge is in itself gratifying enough to let us ignore the pain of losing our intellectual innocence, if you will, but often we cannot quite shut out the feeling of having lost this innocence, this protected state in which we could hypothesise and dream about the implications of the reality we experience; in which the experience, in fact, carried greater weight than the objective facts of reality.

Humans derive great pleasure from learning, but at the same time they derive pleasure from not knowing. An essential notion within Romanticism is to feel and experience things that cannot quite be put into words, instead of meticulously trying to label and analyse them as we do now. I argue that the surge of rational thinking takes away our ability to enjoy the indescribable and intangible, and to thrive in an experience purely through what we sensually perceive rather than what we intellectually assert. This goes along with an inability to deal comfortably with situations we cannot control, as we have grown used to understanding things so deeply that we can shield ourselves from uncertainty. This, however, is an ill-formed ambition that can only take away from our ability to stomach and deal with the unpredictability that is so characteristic of human life.

The second curse is the curse of self-alienation. Rational thinking, as much as it is admired by many, tends to exclude judgment and intuition from the decisions we make, creating a state in which we are much less open to suggestions from what is commonly referred to as “gut feeling”. Through Prospect Theory, this gut feeling has been demonised as the enemy of optimal decisions. The terrible side effect of this demonisation is that we have started to systematically suppress intuitive thinking, believing that it can only misguide us and cannot possibly yield any benefits for our decisions.

In the absence of intuition, we forget what it feels like to listen to our inner voice. Because statistics and objective facts are purely external, and intuition is purely internal, we have come to believe that whatever decision factors originate from within ourselves cannot be trusted and are strictly inferior to the “facts”. Yet intuition is emotionally charged precisely because it represents the part of a decision-making process that respects and takes into account our current needs and desires. It should be considered a valuable self-preservation mechanism rather than a threat to good decisions, as our psychological wellbeing is directly tied to respecting our inner needs in the decisions we make.

In this way, suppressing intuition in our thinking builds a dangerous mistrust toward our inner world of feelings, as we are told that it cannot help us form an accurate idea of the outer world we associate with and want to find a place in. Simultaneously, a loss of intuition creates a disconnection between our conscious decision-making apparatus and our subconscious, but highly valuable, expression of emotions, needs and desires. The more we disconnect from our inner self and learn to rely on facts alone, the more we tend to disrespect our spiritual balance, and the more likely we are to develop health issues and self-contempt over how badly we treat ourselves.

It is worth thinking about how far to tip the scale toward rationality, as a certain degree of spontaneous, intuitive and impulsive thinking can be an anchor that keeps us from losing touch with ourselves and allows us to enjoy life even in the absence of certainty. Hence, containing demystification and self-alienation should be a priority in the process of technological innovation.


One algorithm to (mis-)teach us the world

3 October 2019


It is widely known that Google has become something of a monopoly in the domain of online search engines, far outreaching competitors such as Bing in both popularity and perceived utility. It is not uncommon for people to argue that only by using Google can they trust the answers they find through a search query. This, however, couldn’t be further from the truth.

But first, let’s talk about how Google came this far. Google is an example of a quasi-monopolistic platform provider that has made clever use of a variety of tactics to sustain its proprietary control in a multi-sided platform market (Rochet and Tirole, 2003). Most of all, however, Google has profited from engulfing other services: The acquisition of YouTube, the world’s biggest video platform, and of Android, currently running on 80 percent of the world’s smartphones, are only two of the countless examples of successful complementary platform envelopment (Croft, 2019; Eisenmann et al., 2011). Google started out by enabling users to navigate to such other platforms, but given its tight control over search results on the internet and its share of over 60 percent of the world’s advertising revenue, platforms like YouTube and Android stood no chance of fighting off absorption into what was practically the house they lived in, despite their already tremendous size (Croft, 2019).

Google’s success in enveloping other platforms crucially originates from the brand image it has constructed for itself. As a platform that centrally navigates consumers to other websites and web applications, Google has cleverly positioned itself in a place where users tend to forget that there even are alternatives (Croft, 2019). Having accessed the world wide web countless times, using Google specifically to get where we want to be on the internet, we find it difficult to separate the concept of the internet from the concept of Google: They have become one entity in the minds of consumers, as we seem to have forgotten that a GPS is not the same as the infrastructure it guides us through. Being perceived as the gateway to the internet (or even the internet itself) rather than just a search engine has helped Google tremendously in reaching the monopoly status it now possesses (Hagiu, 2009).

Now, to get to the core of what this blog post is about: why is there an issue with how tightly Google controls the flow of information on the internet? Well, there is an obvious one: We naturally don’t like the idea of depending on one conglomerate for our needs and desires, as we like to have options and thus be able to pressure companies into serving us better. But this is not even close to the long-term threat that comes from depending on Google for our every online search. The real problem is that there is a significant moral bias associated with every search query we enter, and that the information we obtain from Google, as long as we are not talking only about isolated facts, is anything but helpful in creating knowledge free of bias (TED Talks, 2013).

The way a search engine works is, simply put, through discrimination. Usually, a program called a spider pre-sorts endless numbers of webpages by following a web of hyperlinks from one page to the next. The spider collects specific information from many of those pages and feeds it into a search index, from which the search algorithm pulls the sites it determines will provide the most utility to the individual user. The specifics of this algorithm differ from one search engine to another, but it often works according to the frequency, order and placement of the search terms on the websites in the index (Code.org, 2017). The algorithm learns about the user’s preferences over time and simplifies the selection of suitable search results through relevance filters. These filters effectively narrow down the selection of possible results to such a small set that, over time, the results presented to the user homogenize: Once the algorithm has learned enough about us, it can be seen marginalizing the full volume of potential knowledge down to personalized, bite-sized bits that add almost no value to what should be our goal of extending our knowledge horizon (TED Talks, 2013).
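To make these mechanics concrete, here is a minimal, hypothetical sketch of the three steps described above: a spider that follows hyperlinks, a term-frequency search index, and a ranking step with a crude personalization filter. All page data, function names and scoring weights are invented for illustration; this is not Google’s actual algorithm, which relies on far more signals.

```python
from collections import Counter

# Toy "web": page -> (text, outgoing links). All data is made up for illustration.
PAGES = {
    "a.example": ("climate policy debate and energy news", ["b.example", "c.example"]),
    "b.example": ("energy prices and climate news today", ["c.example"]),
    "c.example": ("football scores and sports news", ["a.example"]),
}

def crawl(start, pages):
    """The 'spider': follow hyperlinks from page to page and build a term-frequency index."""
    index, frontier, seen = {}, [start], set()
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        text, links = pages[url]
        index[url] = Counter(text.split())   # feed the page's terms into the search index
        frontier.extend(links)
    return index

def search(query, index, user_profile=None):
    """Rank pages by query-term frequency, then boost pages matching past interests
    (a crude stand-in for the 'relevance filters' described in the text)."""
    terms = query.lower().split()
    scores = {}
    for url, tf in index.items():
        score = sum(tf[t] for t in terms)
        if user_profile:                      # personalization narrows the result set
            score += sum(tf[t] for t in user_profile)
        if score:
            scores[url] = score
    return sorted(scores, key=scores.get, reverse=True)

index = crawl("a.example", PAGES)
print(search("news", index))                               # neutral ranking
print(search("news", index, user_profile=["climate"]))     # personalized ranking
```

Running the same query with and without a profile shows how personalization reorders, and effectively narrows, what the user gets to see: the filter-bubble effect in miniature.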

Seeing how Google progressively homogenizes the information we consume, effectively steering the user into a tunnel of similar content and mainstreaming their information search, one could say that the search algorithms employed by the most powerful platform in the world blind us to the complexity of reality (Evans, 2003; Choudary, 2019). The more we engage in efforts to become smarter and better informed, the more our decisions lead us to get stuck in one internally homogenous knowledge domain that does not allow for great leaps into other paradigms or perspectives. Add to this the fact that Google’s search algorithm is probably the biggest monopoly in the modern economy, and there should be growing skepticism about how heavily we rely on it to develop ourselves and our society on an ongoing basis. Perhaps it is time to intensify the current deliberations in the US Congress about anti-monopolistic interventions in the search engine domain, so as to allow for a diversification of the information that guides our lives; next to the concerns about predatory competition, this is quite certainly the biggest long-term threat emanating from Google’s position of power.

 

Sources:

Rochet, J.C. and Tirole, J., 2003. Platform competition in two-sided markets. Journal of the European Economic Association, 1(4), pp. 990-1029.

Eisenmann, T., Parker, G. and Van Alstyne, M., 2011. Platform envelopment. Strategic Management Journal, 32(12), pp. 1270-1285.

Evans, D.S., 2003. The antitrust economics of multi-sided platform markets. Yale Journal on Regulation, 20, p. 325.

Hagiu, A., 2009. Multi-sided platforms: From microfoundations to design and expansion strategies. Harvard Business School Strategy Unit Working Paper No. 09-115.

Croft, S. (2019). How did Google get so big?. [online] Cbsnews.com. Available at: https://www.cbsnews.com/news/how-did-google-get-so-big-60-minutes/ [Accessed 3 Oct. 2019].

Choudary, S. (2019). The Dangers of Platform Monopolies. [online] INSEAD Knowledge. Available at: https://knowledge.insead.edu/blog/insead-blog/the-dangers-of-platform-monopolies-6031 [Accessed 3 Oct. 2019].

Code.org (2017). The Internet: How Search Works. [online] Available at: https://www.youtube.com/watch?v=LVV_93mBfSU [Accessed 3 Oct. 2019].

TED Talks (2013). What FACEBOOK And GOOGLE Are Hiding From The World – The Filter Bubble. [online] Available at: https://www.youtube.com/watch?v=p6vM4dhI9I8 [Accessed 3 Oct. 2019].


Technological singularity: Why we should be mindful of mankind’s last invention

16 September 2019


It is commonly believed that human technological progress has been happening at an ever-increasing rate and that, as a result, there will inevitably come a point where we reach an intelligence explosion that fully evades human control mechanisms. The truth, however, is that technological progress has recently been slowing down. Despite considerable strides in many scientific fields, most notably biochemistry and nanobiology (CRISPR, guided tissue regeneration, nanobots), machine learning (dueling neural networks) and space engineering (reusable rockets), the rate of technological progress is actually decreasing if judged by the occurrence of significant technological landmarks such as the steam engine or the transistor. Such landmarks have largely been missing from the technological landscape in recent years, making our progress more incremental than revolutionary. A major exception, however, is machine learning: This technology is commonly cited as the next revolutionary general-purpose technology, possibly making it the most remarkable innovation of the century. This blog post addresses the question of whether we should be mindful of rapid progress in the machine learning domain, and why it is so utterly difficult to conceive of a way to put bounds on the expansion of artificial intelligence.

It is important to understand that no matter how quickly we progress in our research and development of artificial intelligence, it is highly unlikely, virtually impossible, that we will cease to progress at all. There is no need for exponential growth to eventually reach singularity: all it takes is any growth at all, and, at least hypothetically, the eventual step to superintelligent AI seems inevitable. Given the history of continuous technological progress, the only scenario that appears able to keep humanity at a total standstill for an indefinite amount of time is a natural catastrophe rendering us unable to make use of our resources, or eradicating us altogether. The only imaginable alternative to this disaster is that, sooner or later, we will have optimised our machines to the point where they no longer require human input and are able to take charge of their own improvement, asking their own questions and answering them. Now, this of course holds true for any technology if we just expand the time horizon far enough into the future. However, there is reason to assume that the time for superintelligent AI might be coming sooner than many people think.

The Chinese board game “Go” looks much like chess visually, but comprises significantly more potential moves and strategies due to its larger board and number of pieces. Each of the board’s 361 points can be empty, black or white, which gives an upper bound of 3^361 (roughly 10^172) board configurations. Combining this with the average length of professional games, the number of different possible Go game sequences has been estimated at about 10^800, which amounts to roughly 10^720 possible games for every atom in the known universe (of which there are an estimated 10^80).
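These orders of magnitude are easy to verify with a back-of-the-envelope calculation; the short sketch below simply recomputes the figures quoted above (the 10^80 atom count is itself only a commonly cited rough estimate).

```python
# Back-of-the-envelope check of the Go numbers quoted above.
board_points = 361                     # 19 x 19 intersections
configurations = 3 ** board_points     # each point: empty, black, or white
digits = len(str(configurations)) - 1  # order of magnitude of 3^361

print(f"3^361 is roughly 10^{digits}")         # -> 10^172
print(f"games per atom: about 10^{800 - 80}")  # 10^800 game sequences / 10^80 atoms -> 10^720
```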

DeepMind Technologies, a company owned by Alphabet Inc., created an artificial intelligence program in 2015 called AlphaGo, which became the first of its kind ever to beat world champion Lee Sedol without handicap in a five-game match. At the time, this was a significant success, given the remarkable complexity of the game itself. Merely two years later, the same company published game records for its new AI, AlphaGo Zero, which beat AlphaGo 100-0 in a hundred successive games, without ever having played against a human. AlphaGo Zero was one of the first AIs ever built to create its own game strategies, never before seen from a human player, simply by playing against itself for 40 days. This makes AlphaGo Zero one of the first AIs on the planet to have achieved non-human knowledge in a specific knowledge domain, effectively becoming its own entity within that domain. If we spin this idea further, it does not seem unlikely that we will soon build machines which possess knowledge that is increasingly difficult for us to comprehend or interpret. This only goes to show that, while we are still far away from creating intelligence that is capable of relating to emotions, understanding social cues, or even possessing something that can be compared to consciousness, we might underestimate how far we have already come.

Another radical step forward in the development of machine learning is the dueling neural network: a form of unsupervised deep learning in which, as the name suggests, two networks train each other. It can be imagined as a generator and an interpreter (in GAN terminology, a discriminator) continually feeding data to each other, improving both the generation and the interpretation side of the network. This is most easily demonstrated through a generative adversarial network (GAN), where one neural network creates pictures that resemble common visual elements, like celebrity faces, and another neural network has to decide whether the faces are real or fake. The idea of this adversarial system is to achieve faster, deeper learning and to create fakes so close to reality that they can be used to build simulations which previously had to be handwritten by developers. The concept of communicating neural networks can also be compared to the popular “wisdom of crowds”: By collaborating on cognitive challenges and tasks, humans have demonstrated the ability to eliminate the idiosyncratic noise associated with each individual judgment and generate solutions that are far more precise and often better suited to a specific problem than any individual could produce alone. This is illustrated by the popular example of a large number of people being asked the same guessing question (e.g. about the capacity of a building or vehicle) and, by averaging all answers, arriving at a remarkably accurate numerical estimate. If humans can already become so much more intelligent by combining their cognitive efforts, what will neural networks be able to do under similar circumstances, communicating in ever greater numbers? It is easy to see that, no matter the pace of progress, AI is already becoming more intelligent than many would have imagined just a few years ago; and there is much more potential to be realized.
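For readers curious what such an adversarial pair looks like in code, below is a minimal illustrative sketch using PyTorch (any deep learning framework would do). Instead of celebrity faces, the generator learns to mimic a simple one-dimensional bell-curve distribution, which keeps the example tiny while preserving the generator-versus-discriminator dynamic described above; all layer sizes and training settings are arbitrary choices for illustration, not a recipe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    """'Real' data: draws from a normal distribution with mean 4.0 and std 1.5."""
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
batch = 64

for step in range(3000):
    # Train the discriminator: label real samples 1 and generated samples 0.
    real = real_samples(batch)
    fake = G(torch.randn(batch, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + loss_fn(D(fake), torch.zeros(batch, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator: try to make the discriminator label fakes as real.
    fake = G(torch.randn(batch, 8))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

with torch.no_grad():
    generated = G(torch.randn(1000, 8))
print(f"generated mean ~ {generated.mean().item():.2f}, std ~ {generated.std().item():.2f}")
# After training, the generated mean and std should drift toward the 'real' values 4.0 and 1.5.
```

The same two-player dynamic, scaled up to deep convolutional networks and image data, is what produces the photorealistic fake faces mentioned above.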

Now, why does all this mean that we might face a threat from artificial intelligence? Well, for one, humans have had a tendency throughout history to underestimate the negative externalities of their technological innovations. Secondly, and more importantly, it is severely difficult to conceive of a scenario in which we create an entity that can learn at a rate several orders of magnitude faster than human brains without losing control over it. This seems especially likely when we quantify the inevitable superiority of their learning processes compared to ours: A biological neuron propagates signals at no more than about 100 meters per second, even in highly excited states, while in computers signals travel at close to the speed of light. This makes electronic circuits function roughly a million times faster than biochemical ones, implying that a machine capable of improving itself without human directives could complete about 20,000 years’ worth of human intellectual work within a single week. While this may sound far-fetched, it is a simple mathematical consequence of the assumption that we will eventually create superintelligent entities, which, as previously stated, is almost inevitable from a logical standpoint.
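The 20,000-year figure is just a ratio argument; the tiny calculation below reproduces it under the (obviously simplified) assumption that a million-fold signal-speed advantage translates directly into a million-fold compression of thinking time.

```python
# Back-of-the-envelope version of the speed-up argument above.
speedup = 1_000_000        # electronic vs. biochemical circuits (rough figure from the text)
weeks_per_year = 52
human_years_per_machine_week = speedup / weeks_per_year
print(f"one machine-week ~ {human_years_per_machine_week:,.0f} human-years of work")  # ~19,000
```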

Now, what is often believed to be the worst-case scenario is that superintelligent entities will be spontaneously malevolent toward us, as if they were evil by their very nature. This is a caricature at best, for what we should really be afraid of is that our machines will be so much more competent than us that even the slightest divergence between their goals and ours could lead to serious conflict. Also, we have seen how humans believe that their intelligence alone characterizes them as superior to other creatures. It is not the sheer difference in physical size that makes us so indifferent to the wellbeing of houseflies, but our conception of them as beings less capable of perception and emotion. We translate our knowledge of their limited intellect into the assumption that they do not experience and value life the way we do, and hence that we should not be overly bothered by their existence. We systematically subordinate every creature that cannot match our level of deliberation, and nothing has ever felt more natural. It is not that we are inherently evil, but that we do not possess empathy for creatures that have diverged too far from our intellectual standards to be considered relatable. This disregard, which seems to come naturally with a divergence in intelligence, is the implied threat of the intelligence explosion toward humankind.

The question becomes: What can we do to engineer a situation in which we can safely progress toward superintelligent AI without risking a loss of control? This is the logical conundrum of technological singularity. We cannot possibly devise an adequate failsafe for a technology so cognitively powerful that within a few days it might have learned so much that its way of deliberating and making decisions entirely escapes our comprehension. A truly superintelligent AI would turn into a quasi god within such a short amount of time that the world might fundamentally change in unfathomable ways before word of the AI’s deployment has even reached all continents. The speed at which this quasi god could improve itself and change its environment (potentially through the quickly expanding internet of things) can hardly be put into words and escapes any method of simple quantification.

This brings about yet another difficulty: Even if humanity is lucky enough to find a way to peacefully coexist with a superintelligent entity, it also has to find a way to do so without upsetting the established economic order in a hugely detrimental way. If, plausibly, our new quasi god takes charge of any and all labour, whether intellectual or physical, where will our place be in that new ecosystem? What will happen to employment, value creation, value exchange and, most of all, self-actualization in a world that does not require us to conduct intellectual work to get ahead? Would we ultimately be faced with a decision between destroying our new, all-powerful tool or submitting to its powers entirely? It seems as though, before we can make any long-term use of such advanced AI, we will have to understand a lot more about the human condition first, which turns the challenge of achieving superintelligence into a challenge of finding our place in the world before we even deploy such superintelligence. We might be looking to find answers in the quasi god that will be the first superintelligent AI, but perhaps we should ask ourselves whether we can live with the unpredictably broad spectrum of possible answers it might give us about the world and ourselves.

The conclusion of this blog post is one that Elon Musk, Sam Harris and Nick Bostrom all agree on: there seems to be a significant imbalance between how fast we might progress to a highly competent AI (think within the next 50 to 100 years) and how quickly we could lose all control over it immediately after its genesis. Although this still sounds like a fantasy, scientists should start putting their heads together here and now to ensure we are at least moderately prepared for the range of possible outcomes of an intelligence explosion that we are no longer in charge of.

 

Sources:

Weinberger, M. (2019). Silicon Valley is stalling out as the pace of innovation slows down — and it could be a good thing for humanity. [online] Business Insider Nederland. Available at: https://www.businessinsider.nl/facebook-apple-microsoft-google-slowing-down-2018-6?international=true&r=US [Accessed 28 Sep. 2019].

Senseis.xmp.net. (2019). Number of Possible Go Games at Sensei’s Library. [online] Available at: https://senseis.xmp.net/?NumberOfPossibleGoGames [Accessed 28 Sep. 2019].

Deepmind. (2019). AlphaGo: The story so far. [online] Available at: https://deepmind.com/research/case-studies/alphago-the-story-so-far [Accessed 28 Sep. 2019].

Deepmind. (2019). AlphaGo Zero: Starting from scratch. [online] Available at: https://deepmind.com/blog/alphago-zero-learning-scratch/ [Accessed 28 Sep. 2019].

Lu, P., Morris, M., Brazell, S., Comiskey, C. and Xiao, Y. (2018). Using generative adversarial networks to improve deep-learning fault interpretation networks. The Leading Edge, 37(8), pp.578-583.

Investopedia. (2019). Wisdom of Crowds Definition. [online] Available at: https://www.investopedia.com/terms/w/wisdom-crowds.asp [Accessed 28 Sep. 2019].

YouTube. (2019). What happens when our computers get smarter than we are? | Nick Bostrom. [online] Available at: https://www.youtube.com/watch?v=MnT1xgZgkpk&t=756s [Accessed 28 Sep. 2019].

YouTube. (2019). Can we build AI without losing control over it? | Sam Harris. [online] Available at: https://www.youtube.com/watch?v=8nt3edWLgIg [Accessed 28 Sep. 2019].
