Artificial Intelligence and the future of the human condition

16 September 2021


The rise and development of artificial intelligence (hereafter AI) has been observed with both excitement and anxiety. There appears to be a strong divide between hardcore proponents of AI and those who are seriously worried that we will develop forms of AI with the capacity to eradicate humanity. This blog post takes the latter stance and aims to shed some light on the inherent dangers of, and our misguided intuitions towards, AI.

Let us start by outlining a major concern regarding AI: one day we will build intelligent machines that are smarter than us humans (Bostrom, 2017). A number of objections are typically raised against this statement. Some say that it is unlikely to happen, yet this claim is patently, and dangerously, false. There are only three assumptions one needs to make to arrive at the conclusion that there will, in the future, be superhuman forms of AI (Harris, 2016):

  1. Intelligence occurs as a result of information-processing in a physical system (e.g. in a computer);
  2. The improvements we make to our machines will continue, for example because the incentive to create better machines is extraordinary (e.g. consider how much more value can be extracted from data sets with current computers compared to computers ten years ago; the difference is enormous);
  3. Humanity has not yet reached the peak of intelligence (e.g. there are problems that are in desperate need of being solved, such as climate change or diseases).

Unless one can find a problem with any of these statements, the inevitable conclusion is that there will come a day when our intelligent machines outsmart us. Importantly, no serious scientist has been identified who disagrees with one or more of these assumptions (Harris, 2016). Another common reaction is a dismissive 'so what?'. Such objections attest to an alarming degree of naivety and a failure to seriously contemplate the possible outcomes.

A variety of problems have been anticipated in regard to this scenario (for a tough yet thorough read on these problems, consider picking up Nick Bostrom's book Superintelligence). For example, there is the well-known alignment problem, in which the goals, values, and motivations of the superintelligent machine are not aligned with those of us earthlings (Bostrom, 2017). Consider giving the AI a task like 'Solve the climate change problem'. How can we know that the AI will not decide to destroy humanity, given that our species heavily contributes to this problem?
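To make this worry concrete, here is a minimal, purely illustrative sketch (the actions, scores, and objective functions are all invented for this example, not taken from Bostrom or Harris): a naive optimizer told only to maximise emission reductions happily picks the most destructive option, because nothing in its objective says otherwise.

```python
# Toy illustration of the alignment problem: a naive objective ignores harm.
# All actions and numbers below are made up purely for illustration.

actions = {
    # action: (emission_reduction, harm_to_humans), both on a 0..1 scale
    "subsidize renewables": (0.4, 0.0),
    "capture carbon": (0.5, 0.0),
    "shut down all industry": (0.9, 0.7),
    "remove the main emitter (humanity)": (1.0, 1.0),
}

def naive_objective(outcome):
    reduction, _harm = outcome
    return reduction                    # only emissions count

def aligned_objective(outcome):
    reduction, harm = outcome
    return reduction - 10.0 * harm      # heavily penalise harm to humans

print(max(actions, key=lambda a: naive_objective(actions[a])))
# -> 'remove the main emitter (humanity)'
print(max(actions, key=lambda a: aligned_objective(actions[a])))
# -> 'capture carbon'
```

The hard part, of course, is that real human values cannot be captured by a single hand-written penalty term.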

Common sense cannot be assumed; a sense of right and wrong cannot be assumed; alignment between the AI's strategy and the best and highest interests of humanity cannot be blindly assumed (Harris, 2016). How will we program human values into the AI? And what kind of values, exactly? Can we trust that benign motivations, goals, and values will be installed into the AI if these values were programmed by a foreign power such as Russia or Afghanistan?

These are all deep questions, and the answers are neither straightforward nor exhaustive. It is time for us to wake up and open our eyes to both the avoidable dangers and the potentialities of such God-like machines.

References

Bostrom, N. (2017). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Harris, S. (2016). Can we build AI without losing control over it? [Video]. YouTube. Available at: https://www.youtube.com/watch?v=8nt3edWLgIg [Accessed 16 Sep. 2021].


Artificial Intelligence: How I Learned to Stop Worrying and Love Skynet

12 September 2017

What is the first thing we think of when we hear the phrase 'Artificial Intelligence' (AI)? Mechanical monsters bent on exterminating humanity, as the film industry teaches us with sci-fi movies such as The Terminator, with its genocidal AI Skynet?

During the lecture on the eleventh of September, 2017, the professor asked who had ever made use of AI. Siri was held up as an example of an AI, and that is correct, but the many, many hands that were not raised were probably incorrect; does anyone not use Google's search engine on a daily basis, for instance? Artificial Intelligence is everywhere, from the spam filters on our emails to the cars we drive daily.

AI, then, is not hardware, but software; a brain of software, using hardware as its medium, capable of connecting data through mathematical reasoning to reach new insights and conclusions.

So far, these are by and large background processes, largely invisible to the uncritical eye. This is called 'weak AI', or 'Artificial Narrow Intelligence' (ANI). Siri is an example of this, as are the various game (chess, Go) champions, the many Google products (its translator, its search engine, the spam filters on its email provider), the whole apparatus of applications and websites recommending products, videos, friends, or songs, self-driving cars, and so on. As should be clear, this kind of AI is not weak in that it can barely achieve anything (one would hardly call self-driving cars simple products, child's play to create), but it is narrow, in that it can only excel at a very narrowly defined task. Hence the term Artificial Narrow Intelligence, ANI.
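As a toy illustration of just how narrow such systems can be, here is a minimal keyword-based spam scorer (the keywords, weights, and threshold are invented for this sketch; real filters learn them from data): it does one tightly scoped job and nothing else.

```python
# A deliberately tiny 'narrow AI': a keyword-based spam scorer.
# The keyword weights and the threshold are made up for illustration only.

SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "urgent": 1.0}
THRESHOLD = 2.5

def spam_score(message: str) -> float:
    words = message.lower().split()
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in words)

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

print(is_spam("you are a winner claim your free prize"))   # True
print(is_spam("lecture notes for monday are online"))      # False
```

It excels at exactly one narrowly defined task and is useless outside it; the same holds, at far greater sophistication, for a chess engine or a self-driving car.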

ANI is practically everywhere these days. But if there is a narrow AI, then surely there is a more general AI as well? Indeed: Artificial General Intelligence, AGI, also known as strong AI. A human-like AI, as smart as the average human on all tasks (not just one narrowly defined one), capable of performing any intellectual task a human can. This AI does not exist yet; unless, of course, you feel that now would be an excellent time to expose yourself as the Terminator, dear reader?

There are two problems that we still have to tackle in order to create AGI. One concerns computational power. Here, according to the TOP500, a Chinese supercomputer, the Sunway TaihuLight, currently takes the lead. It overtook the Chinese Tianhe-2 in June of 2016 (TOP500 compiles its list every June and every November), and as of June 2017, it still claims the number one spot. It can perform 93 quadrillion floating point operations per second (93 petaflops), roughly three times as many as the Tianhe-2 (33.86 petaflops). Is it more than the human brain? A whole variety of scientists put the human brain's equivalent computing power anywhere from 10^11 to 10^13.5 FLOPS, but there are also scientists who rank the human brain an order of magnitude higher, or outright dismiss the comparison. For further reading, https://www.quora.com/How-many-FLOPS-is-the-human-brain (including the comments) might be a nice place to start, but Google is full of many wildly differing claims.

It hardly matters, for now, though. Even if a supercomputer exists that is better than the human brain, it would only be better in the number of floating point calculations it could perform, but at what cost? The human brain requires about 20 watts (the energy of a light bulb) and roughly 115 square centimetres, which more or less fits in your hand. Green500, which ranks TOP500's supercomputers based on their energy efficiency, gives the Sunway TaihuLight fourth place: it delivers about 6,051.30 million floating point operations per second (MFLOPS) per watt. Times 20 watts, that comes to roughly 121,026 MFLOPS, or about 121 gigaflops, on the same power budget the human brain runs on. That is quite a bit short of 93 quadrillion. Further, whereas the human brain fits in the palm of a hand, the Sunway TaihuLight occupies a machine room of roughly a thousand square metres. Not quite the hardware the Terminator ran around with.
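A few lines of arithmetic make the gap explicit (the 20 watt, 6,051.30 MFLOPS/W, and 93 petaflops figures are the ones quoted above; everything else is derived from them):

```python
# Back-of-the-envelope comparison: brain power budget vs. supercomputer efficiency.
# The input figures are the rough, popular-science numbers quoted in the text above.

BRAIN_POWER_W = 20.0                    # approximate power draw of the human brain
TAIHULIGHT_MFLOPS_PER_W = 6051.30       # Green500 efficiency figure cited above
TAIHULIGHT_PEAK_PFLOPS = 93.0           # TOP500 peak performance cited above

flops_on_brain_budget = TAIHULIGHT_MFLOPS_PER_W * 1e6 * BRAIN_POWER_W
peak_flops = TAIHULIGHT_PEAK_PFLOPS * 1e15

print(f"On 20 W the machine manages ~{flops_on_brain_budget:.2e} FLOPS")   # ~1.21e+11
print(f"Its full peak is            ~{peak_flops:.2e} FLOPS")              # ~9.30e+16
print(f"So it needs ~{peak_flops / flops_on_brain_budget:,.0f} times the brain's power budget to hit that peak")
```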

The second problem with reaching AGI, aside from computational power, is intelligence. There are roughly three methods currently being attempted. We could simply copy our brain. Literally, with a 3D printer, or slightly less literally, by setting up a network of neurons that would randomly fire and not achieve much at all in the beginning. But the moment it does achieve something, such as correctly guessing that a certain picture is a muffin and not a puppy, we can reinforce this path, making it likelier to be used in the future and therefore likelier to be correct in the future. With this approach, the 302-neuron nervous system of a roundworm (C. elegans, about one millimetre long) was emulated and put into a LEGO body (because we're all children at heart) in 2014. For comparison, the human brain consists of roughly 100 billion neurons; even so, as our progress increases exponentially, some have eyeballed this method to achieve success somewhere between 2030 and 2050.
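The 'reinforce the paths that worked' idea can be shown in a handful of lines. Below is a minimal sketch (the task, the learning rate, and the update rule are generic textbook choices, not anything from the sources above): a single artificial neuron starts with random connection strengths and is nudged toward the right answer every time it gets feedback, which is the same reinforce-what-works principle at a vastly smaller scale.

```python
# Minimal 'reinforce the connections that worked' sketch: a single artificial
# neuron learning a toy rule. All data and parameters are chosen for illustration.
import random

random.seed(0)

# Toy task: output 1 only when both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]   # random, useless at first
bias = random.uniform(-1, 1)
lr = 0.1                                              # how strongly feedback reinforces a path

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

for _ in range(200):                                  # repeat the feedback loop many times
    for x, target in data:
        error = target - predict(x)                   # +1, 0, or -1
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error                            # strengthen or weaken the path

print([predict(x) for x, _ in data])                  # -> [0, 0, 0, 1]
```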

A second method is to copy evolution instead of copying the brain. The large downside of this is that evolution had billions of years to play around with us. The upside is that we are not evolution; we are not driven by random chance, and we have the actual goal of creating intelligence. Evolution might well select against intelligence, because a more intelligent brain requires more energy that might be better spent on other abilities, such as warmth. Fortunately, unlike evolution, we can supply energy directly, which might be highly inefficient, but we can improve that over time.
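The 'copy evolution' idea maps naturally onto a genetic algorithm. Here is a minimal sketch (the target string, population size, and mutation rate are arbitrary choices for illustration): random candidates are scored, the fittest reproduce with small mutations, and the goal is reached after a modest number of generations, precisely because, unlike natural evolution, we get to define the fitness function ourselves.

```python
# Minimal genetic algorithm sketch: evolve random strings toward a chosen target.
# Target, population size, and mutation rate are arbitrary illustrative choices.
import random
import string

random.seed(1)
TARGET = "intelligence"
ALPHABET = string.ascii_lowercase
POP_SIZE, MUTATION_RATE = 100, 0.05

def fitness(candidate):
    # Number of characters already matching the goal we (not chance) have chosen.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]

generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    generation += 1
    # Selection: keep the best tenth, refill the population with their mutated offspring.
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 10]
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print(f"Reached '{TARGET}' after {generation} generations")
```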

The third method is to let our budding AI figure it out for us, by researching AI and changing its own code and architecture based on its findings. Unsupervised machine learning; but of course, we can shut it off before it quickly becomes more capable at everything the human brain can do and becomes a superintelligence, right?

Right?

You just know the answer is ‘no’ when someone poses the question.

Besides Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), there is a third variant: Artificial Superintelligence (ASI). Most agree that a self-improving AI is the likeliest route from ANI to AGI, and even if it is not, there is still no reason to assume that no self-improving AI will ever come into existence. If a self-improving AI reaches general intelligence, that is, if it becomes just as capable as humans are…

Then it is inherently more capable already. It is self-improving, after all, constantly applying upgrades and fixes to improve itself even more. It has microprocessors, some of which today run at 3.6 GHz, whereas our own neurons fire at a measly 200 Hz. It communicates at the speed of light, 299,792,458 metres per second, whereas our brain's signals travel at roughly 120 metres per second. Its physical size is scalable, allowing it to increase its memory (RAM and HDD; short-term and long-term memory), whereas our brain is confined by our skull; but we could not really expand our brain anyway, because our 120 metres per second would be too slow to keep up.
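The raw ratios implied by those figures are worth writing out (the numbers are the rough, popular-science figures quoted in the paragraph above):

```python
# Rough speed comparison using the figures quoted above.
cpu_clock_hz = 3.6e9                   # a 3.6 GHz microprocessor
neuron_rate_hz = 200.0                 # approximate firing rate of a neuron

machine_signal_speed = 299_792_458.0   # speed of light, m/s
brain_signal_speed = 120.0             # fast nerve conduction, m/s

print(f"Clock-rate ratio:   ~{cpu_clock_hz / neuron_rate_hz:,.0f}x")              # ~18,000,000x
print(f"Signal-speed ratio: ~{machine_signal_speed / brain_signal_speed:,.0f}x")  # ~2,500,000x
```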

There are many more factors to consider. Humans cannot work literally every second of their lives, as humans tire out and lose their focus. Humans are prone to making mistakes, doubly so when tired or unfocused; to err is human, after all. Humans cannot easily work in a variety of circumstances, such as radioactive places, deserts, or the arctic.

Then there is humanity's biggest advantage, the one that has enabled humanity to take the world by storm and reshape it in our image, with towering blocks of concrete and signs of human habitation everywhere. Humans developed language, and lived in communities (tribes), allowing not individual humans, but all of humanity, to learn. Fathers would remember the lessons of their own fathers and teach them to their peers, their sons, and their sons' peers, and so the collective knowledge of humanity increased gradually. Writing, and alphabets, were developed to better store this knowledge. It was stored in libraries, written by hand, until the printing press was invented; it is no wonder that the printing press spawned a wave of revolutions and ideas, now that every single citizen could read books and form their own opinions. Universities taught all that humanity knew, and with every generation, this collective knowledge grew exponentially. Now we have the internet and computers, allowing for unprecedented communication and storage, and if you compare the last 50 years to the 50 years before, and to the 50 years before that, and so on, it becomes quite apparent how rapidly our knowledge has grown recently (for more reading, search for Ray Kurzweil's 'Law of Accelerating Returns').

But we are humans, individuals still. We are not a worldwide network of interconnected computers. We have an advantage that allows us to utterly dominate the entirety of Earth, and AIs will have this advantage ten times over.

This raises the question: what is AGI supposed to be, anyway? It is a human-centric measurement, but humans are not special; the average IQ of a human is not a universal constant, it is merely another point on a graph of the average IQ of species. For an AI, it is just the next number to pass, after passing worms and monkeys and ravens and more. And it is a point exceedingly close to both a mentally challenged individual and Einstein, despite the vast difference we would perceive between the two.

There are three broad views on the transformation from AGI to ASI. The arguments for and against these views are more philosophical than what is presented in the paragraphs above, which is why I will include an article that expands upon each of the three views, should you desire further reading.

Proponents of the soft take-off view say that the concept of self-improvement already exists: in large companies. Intel has tens of thousands of people and millions of CPUs, all of which are used to become even better at developing CPUs; it is highly specialised in this task. So, too, with an AGI; this AI will participate in our economy (which also grows exponentially) because it can specialise in something to earn money with. Money which will be used to buy the necessary means for self-improvement. This specialisation (letting others develop things for you, which you then buy with the money earned from your own specialised skillset) has historically been a great advantage of humanity, and it will be more advantageous for this AI, too, than doing everything on its own. The AI would have to compete with almost all of humanity and all of humanity's resources to be more efficient in every single thing than any specialised human is, and an AGI, a human-level AI, is not so vastly more efficient than all of humanity together.

There is a scientific foundation for this, such as what is written in the book 'Artificial General Intelligence 2008: Proceedings of the First AGI Conference' (very briefly: approximately 1% of the citizens of the USA are scientists or engineers, approximately 1% of that number devotes itself to cognitive science, we can estimate cognitive science to improve its models at a rate comparable to Moore's Law, from this we can estimate humanity's ability at intelligence improvement, and so we can calculate that, for now, it makes sense for an AGI to work and interact with us to improve itself), but if you simply wish to read one of the many articles that deal with the subject: http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html

A second view is that of the semi-hard take-off scenario, comparable to what is often seen in movies: an AI connects to the internet and rapidly reads and learns everything present there. Perhaps you have heard of the 'paperclip maximiser' scenario ( https://nickbostrom.com/ethics/ai.html ): an AI is built with an explicit purpose (producing paperclips, in this case), and as it tries to do this as efficiently as possible, it transforms the entire world, and beyond, into a paperclip factory, including all humans. That makes sense; the AI's only goal was to produce paperclips, after all, and why would the AI perceive there to be any moral constraints? The semi-hard take-off scenario can perhaps best be seen as a hard take-off scenario with a temporary transition phase in front of it, but during this phase it may well be unstoppable already: http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/

The final view is that of the hard take-off, wherein the AI becomes superintelligent in mere minutes.
One of the many articles for your perusal: http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

There is a colossal danger with both the semi-hard and the hard take-off scenarios, for in all likelihood we will only get one chance at creating a superintelligence. Whether there is a transition period or not, it will be exceedingly hard, perhaps practically impossible, to abort it. But the paperclip maximiser, as silly as it may sound, indicates that it is crucially important for such a superintelligence to 'be like us' and to 'like us'. And that is a task far harder than one might think (one that Elon Musk is attempting as well with OpenAI, an open source effort to create a safe superintelligence: https://openai.com/ ).

If a monkey, just slightly less intelligent than us, cannot even comprehend what we do, and lacks even the most basic of concepts that we all share, then how unfathomable will a superintelligence be? It won't just be 'slightly more intelligent', a gap comparable to that between us and elephants, ravens, whales, dolphins, or monkeys. It will be like a god, and perhaps even without the 'like a'. It could give us immortality, make Earth like paradise, and fulfil our every desire. That is not to say we are doomed to obsolescence, like ants writhing invisibly around the feet of humans carelessly stamping on them left and right, but the ways we might potentially keep up with such a superintelligence sound less than desirable at first glance: https://waitbutwhy.com/2017/04/neuralink.html

Of course, it all sounds so fantastical and absurd, it is probably all just nonsense… But what if it isn’t? Can we take that risk? If it isn’t nonsense, how crucial, then, is it that we develop a proper superintelligence? One that aligns with us, our morals and values, even as they evolve. One that knows what we want, even if we cannot articulate it; can you define what ‘happiness’ is, for yourself?

Perhaps it is navel-gazing nonsense. Or perhaps we are actually on the brink of creating a hopefully-benevolent god.

As always, opinions are divided. What is yours?

Main sources used:

Artificial Intelligence and Its Implications for Future Suffering

The AI Revolution: The Road to Superintelligence


http://www.aaai.org/home.html
