Technology of the Week – The Housing Industry

5 October 2017


The video below describes how online platforms revolutionized the housing industry and the way in which homeowners, buyers, and tenants connect with each other in the Dutch housing market:

Group 45: Rosanne Baars, 406184; Roy Ouwerkerk, 459406; Yuxin Sun, 406080; Pieter Vreke, 372189


1. History

The first real estate brokers in the Netherlands appeared in 1284. They acted as intermediaries between trading partners and earned money through commissions. In the home-rental industry, homeowners initially connected with tenants via physical notes in public spaces. In the 19th century, housing corporations and social housing emerged. Consequently, private homeowners struggled to find tenants and began relying on brokers, who received a fee for every contract signed (Van den Elzen, 2013).

2. Current Situation

Decades later, the rise of two-sided online brokerage platforms completely changed the way in which homeowners and tenants communicate. The emergence of these platforms weakened the role of offline brokers, a shift with several benefits:

  1. Online brokerage platforms no longer need the physical infrastructure and assets that offline brokers rely on.
  2. Building and scaling networks became cheaper.
  3. Homeowners have access to a larger customer base.
  4. Tenants have access to a larger number of houses and, thanks to greater transparency, can compare them more easily.
  5. Transaction costs decreased, since most of the physical communication is replaced by online communication.

These benefits eventually led to a decrease in the effort and time the rental process requires. For buying or selling a house, however, offline brokers still coexist alongside online platforms: buying a house has a large impact on people’s lives, which increases buyers’ willingness to pay for a traditional broker (Bloomberg, 2013).

3. Platform Properties

Current housing platforms have several properties:

  1. A triangular structure, composed of four parties:
          Demand side users: Tenants or buyers
          Supply side users: Homeowners
          Platform providers: Online platforms/communities
          Platform sponsors: Technology providers
          Platform providers and platform sponsors are usually part of the same company
          (Eisenmann, Parker, & van Alstyne, 2009)
  2. Strong cross-side network effects. A large number of homeowners offering houses on a website attracts tenants, and vice versa.
  3. Subsidies for either the demand or supply side of the platform, while charging the other side. In this way, more users are attracted and network effects increase. Which side is charged varies across platforms, because different platforms target niches with different willingness to pay (a toy sketch after this list illustrates both of these effects).
  4. Interoperability; many platforms redirect demand side users to related platforms.
  5. Targeting of niches, each with its own feature needs.
  6. Low homing costs. Subscription fees are reasonably priced and currently falling.
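
To make the interplay between cross-side network effects (property 2) and side subsidies (property 3) concrete, here is a minimal toy simulation. Every number in it – starting sizes, growth rates, fee sensitivity – is an illustrative assumption of ours, not data from any real platform:

```python
# Toy two-sided platform: tenants join because owners list houses, and
# vice versa (cross-side network effects). Charging a side dampens its
# growth, so subsidising one side can enlarge the whole network.

def simulate(periods=20, tenant_fee=0.0, owner_fee=5.0):
    tenants, owners = 100.0, 50.0
    for _ in range(periods):
        # New sign-ups on each side scale with the size of the OTHER
        # side and shrink as the fee charged to the joining side rises.
        new_tenants = 0.02 * owners * max(0.0, 1.0 - tenant_fee / 10.0)
        new_owners = 0.01 * tenants * max(0.0, 1.0 - owner_fee / 10.0)
        tenants += new_tenants
        owners += new_owners
    return round(tenants), round(owners)

# Subsidising the demand side vs. charging both sides the same fee:
print(simulate(tenant_fee=0.0, owner_fee=5.0))  # larger network
print(simulate(tenant_fee=5.0, owner_fee=5.0))  # smaller network
```

Which side gets the subsidy depends on whose participation is harder to win; in the sketch above, giving tenants free access ends up growing both sides.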

4. Future Expectations

Housing platforms are expected to change in the following way:

  1. The number of housing platforms is expected to keep increasing, because niches exist and homing costs are low. At the same time, population growth, internationalization and increasing transparency may lead to an increase in housing rental as opposed to buying property (Independent, 2016).
  2. Platforms’ profit margins may increase, since growing demand and shrinking supply in housing rental might raise the willingness to pay of demand-side users (De Volkskrant, 2014). However, increasing competition among platforms might drive margins down. These two effects may eventually cancel each other out.
  3. Smart-home devices will increase efficiency for both landlords and tenants (Independent, 2016). This will make property management easier, since such devices might eliminate physical communication and provide more information.
  4. Increasing use of big data analytics; Housing Anywhere, for example, is experimenting with this (Statsbot, 2017).
  5. Convergence of the house rental and real estate industries, because house buyers might get more comfortable with online approaches (Harvard Business Review, 2016).
  6. Companies from adjacent markets may envelop incumbents.

References

Bloomberg. (2013, March 8). Why Redfin, Zillow, and Trulia Haven’t Killed Off Real Estate Brokers. Retrieved from Bloomberg.com: https://www.bloomberg.com/news/articles/2013-03-07/why-redfin-zillow-and-trulia-havent-killed-off-real-estate-brokers

De Volkskrant. (2014, November). De kloof met de Randstad is niet meer te dichten. Retrieved from De Volkskrant: https://www.volkskrant.nl/binnenland/de-kloof-met-de-randstad-is-niet-meer-te-dichten~a3780161

Eisenmann, T., Parker, G., & van Alstyne, M.W. (2009). Opening Platforms: How, When and Why? Platforms, Markets and Innovation, Gawer, A. (ed.), Northampton, MA: Edward Elgar, 131-162.

Harvard Business Review. (2016, November 17). Real (estate) disruption: how technology may change the housing market. Retrieved from Harvard Business Review: https://rctom.hbs.org/submission/real-estate-disruption-how-technology-may-change-the-housing-market/

Independent. (2016, August 10). How technology could revolutionise the future of renting. Retrieved from Independent: http://www.independent.co.uk/money/how-technology-could-revolutionise-the-future-of-renting-smart-meter-landlord-bills-a7182306.html

Rabobank. (2017). Rental housing: Rising prices in a high-potential market. Retrieved from Rabobank: https://www.rabobank.nl/bedrijven/cijfers-en-trends/vastgoed/real-estate-report-2017/sub-markets/rental-housing

Statsbot. (2017). Housing Anywhere discovered the best way to share data across a team and help them stay on track with key metrics. Retrieved from Statsbot: https://statsbot.co/customers/housinganywhere

Van den Elzen, W. (2013). The future of the Dutch housing corporations.


Putting the E in Education

23 September 2017


In the first lecture, we spoke of Porter, and of how companies should or should not have an ‘internet strategy’. Porter argued that companies should integrate the internet into their strategy, and thus not have a separate internet strategy. He made that argument in 2001, however, and we brought up Instagram and Uber as counter-examples. The reading material for the previous lecture expanded upon ‘digital strategy’, and in doing so disagrees with Porter’s arguments. As the article considers why some companies are digitally mature while others are not, I wondered. For if digital strategy encompasses an attitude change, valuing agility and creativity and initiative – if it is, thus, much more than merely incorporating the digital world into a pre-existing strategy, if it is more than merely maintaining a Twitter page…

Then where is the digital strategy left in the education system?

To be sure, schools – for teenagers – proudly announce the purchase of iPads and laptops instead of books, and homework is offered online instead of in printed textbooks. Grades and timetables can be viewed on dedicated platforms. IT and digitalisation seem to be selling points for schools these days.

And yet…

That is reminiscent of Porter’s argument, isn’t it?

For nothing fundamental has changed. Schools operate exactly the same as they have always done. Coated in a digital layer, yes, but underneath, nothing has changed. Whereas in the business world, if the articles are to be believed, it is pretty clear that a digital strategy is so much more – and so much more successful, too, if not outright necessary for survival.

It is my opinion that digitalisation may prove to be a very necessary boon to education. Before expanding upon that, I will write of how hard it is to find any concrete data that may be used to improve education across the board – not only in one school, but systematically and fundamentally – and accentuate some of the problems that, in the perception of some, plague modern-day (specifically secondary) education systems. Lacking any large-scale, concrete and comprehensive research, perhaps it is simply up to the individual – to you – to consider how, or if, you want education to evolve.

PROBLEMS WITH RANKINGS

I have had thoughts on this topic for some years. I have always argued that secondary education – the education given from roughly age twelve to age eighteen – is the most inefficient and useless construction that exists. The naive whining of a teenager, perhaps, unhappy with long days of school, easily dismissed with a smile and a pat on the head. I cannot back up my opinions with statistics or facts, for the many education systems of the world differ very, very much, and there is a wealth of difference between even the very highest-ranked systems. One might also question the methodology behind these rankings; if one looks at the rankings of universities, one will find that they are ranked by very America-centric standards, including sports teams and attached research complexes. In my eyes, that is not at all the primary concern of a university, and rather irrelevant to the learning process, but such factors might be important to others, of course. It just goes to demonstrate that these rankings are rather subjective.

However, I am mainly speaking of secondary education here, and of rankings of nation-wide systems. There are a multitude of rankings here, too, and again, not without their own problems. Some include tertiary education – but a list of ‘top quality universities’ seldom ranks how much and how fast one learns, nor how relevant this is – while others exclusively look at test scores – but is the stereotypical South Korean culture really desirable? Therefore, I do not think it is useful to pick a ranking, compare it to the prevalence of a digital strategy, and discuss what one can learn from this.

Even so, it is worth mentioning a few factors that I see amongst countries that consistently rank highly, purely to indicate how little one can conclude from these rankings; this would hardly be a good article if I didn’t at least give some reasons for dismissing what comes closest to a theoretical background from which we might draw conclusions. One factor, often mentioned for Finland, is the absence of homework. Finland has ranked first occasionally – but France has ranked second occasionally, and French teachers assign a lot of homework to their students. Another reason I have seen for the success of the Finnish education system is the absence of different ‘levels’ dividing children based on how intelligent they are (in the Netherlands, for instance, there are roughly three levels, those being VMBO, HAVO, and VWO). Yet the failure of the USA’s education system is often ascribed to a lack of such levels, to treating everyone the same and pretending everyone can be the best of the class if they only work hard enough. As is apparent, these rankings are full of contradictions and personal biases upon closer inspection – but let it be noted that this entire article, too, is one of personal bias.

Personally, I think class sizes, teacher-to-student ratios, and ‘freedom’ are more important qualifications than grades or the number of people in tertiary education. Indeed, it should be realised that not everyone needs to enter tertiary education, as people in the USA are realising, if only through their inability to pay the outrageous costs. And in the Netherlands, the wages of plumbers have soared, and there are shortages in many more such trades – elderly care, for example – as people push themselves into higher and higher education. Not everyone needs to be trained for management or aeronautics, as without trash collectors the world would be a far worse place. Besides, the way one enters tertiary education differs per country; some demand minimum grade point averages, others demand qualifications such as the GMAT, and yet others demand a diploma from a certain level of secondary education – and then there are some universities that limit their student body, by offering only a predetermined number of places in their courses. Countries and cultures differ.

Grades, too, are a wholly problematic factor to take into account; one might convert the 20-scale grading of the French to the 10-scale grading of the Netherlands, and similarly convert the letter-based grading of the UK and the USA as well, but does that actually paint an accurate picture? Not at all. For instance, the French literally never grade something with 20/20, for ‘nothing is perfect’. And in the Netherlands, an 8.5/10 is equal to the very highest grades of the UK and the USA. Not because Dutch students are dumber – or are we? – but because grading culture is simply different (for example: https://www.studyinholland.nl/documentation/grading-systems-in-the-netherlands-the-united-states-and-the-united-kingdom.pdf ). One might think that this is still relatively easy to account for, but explain that to the universities in the USA that require Dutch students to have an average grade of 9/10 or 10/10. From a Dutch perspective, that is utter lunacy.

But even if we could perfect this, there is still the cultural issue to consider; do we want our (hypothetical, I presume) children to spend their entire days – including evenings – at school, or working for school, just to get the highest grades possible? Do we want to pretend mental health won’t suffer under this, that such excessive competition is good, to push ourselves to the very limit of what we can achieve and beyond? You might be outraged by some of the stories that emerge from the stereotypical south-east Asian education systems.

PROBLEMS WITH EDUCATION

And it is all so useless, for what does a test do but capture an irrelevant snapshot of ourselves? What if we are sick, or menstruating, or what if we just broke up with our loved one, or what if a parent just died? We would achieve a lower test score. And what if we took the same test twice on the same day, without any studying at all in between? We should achieve the exact same test score with the exact same answers, but I would bet that we actually wouldn’t. Why do we use tests at all – and not merely use them, but utterly swamp teenagers with them, with three per week considered an entirely reasonable amount – to determine whether someone knows the material sufficiently well? We all know that we mainly store all this knowledge in our short-term memory anyway – and that our final exams are not much different from the tests we received in the years (note the plural) before, so if we need to learn the same things over and over again, well, why should we not learn a day before the test and see how it goes? And if it goes wrong, we’ll have dozens of tests to make up for it, so it is better to enjoy our free time.

But what is the alternative? Speaking with a teacher, a private conversation, so that the teacher may ascertain how much we know? But that is even more prone to bias than an exam of open questions. It is also harder to standardise, harder to organise – costing far more time and labour; what will the other students do in the meantime, and how long would they all even need to wait? – and so on. And while some students might be able to better expand upon their answers, indicating that they did actually grasp a deeper principle, other students might grow nervous and anxious, and perform worse. What, indeed, can one offer as an alternative to tests? Most multiple-choice tests are the single worst method of measurement, teaching only rote memorisation and often relying more on literary tricks to confuse the student, and on the guessing ability of said student, than on actual knowledge – even the inventor of the multiple-choice test indicated as much – but they are also the easiest to grade and the least prone to grading bias (one of the many, many articles discussing multiple-choice tests: https://www.theguardian.com/commentisfree/2013/nov/12/schools-standardized-testing-fail-students ).

It makes sense to not do any homework, for apart from perhaps mathematics, it is not as if homework actually adds anything useful. With mathematics, the homework often resembles the test, and you might need to develop a certain proficiency in how to tackle a problem. On the other hand, with history, for instance, one is just scouring the text to find this or that date, which one will promptly forget, and one will then eventually need to relearn these dates for a test. It is all an exercise in futility and best ignored, despite what teachers profess.

Or is it? Does it actually make sense? A study often cited is a meta-analysis by Harris Cooper ( http://journals.sagepub.com/doi/abs/10.3102/00346543076001001 ), and this casts a different light on the above paragraph. This study shows a correlation between doing homework and achieving higher grades – but it is a very weak correlation, for one, and wouldn’t it make sense that students motivated enough to do their homework are also motivated enough to study just a bit longer for tests? There is a plethora of articles and authorities that argue for or against homework, drawing upon their own personal experience as teachers, even having conducted small-scale experiments in their own classrooms, but they remain anecdotal at best. As already said: Finnish education entirely lacks homework, but French education is full of it, yet both are top contenders in various education rankings.

But then, the exact same argument, of there being little reason to do homework, applies to sitting in a classroom; can one ever look back on a day of secondary education and think ‘in class A, I learned B, and in class X, I learned Y’? One can do this for university, yes, but for secondary education? I highly doubt it. Some would argue that schools have a purpose beyond educating; they are social institutions, where children find new friends to play with and to discuss the perilous changes that come with being a teenager, growing into an adult. It is not for nothing that schools offer physical education – being physically active with sports and the like – or that more and more schools choose to stock their cafeteria only with healthy foods and drinks. Some would even argue that schools keep children off the streets while their parents work, and that this daycare-esque function is also an important part of school.

These are but three problems, viewed through the eyes of students. Schools themselves have problems too, such as the increase in administrative work, the increase in parental demands, the shortage of teachers, the decrease in funds, or the fact that teachers, in most countries, do not enjoy a reputation similar to that of doctors. The precise factors differ per country, of course, but they are factors that influence how well an education system – hypothetical or actual – can function. The key takeaway here, I believe, is that again, studies and facts are scarce, and that it is very easy for personal biases to seep into this. As do my own, no doubt.

DIGITALISATION

So far, I have lightly touched upon the various rankings of education systems, exposed the differences in culture, common practices and attitudes between countries, and further exposed a variety of problems with education. I think the cultural aspect is very important, and that it strikes at the heart of the question of digital strategy. For what is strategy without vision? Any company would outline both of these in the same breath, on the same webpage. A question we should ask, then, is: what exactly is our vision for the ideal education system?

It might be worthwhile to take a trip through history, though I will do so only very quickly. A multitude of articles can be found online, most of them written in the USA, speaking of how the current education system originates from the industrial era. An era where rote memorisation, obeying orders, and mindlessly doing the same tasks over and over and over again were more valuable than they are now. For in the modern world, we value almost the exact opposite: initiative, intelligence, creativity, and freedom, for example. This, however, paints too stark a contrast, and though I could dedicate paragraphs to this, I would instead advise you to do your own research, should you wish to do so (or you could read a random article, but be aware of biases: http://hackeducation.com/2015/04/25/factory-model ).

One might argue that some teachers in secondary education are still prone to exiling students from the classroom the moment their authority gets challenged. One might counter-argue that some teenagers are wont to cause chaos and disrupt class if a teacher does not remove them. One might speak of the unfairness of punishing people for not doing homework, of conducting so many tests, or of a host of other things. Regardless of how fair this characterisation is, it does expose a structure that hasn’t changed in more than a century – yet how many businesses still operate under the principles of yesteryear? Merely replacing textbooks with laptops won’t change anything. Just as merely hosting a website didn’t change anything. Yet businesses were forced to change, by external forces, lest they go bust. Schools have the benefit of being state-maintained, in many cases, and they enjoy a very different status from commercial businesses, of course. But even so, there are already external forces at work, in a sense, and some schools already make use of them.

WhatsApp and Facebook facilitate the communication of students outside of school, and though largely employed for social interaction, they are also used for sharing knowledge and for answering questions pertaining to school. This adds a whole new dimension to interaction between classmates and to working together on homework or projects, as do tools such as Dropbox and Google Drive. Websites such as Coursera or Khan Academy can educate people in far more subjects than any given school could, but more important for the moment are websites such as Google or Wikipedia; can you imagine going to libraries, buildings of brick, looking through dusty tomes, physical books, to do research?

A growing number of schools are changing how they approach education, making use of these facilities. Working in groups is encouraged more, partly because the modern-day labour market does so, and partly because the facilities now exist to do so properly. Instead of rote memorisation, we may conduct theoretical research or manage practical projects, given the freedom to do whatever we think is best and have a teacher judge our work. In this sense, secondary education is ever so slowly starting to resemble tertiary education. From my own experience – and that of my sister, six years my junior – I can point at, for instance, how she was allowed to work on a research project with students from the Erasmus MC. Offering this kind of ‘real world experience’, or ‘hands-on experience’, I think, will become more prevalent.

Might we see exams being done by small groups of students, debating together over what the right answer is, even being allowed to use Google to find it, perhaps? It would closely resemble the real world, where it is not so much factual knowledge but tacit knowledge, experience, that is valued. There are even some calls to abolish rote memorisation altogether – what use does mathematics have when you have a calculator, or language, when you have a translator and a spell checker? – but that, I believe, would be the wrong thing to do. For one, both programming – and in this, it is similar to mathematics – and languages create a certain kind of mindset, of problem-solving and of analysis. Studies show that being proficient in multiple languages has all kinds of benefits far beyond merely knowing those languages; benefits concerning cognitive tasks, or multi-tasking, or protection against Alzheimer’s, and so on (studies on this are easy to find, for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3583091/ ).

Even so, group work is how projects are conducted at tertiary education, and there are still (open and multiple-choice) exams there. It need not be a dichotomy. But I doubt that secondary education can replicate all that tertiary education offers – and why should it, when secondary education inherently offers a far broader and far less deep curriculum? Professors would need to have years of experience in their given field, and while we can find such professors at universities, it is different for secondary education. Besides, teaching in the classrooms of secondary education is far more interactive than giving a lecture in a university hall, and secondary education is also where teenagers grow into adults. This requires a more social skill set that professors drawn from ‘the real world’ might lack. And all this is without taking wages into account; secondary education already has large problems with attracting teachers, and this would make that problem far worse.

Perhaps we might see more ‘freedom’ at secondary education, with students not needing to attend class or to do homework depending on their average grade for a given subject. Perhaps there might be more opportunities for students to learn about their preferred subjects, through the internet, with teachers serving as a guide by indicating relevant material and answering questions. Perhaps education might then become a place of learning in the broadest of senses, with a student equally able to learn about Dutch as this student is able to learn about astronomy. In the farther away future, we might well have brain-to-brain interfaces, or at least brain-machine interfaces, completely upsetting the very concept of education. But all these ideas seem to be far-fetched, running into problems ranging from money to government mandates.

CLOSING WORDS

There are many, many ideas that could be mentioned, and it is not the purpose of this article to explore them all in depth. To me, it seems clear that digitalisation can achieve great things, in a multitude of directions. I think we are witnessing small changes here and there, largely staying within the confines of the last few hundred years but also seeking to better connect with tertiary education and the labour market. More group work, thanks to the rise of the internet, and more freedom, with student and teacher being more equal. Much more might be done, but that will require the education system to be re-invented, starting from the very vision that underpins it. A task too great for any single government, I believe.

But that doesn’t mean we can’t think about our ideal situation, and ever so slowly try to move towards it. You should form your own opinion by your own research – if you wish to do so at all; it is better to hold no opinion than to hold an uninformed one – on what your ideal is. I am by no means an authority on the subject of education systems, and I do not have an ideal ready to be feasibly implemented right at this very moment. But if nothing else, you, the reader, will at least have spent some time thinking about this, and thoughts are the seeds for all change everywhere.

So what would you like to see changed? What can be improved? What are your thoughts, your ideas, your views?


Artificial Intelligence: How I Learned to Stop Worrying and Love Skynet

12 September 2017


What is the first thing we think of when we hear the phrase ‘Artificial Intelligence’ (AI)? Mechanical monsters bent on exterminating humanity, as the film industry teaches us with sci-fi movies such as Terminator, with its Skynet?

During the lecture on the eleventh of September, 2017, the professor asked who had ever made use of AI. Siri was held up as an example of an AI, and that is correct – but the many, many hands that weren’t raised were probably incorrect; does anyone not use Google’s search engine on a daily basis, for instance? Artificial Intelligence is everywhere, from the spam filters on our email to the cars we drive daily.

AI, then, is not hardware, but software; a brain of software, using hardware as its medium, capable of connecting data through mathematical reasoning to reach new insights and conclusions.

So far, these are by and large background processes, largely invisible to the uncritical eye. This is called ‘weak AI’, or ‘Artificial Narrow Intelligence’ (ANI). Siri is an example of this, as are the various game (chess, Go) champions, the many Google products – its translator, its search engine, the spam filters of its email provider – the whole process of applications or websites recommending products, videos, friends, or songs, self-driving cars, and so on. As should be clear, this kind of AI is not weak in that it can barely achieve anything – one would hardly call self-driving cars simple products, child’s play to create – but it is narrow, in that it can only excel at a very narrowly defined task. Hence the term Artificial Narrow Intelligence: ANI.

ANI is practically everywhere these days. But if there is a narrow AI, then surely there is a more general AI as well? Indeed: Artificial General Intelligence, AGI, also known as strong AI. A human-like AI, as smart as the average human on all tasks – not just one narrowly defined one – capable of performing any intellectual task a human can. This AI does not exist yet – unless, of course, you feel that now would be an excellent time to expose yourself as the Terminator, dear reader?

There are two problems that we still have to tackle in order to create AGI. One concerns computational power. Here, according to the TOP500, a Chinese supercomputer, the Sunway TaihuLight, currently takes the lead. It overtook the Chinese Tianhe-2 in June of 2016 (TOP500 compiles its list every June and every November), and as of June 2017, it still claims the number one spot. It can perform 93 quadrillion floating-point operations per second (93 petaflops), which is about thrice as much as the Tianhe-2 (33.86 petaflops). Is it more than the human brain? Various scientists place the human brain anywhere from 10^11 to 10^13.5 flops, while others place it an order of magnitude higher, or dismiss the comparison outright. For further reading, https://www.quora.com/How-many-FLOPS-is-the-human-brain – including the comments – might be a nice place to start, but Google is full of many wildly differing claims.

It hardly matters for now, though. Even if a supercomputer exists that beats the human brain, it would only be better in the number of floating-point calculations it can perform – but at what cost? The human brain requires 20 watts – the energy of a light bulb – and 115 square centimetres, which more or less fits in your hand. Green500, which ranks TOP500’s supercomputers by their energy efficiency, gives the Sunway TaihuLight fourth place: it delivers 6,051.30 megaflops (million floating-point operations per second) per watt. Times 20 watts, that makes roughly 121,026 megaflops – about 0.00012 petaflops – on the same power budget the human brain runs on. That is quite a bit short of 93 petaflops. Further, whereas the human brain fits in the palm of a hand, the Sunway TaihuLight fits comfortably in a room of a thousand square metres. Not quite the hardware the Terminator ran around with.
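
For those who like to see the arithmetic spelled out, here is a back-of-the-envelope sketch of the comparison above. The 20-watt brain figure and the Green500 efficiency number come from the paragraph itself; the rest is plain unit conversion:

```python
# Back-of-the-envelope: how much computing does a supercomputer get
# on the human brain's power budget?

MFLOPS_PER_WATT = 6_051.30   # Sunway TaihuLight efficiency (Green500)
BRAIN_WATTS = 20             # rough power draw of the human brain
PEAK_FLOPS = 93e15           # TaihuLight at full power: 93 petaflops

budget_flops = MFLOPS_PER_WATT * 1e6 * BRAIN_WATTS  # ~1.2e11 flops/s

print(f"On 20 W: {budget_flops:.3g} flops/s")
print(f"Shortfall vs. full machine: {PEAK_FLOPS / budget_flops:,.0f}x")
```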

The second problem with reaching AGI, aside from computational power, is intelligence. There are roughly three methods currently being attempted. We could simply copy our brain. Literally, with a 3D printer, or slightly less literally, by setting up a network of neurons that would fire randomly and not achieve much at all in the beginning. But the moment it does achieve something, such as correctly guessing that a certain picture is a muffin and not a puppy, we can reinforce this path, making it likelier to be used in the future and therefore likelier to be correct in the future. With this approach, the one-millimetre-long brain of the roundworm C. elegans, consisting of 302 neurons, was emulated and put into a LEGO body (because we’re all children at heart) in 2014. For comparison, the human brain consists of roughly 100 billion neurons – but even so, as our progress increases exponentially, some have eyeballed this method to achieve success somewhere around 2030 to 2050.
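
As a toy illustration of that ‘reinforce the paths that happened to be right’ idea – our own sketch, not code from any actual brain-emulation project – consider a tiny network that guesses at random at first and strengthens whichever connections led to a correct guess:

```python
# Minimal "reinforce the successful path" sketch: random weights guess
# muffin vs. puppy; connections active during a correct guess get
# nudged in the direction that made them right.
import random

weights = [random.uniform(-1, 1) for _ in range(4)]

def is_muffin(features):
    # "Fire" (guess muffin) if the weighted input sum is positive.
    return sum(w * x for w, x in zip(weights, features)) > 0

# Toy feature vectors labelled muffin (True) or puppy (False).
data = [([1, 0, 1, 0], True), ([0, 1, 0, 1], False),
        ([1, 1, 1, 0], True), ([0, 0, 1, 1], False)]

for _ in range(500):
    features, label = random.choice(data)
    if is_muffin(features) == label:
        # Correct guess: reinforce the active connections so the same
        # path is more likely to fire the same way next time.
        direction = 1 if label else -1
        weights = [w + 0.05 * x * direction
                   for w, x in zip(weights, features)]

print(sum(is_muffin(f) == y for f, y in data), "of", len(data), "correct")
```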

A second method is to copy evolution, instead of copying the brain. The large downside is that evolution has had billions of years to play around with us. The upside is that we are not evolution; we are not driven by random chance, and we have the explicit goal of creating intelligence. Evolution might well select against intelligence, because a more intelligent brain requires more energy – energy that might be better spent on other abilities, such as staying warm. Fortunately, unlike evolution, we can supply energy directly, which might be highly inefficient, but we can improve that over time.

The third method is to let our budding AI figure it out for us, by researching AI and changing its own code and architecture based on its findings. Unsupervised machine learning – but of course, we can shut it off before it quickly becomes more capable at anything the human brain can do and becomes a superintelligence, right?

Right?

You just know the answer is ‘no’ when someone poses the question.

Next to Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), there is a third variant: Artificial Superintelligence (ASI). Most agree that a self-improving AI is the likeliest to go from ANI to AGI – and even if it isn’t, there is still no reason to assume that no self-improving AI will ever come into existence. If a self-improving AI reaches general intelligence – that is, if it becomes just as capable as humans are…

Then it is inherently more capable already. It is self-improving, after all, constantly applying upgrades and fixes to improve itself even more. It has microprocessors, some of which today run at 3.6 GHz, whereas our own neurons run at a measly 200 Hz. It communicates at the speed of light, 299,792,458 metres per second, whereas our brain does so at a speed of 120 metres per second. Its physical size is scalable, allowing it to increase its memory (RAM and HDD; short-term and long-term memory), whereas our brain is confined by our skull – but we couldn’t really expand our brain anyway, because our 120 metres per second would be too slow to keep up.

There are many more factors to consider. Humans cannot work literally every second of their lives, as humans tire out and lose their focus. Humans are prone to making mistakes, doubly so when tired or unfocused; to err is human, after all. Humans cannot easily work in a variety of circumstances, such as radioactive places, deserts, or the arctic.

Then there is humanity’s biggest advantage, the one that has enabled humanity to take the world by storm and reshape it in our image, with towering blocks of concrete and signs of human habitation everywhere. Humans developed language, and lived in communities – tribes – allowing not individual humans, but all of humanity, to learn. Fathers would remember the lessons of their own fathers and teach them to their peers and their sons, and so the collective knowledge of humanity increased gradually. Writing, and alphabets, were developed to better store this knowledge. It was stored in libraries, written by hand, until the printing press was invented – it is no wonder that the printing press spawned a wave of revolutions and ideas, now that every single citizen could read books and form their own opinions. Universities taught all that humanity knew, and with every generation, this collective knowledge grew exponentially. Now we have the internet and computers, allowing for unprecedented communication and storage, and if you compare the last 50 years to the 50 years before, and to the 50 years before that, and so on, it becomes quite apparent how rapidly our knowledge has grown recently (for more reading, search for Ray Kurzweil’s ‘Law of Accelerating Returns’).

But we are humans, individuals still. We are not a worldwide network of interconnected computers. We have an advantage that allows us to utterly dominate the entirety of Earth – and AIs will have this advantage ten times over.

This raises the question: what is AGI supposed to be anyway? It is a human-centric measurement, but humans are not special; the average IQ of a human is not a universal constant, it is just another point on a graph of the average intelligence of species. For an AI, it is just the next number to pass, after passing worms and monkeys and ravens and more. And it is a point exceedingly close to both a mentally challenged individual and Einstein, despite the vast difference we would perceive between the two.

There are three broad views on the transformation from AGI to ASI. The arguments for and against these views are more philosophical than what is presented in the paragraphs above, which is why I will include an article expanding upon each of the three views, for further reading should you desire it.

Proponents of the soft take-off view say that the concept of self-improvement already exists: in large companies. Intel has tens of thousands of people and millions of CPUs, all of which are used to become even better at developing CPUs; it is highly specialised in this task. So, too, with an AGI; this AI will participate in our economy (which knows exponential growth as well) because it can specialise in something to earn money with – money that will be used to buy the means for further self-improvement. This specialisation – letting others develop things for you, which you then buy with the money earned from your own specialised skillset – has historically been a great advantage of humanity, and will be more advantageous for this AI, too, than doing everything on its own. The AI would have to compete with almost all of humanity and all of humanity’s resources to be more efficient in every single thing than any specialised human is – and an AGI, a human-level AI, is not so vastly more efficient than all of humanity together.

There is a scientific foundation for this, such as what is written in the book ‘Artificial General Intelligence 2008: Proceedings of the First AGI Conference’ (in short: approximately 1% of the citizens of the USA are scientists or engineers, approximately 1% of that number devotes itself to cognitive science, we can estimate cognitive science to improve its models at a rate equal to Moore’s Law, from this we can estimate humanity’s ability at intelligence improvement, and so we can calculate that, for now, it makes sense for an AGI to work and interact with us to improve itself), but if you simply wish to read one of the many articles that deal with the subject: http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html
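
Purely to make that parenthetical estimate tangible, here is a rough re-run of its arithmetic. The two 1% figures come from the summary above; the population count and the Moore’s Law doubling time of two years are our own illustrative assumptions:

```python
# Rough numbers behind the soft take-off estimate summarised above.
US_POPULATION = 320e6                 # assumed, order of magnitude
scientists = 0.01 * US_POPULATION     # ~1% are scientists or engineers
cognitive = 0.01 * scientists         # ~1% of those do cognitive science

DOUBLING_YEARS = 2.0                  # assumed Moore's Law doubling time
years = 10
improvement = 2 ** (years / DOUBLING_YEARS)

print(f"~{cognitive:,.0f} people improving models of cognition")
print(f"~{improvement:.0f}x model improvement over {years} years")
```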

A second view is that of the semi-hard take-off scenario, comparable to what is often seen in movies: an AI connects to the internet and rapidly reads and learns everything present there. Perhaps you have heard of the ‘paperclip maximiser’ scenario ( https://nickbostrom.com/ethics/ai.html ): an AI is made with an explicit purpose (producing paperclips, in this case), and as it tries to do this as efficiently as possible, it transforms the entire world – and beyond – into a paperclip factory, including all humans. That makes sense; the AI’s only goal was to produce paperclips, after all, and why would the AI perceive there to be any moral constraints? The semi-hard take-off scenario can perhaps best be seen as a hard take-off scenario with a temporary transition phase in front of it – but during this phase, it may well be unstoppable already: http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/

The final view is that of the hard take-off, wherein the AI becomes superintelligent in mere minutes.
One of the many articles for your perusal: http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

There is a colossal danger with both the semi-hard and the hard take-off scenarios, for in all likelihood, we will only get one chance at creating a superintelligence. Whether there is a transition period or not, it will be exceedingly hard – practically impossible? – to abort it. But the paperclip maximiser, as silly as it may sound, indicates that it is crucially important for such a superintelligence to ‘be like us’ and to ‘like us’. And that is a task far harder than one might think (one that Elon Musk is attempting as well with OpenAI, an open-source attempt to create a safe superintelligence: https://openai.com/ ).

If a monkey, just slightly less intelligent than us, cannot even comprehend what we do, lacks even the most basic of concepts that we all share, then how unfathomable will a superintelligence be? It won’t just be ‘slightly more intelligent’, a gap comparable to that between us and elephants, ravens, whales, dolphins, or monkeys. It will be like a god, and perhaps even without the ‘like a’. It could give us immortality, and make Earth like paradise, and fulfill our every desire. That is not to say we are doomed to obsolesce, like ants writhing invisibly around the feet of humans carelessly stamping on them left and right – but the ways we might potentially keep up with such a superintelligence sound less than desirable at first glance: https://waitbutwhy.com/2017/04/neuralink.html

Of course, it all sounds so fantastical and absurd, it is probably all just nonsense… But what if it isn’t? Can we take that risk? If it isn’t nonsense, how crucial, then, is it that we develop a proper superintelligence? One that aligns with us, our morals and values, even as they evolve. One that knows what we want, even if we cannot articulate it; can you define what ‘happiness’ is, for yourself?

Perhaps it is navel-gazing nonsense. Or perhaps we are actually on the brink of creating a hopefully-benevolent god.

As always, opinions are divided. What is yours?

Main sources used:

Artificial Intelligence and Its Implications for Future Suffering

The AI Revolution: The Road to Superintelligence


http://www.aaai.org/home.html
