Artificial intelligence was officially born in 1956, when John McCarthy organized a workshop, the Dartmouth Summer Research Project on Artificial Intelligence[1]. The goal was to create an artificially intelligent being. Decades of unsuccessful attempts to build a machine with independent intuition followed. Then, sixty years later, in March 2016, the artificial intelligence program AlphaGo beat Lee Sedol, one of the world's top masters of Go (the ancient Chinese board game).
But is Go such a difficult game?
Although the concept is simple, the number of possible games is almost endless. Go is a two-player game in which the goal is to surround more territory than your opponent on a 19-by-19 grid. For comparison, in chess a player can typically choose from about 35 possible moves in a given turn; in Go, a player can choose from roughly 200. Over a whole game, the number of legal board positions alone is estimated at around 10^170, and the number of possible games is vastly larger still. So how much is 10^170? A lot, given that the number of atoms in the observable universe is estimated to be around 10^80. AlphaGo therefore achieved a major milestone by beating one of the best human Go masters at a game with almost endless possibilities; the rough calculation below gives a sense of the scale.
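To put those numbers in perspective, here is a quick back-of-envelope calculation. The branching factors (about 35 for chess, about 200 for Go) and the typical game lengths (roughly 80 and 150 moves) are rough assumptions taken from the comparison above, not exact figures.

```python
# Rough comparison of chess and Go search spaces.
# Branching factors and game lengths are approximate assumptions.
import math

def game_tree_magnitude(branching_factor, game_length):
    """Order of magnitude (power of ten) of branching_factor ** game_length."""
    return game_length * math.log10(branching_factor)

print(f"Chess: roughly 10^{game_tree_magnitude(35, 80):.0f} possible games")
print(f"Go:    roughly 10^{game_tree_magnitude(200, 150):.0f} possible games")
print("Atoms in the observable universe: roughly 10^80")
```

Even the smaller of these numbers dwarfs the count of atoms in the observable universe, which is why Go cannot be cracked by brute-force search alone.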
How did AlphaGo manage to beat Lee Sedol? Let’s start with the general idea. AlphaGo uses deep learning, a technique built from artificial neurons programmed to mimic, loosely, how neurons in the brain process signals. Deep learning belongs to a broader family of algorithms and techniques called machine learning, and artificial intelligence in turn can be defined as the set of algorithms and techniques that mimic human intelligence. In short, AlphaGo is a specific deep learning system; deep learning is a kind of machine learning; and machine learning is a kind of artificial intelligence. A minimal sketch of a single artificial neuron follows.
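To make the idea of an artificial neuron concrete, here is a minimal sketch in Python. The inputs, weights, and bias are made-up illustrative numbers, and this is not AlphaGo's actual code; a deep network simply stacks many layers of units like this one.

```python
# A single artificial neuron: weighted inputs are summed and passed
# through a nonlinear activation function. All numbers are illustrative.
import math

def neuron(inputs, weights, bias):
    """Compute activation(weighted sum of inputs + bias)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example: three inputs feeding one neuron
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1))
```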
My guess is that deep learning is going to help us understand the human brain better and, consequently, develop improved techniques for learning. Why? Because deep learning processes information in a way loosely modeled on networks of neurons. By feeding the machine a lot of data, the computer can learn to recognize and classify objects by itself[2]; the tiny learning sketch below illustrates the idea. This leads to the creation of so-called artificial intuition. As a result, I can imagine a world where artificial intuition will provide out-of-the-box perspectives and guide human beings in solving grand challenges.
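Here is a tiny illustration of learning from data: a single perceptron adjusts its weights from labeled examples until it classifies them correctly. The data set (the logical AND function) is just an illustrative assumption, nothing like the millions of Go positions AlphaGo was trained on.

```python
# A perceptron learns weights from labeled examples via a simple update rule.
# The training data (the AND function) is a toy illustrative assumption.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
            error = label - prediction
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function
weights, bias = train_perceptron(data)
print([1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
       for (x1, x2), _ in data])  # after training: [0, 0, 0, 1]
```

The machine is never told the rule; it infers it from the examples, which is the essence of the "learning by itself" claim above.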
AlphaGo vs. Lee Sedol enlightened our species once again. Let me explain. After watching the match between AlphaGo and Lee Sedol, Go players grasped the possibilities of deep learning. They were fascinated by how AlphaGo played, because it played unlike any human had played the game before. It made moves that did not seem to make sense at first sight but proved very effective in the end. By playing against or watching AlphaGo, players are influenced by the way the machine plays the game; in turn, they learn from AlphaGo by observing these new strategies, which creates a feedback loop. Lee Sedol himself expressed his positive impressions after the match, saying of AlphaGo, “I have improved already. It has given me new ideas”[3].
This has been the first glimpse of what AI can bring to the field of learning. Artificial intuition challenges us to think differently, which will reinforce and disrupt the way the human brain thinks time and time again. Because of these constant enhancements to our thinking, I believe that AI will challenge the “Homo sapiens sapiens” status that modern humans currently hold.
Sources:
[1] http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
[2] http://www0.cs.ucl.ac.uk/staff/d.silver/web/Publications.html
Fangning
Thanks for your post. As a Chinese person who fully understands how complicated Go is, I am very excited about the achievement of AlphaGo. The blog is very informative and clear, and I do not have any doubts about it. But since you mentioned the use of deep learning, the technique programmed to mimic human neurons, I would like to add some information about another use of this technique: Google Translate. It is now applied to Chinese-to-English translation in the Google Translate mobile and web apps, and Google says it will be extended to other languages over the next few months. The new translation system is called Google Neural Machine Translation (GNMT). Unlike the system used before (Phrase-Based Machine Translation, PBMT), GNMT “remembers” the whole sentence from beginning to end and translates it into one fluent sentence instead of stitching translated pieces together.
Take the sentence “李克强此行将启动中加总理年度对话机制,对加拿大总理杜鲁多举行两国总理首次年度对话” as an example.
Human translation: Li Keqiang will initiate the annual dialogue mechanism between premiers of China and Canada during this visit, and hold the first annual dialogue with Premier Trudeau of Canada.
GNMT: Li Keqiang will start the annual dialogue mechanism with Prime Minister Trudeau of Canada and hold the first annual dialogue between the two premiers.
PBMT: Li Keqiang premier added this line to start the annual dialogue mechanism with Canada Prime Minister Trudeau two prime ministers held its first annual session.
From the example, it is obvious that GNMT is a big improvement over the old Google Translate system. Like AlphaGo, I think GNMT is another great and successful application of deep learning.
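For readers curious what “remembering the whole sentence” looks like in code, below is a minimal, untrained sketch of the encoder-decoder idea behind neural machine translation: the encoder folds the entire source sentence into a context vector, and the decoder generates the target sentence word by word from that vector, instead of translating chunks independently. The vocabularies, sizes, and random weights are toy assumptions; real GNMT models are vastly larger, use attention, and are trained on enormous corpora.

```python
# Toy encoder-decoder forward pass with random (untrained) weights,
# showing the data flow of neural machine translation, not real GNMT.
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
src_vocab = {"李克强": 0, "访问": 1, "加拿大": 2}
tgt_vocab = ["<start>", "Li", "Keqiang", "visits", "Canada", "<end>"]

# Random parameters of two simple recurrent networks.
enc_emb = rng.normal(size=(len(src_vocab), hidden))
enc_W, enc_U = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, hidden))
dec_emb = rng.normal(size=(len(tgt_vocab), hidden))
dec_W, dec_U = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, hidden))
out_W = rng.normal(size=(hidden, len(tgt_vocab)))

def encode(sentence):
    """Fold the WHOLE source sentence into a single context vector."""
    h = np.zeros(hidden)
    for word in sentence:
        h = np.tanh(enc_W @ h + enc_U @ enc_emb[src_vocab[word]])
    return h

def decode(context, max_len=6):
    """Generate target words one at a time, conditioned on the context."""
    h, word_id, output = context, 0, []  # start from the "<start>" token
    for _ in range(max_len):
        h = np.tanh(dec_W @ h + dec_U @ dec_emb[word_id])
        word_id = int(np.argmax(h @ out_W))  # pick the highest-scoring word
        if tgt_vocab[word_id] == "<end>":
            break
        output.append(tgt_vocab[word_id])
    return output

print(decode(encode(["李克强", "访问", "加拿大"])))  # gibberish until trained
```

Because the weights are random, the output is gibberish; training on millions of sentence pairs is what turns this architecture into fluent translations like the GNMT example above.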