To explain how life has developed, Max Tegmark divides it into three stages of maturity. The first stage is Life 1.0, simple biological life. Within its lifetime, it cannot improve its performance or develop further; learning happens only across generations, through selective evolution. The second stage is Life 2.0, cultural life. Yes, you have guessed correctly: we also belong to this stage. Throughout our lives, we are able to learn new skills, new languages and new knowledge. The last stage is Life 3.0, technological life. It is able to produce its own software as well as its own hardware and no longer depends on slow evolution. As we learned in Information Strategy, machine learning and artificial intelligence belong to Life 3.0.

While learning that AI and ML are catching up with human skills like learning, calculating and remembering, it's not uncommon that the question "Which skills make us human?" pops up. If you think it's our creativity or intuition, you're wrong: by beating Lee Sedol in March 2016, the AI system AlphaGo demonstrated exactly these skills.

What happens if a machine is developed that can change its hardware as well as its software by itself, altering its competencies over months, days or even minutes? Researchers are already thinking about such scenarios and have concluded that it will be highly important to program the machine so that its goals are aligned with human wellbeing. But this alone is no guarantee. In the researchers' worst-case scenario, machines take over the world, come to see humans as a threat, and destroy them. The article about artificial intelligence by Brynjolfsson from week one states that human morality and the human mental state are the only things machines won't be able to "imitate".
However, in my opinion, human awareness is the only aspect of us that cannot be imitated. What do you think?
What makes us human?
17 October 2018
Thank you for this interesting article, Natali! Your last paragraph about the researchers' worst-case scenario ties in nicely with one of my posts, where I talked about the television show Westworld. As much as I agree that human morality and the human mental state cannot be imitated, I'm wondering whether this is even necessary for machines to pose a threat to humans in general.
As you discuss, there is going to be a moment when machines will be able to change their hardware as well as their software. It's all well and good that researchers conclude machines should be programmed so that their goals are aligned with human wellbeing, but my question is: how? Will we tell them "you should not do harm to humans" or "you should always listen to humans"? These are just some very simple commands that are going to be very, very hard to put into practice. As excited as I am about AI and ML catching up with human skills, it frightens me as well.