Goal Congruence in the Age of AI

29 September 2017


Tay was an AI experiment conducted by Microsoft in 2016; the name is reportedly an acronym for “Thinking About You”. Tay was an AI-driven online chatbot that was supposed to entertain people as an always-available chatting partner, learning from its conversations and “getting smarter” over time. Unfortunately for Microsoft, it did not take Tay long to become a racist and generally politically incorrect “person”. It began to tweet things it had picked up from random conversations, none of which were pre-programmed, such as “I hate all humans” and “9/11 was an inside job”. Naturally, the derailed chatbot also mentioned Hitler. Microsoft unsurprisingly took the AI offline soon after that.

In an article in the Guardian, cosmologist Max Tegmark (also author of the newly released book “Life 3.0: Being Human in the Age of AI”, shown on a lecture slide) discussed his view of the dominant perception of AI: “I think Hollywood has got us worrying about the wrong thing. This fear of machines turning conscious and evil is a red herring. The real worry with advanced AI is not malevolence but competence. If you have superintelligent AI, then by definition it’s very good at attaining its goals, but we need to be sure those goals are aligned with ours.” Similarly, Elon Musk said at a Vanity Fair conference in 2014: “If its [the AI’s] function is just something like getting rid of email spam and it determines the best way of getting rid of spam is getting rid of humans…”. All in all, I agree that there needs to be more focus on AI competence and goal alignment. Tay was designed to be competent and neutral, yet human interaction still corrupted it, exposing how little goal congruence Microsoft had built into the bot.
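Musk’s spam example can be made concrete with a toy sketch. The Python snippet below is entirely hypothetical (the inbox model and both scoring functions are invented for illustration); it just shows how a policy that is perfectly “competent” at a misspecified objective can score optimally while destroying what we actually value:

```python
from dataclasses import dataclass

@dataclass
class Inbox:
    spam: int
    legitimate: int

def spam_score(inbox: Inbox) -> int:
    # Misspecified objective: only "no spam" counts; nothing else does.
    return -inbox.spam

def aligned_score(inbox: Inbox) -> int:
    # Closer to human intent: keep legitimate mail, heavily penalise spam.
    return inbox.legitimate - 10 * inbox.spam

def delete_everything(inbox: Inbox) -> Inbox:
    # A "competent" policy under the misspecified objective.
    return Inbox(spam=0, legitimate=0)

def filter_spam(inbox: Inbox) -> Inbox:
    # A policy that also respects what the user actually wants.
    return Inbox(spam=0, legitimate=inbox.legitimate)

inbox = Inbox(spam=40, legitimate=60)
for policy in (delete_everything, filter_spam):
    result = policy(inbox)
    print(f"{policy.__name__}: spam_score={spam_score(result)}, "
          f"aligned_score={aligned_score(result)}")
# Both policies are optimal under spam_score; only filter_spam is
# acceptable under aligned_score. Competence at the wrong goal.
```

Both policies tie under the stated objective, so nothing in the objective itself steers the system away from the destructive one; that gap is exactly what Tegmark and Musk are pointing at.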

Now, to put this sentiment in a business context, think of AI product recommendation systems. Companies like Netflix and Amazon spend a great deal of money and manpower on their algorithms because they directly impact commercial success; Netflix spending around 5 billion dollars a year on programming reflects how much rides on what its code recommends. However, is there goal congruence here? We tend to think these recommendation systems are a neutral influence on us, but do they want what we want?
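To show what that gap could look like, here is a minimal sketch; the catalog, numbers, and scoring are all made up and do not reflect how Netflix or Amazon actually rank titles. The point is only that a system ranking by predicted engagement can order things very differently from a user ranking by their own satisfaction:

```python
# Made-up catalog: (title, predicted hours watched, predicted user rating / 5).
catalog = [
    ("Outrage-bait reality show", 9.0, 2.5),
    ("Autoplay-friendly procedural", 7.5, 3.0),
    ("Acclaimed documentary", 2.0, 4.8),
    ("Slow-burn foreign drama", 1.5, 4.5),
]

def platform_rank(items):
    # What the system optimises for: predicted engagement.
    return sorted(items, key=lambda item: item[1], reverse=True)

def user_rank(items):
    # What the viewer would optimise for: their own satisfaction.
    return sorted(items, key=lambda item: item[2], reverse=True)

print("Recommended order:", [title for title, _, _ in platform_rank(catalog)])
print("Preferred order:  ", [title for title, _, _ in user_rank(catalog)])
# The two orderings disagree: the recommender is highly competent at its
# goal, but its goal is not the same as the user's.
```

The recommender here is not malicious, merely competent at an objective nobody would endorse if asked directly, which is the same pattern as above.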

Personally, I don’t think there is. Imagine a scenario where recommendation engines are absolutely brilliant, anticipating your every move online. You really like the first couple of recommendations, so you follow more. And more. At what point will we realize we are following breadcrumbs laid out for us? And once we blindly follow its recommendations, will the AI slowly derail us, just as we derailed Tay? What are your opinions on this?

Anthony, A. (2017) “Max Tegmark: Machines taking control doesn’t have to be a bad thing.” [online] Available at: https://www.theguardian.com/technology/2017/sep/16/ai-will-superintelligent-computers-replace-us-robots-max-tegmark-life-3-0 [Accessed 25 September 2017]

Hunt, E. (2016) “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter” [online] Available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter [Accessed 25 September 2017]

Musil, S. (2014) “Elon Musk worries AI could delete humans along with spam” [online] Available at: https://www.cnet.com/news/elon-musk-worries-ai-could-delete-humans-along-with-spam/ [Accessed 25 September 2017]

O’Reilly, L. (2016) “Netflix lifted the lid on how the algorithm that recommends you titles to watch actually works” [online] Available at: http://www.businessinsider.com/how-the-netflix-recommendation-algorithm-works-2016-2?international=true&r=US&IR=T [Accessed 25 September 2017]
