Death of a Celebrity Robot… and the Personification of Technology

7 October 2017


HitchBOT was a “hitchhiking robot”, equipped with GPS tracking, that was left by the roadside in Canada as a social experiment: drivers would pick it up, take it along, and drop it off for the next finder whenever they got tired of it. After gaining notoriety online as a cute robot, hitchBOT built up a following on Twitter, where people could track its travels. But then… early in its US road trip, someone decapitated the little robot and left it next to the road. An outcry of frustrated and downright angry people followed on Twitter under the hashtag #hitchbot.

HitchBOT showed the world how easily people can be tricked into feeling emotions for objects. It gave us a glimpse into human psychology when dealing with human-like technology. True, hitchBOT looked a bit like a human because of its eyes and limbs, but it was not an attempt to mimic a human and clearly looked like a simple robot. Nonetheless, people projected consciousness and agency onto an inanimate object. Given this built-in tendency, how are we going to account for it in the near future?

The personification of robots is not just a psychological phenomenon either. Companies are exploiting this tendency as well, making their technology more human-like to boost the effectiveness of their services. One example is ‘Paro the robot seal’, a therapeutic robot designed to make dementia patients feel like they are caring for something, instead of constantly being taken care of. Another example is the integration of chatbots into news media outlets, like the one the Guardian built last year: people can ask their virtual chatting partner for the latest news and the bot will respond with appropriate answers.
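To give a feel for what happens under the hood of such a news chatbot, here is a minimal sketch in Python. It is not the Guardian’s actual implementation; the headlines and the keyword matching are invented for illustration, and real systems use proper intent classification rather than substring checks:

```python
# A toy news chatbot: match the user's message to a canned intent and
# answer with the latest headlines. Illustration only, NOT the
# Guardian's actual implementation.

HEADLINES = [  # placeholder headlines, invented for this example
    "Markets rally after tech earnings",
    "New climate report urges faster action",
]

def reply(message: str) -> str:
    text = message.lower()
    # Crude substring matching; production bots use real intent classifiers.
    if "news" in text or "headlines" in text:
        return "Here are the latest headlines:\n- " + "\n- ".join(HEADLINES)
    return "Sorry, I can only fetch the news for you."

print(reply("Hey, what's the latest news?"))
```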

How far should we let this trend go? The more human-like technology becomes, the more we will trust it, and the deeper it will enter the most private aspects of our lives. We shouldn’t forget that every robot or virtual chatting partner could be selling the information we provide to other parties.

A more shocking thought to me, though, is the leverage tech companies would gain once customers trust their products and services the way they trust family members. Imagine this: your robot pet and personal friend suddenly asks for 100 euros because its firmware is out of date. Could you say no? And what if, for dramatic effect, the company in question also threatens that your friend might lose its memory if you do not upgrade?

Personally, I think consumers will need protection in the near future. In theory, efficient-market thinking dictates that the market adapts to consumer wishes, but in this case I do not believe consumers are capable of standing up for themselves. Our primate brains are simply not capable of the cold rationalism required to avoid being manipulated by the robots and AIs we love. Debate on robot and AI ethics and regulation is needed sooner rather than later.

To the readers of this post: Is the market capable of handling the ethical side of this dilemma? Are customers strong enough to stand up for themselves in the near future?


Brynjolfsson, E. and McAfee, A. (2017) ‘The Business of Artificial Intelligence’. Harvard Business Review.
Darling, K. (2016) ‘Ethical issues in human-robot interaction’. Available at: https://www.youtube.com/watch?v=m3gp4LFgPX0 [Accessed October 7th, 2017]
Good, N. and Wilk, K. (2016) ‘Introducing the Guardian Chatbot’. Available at: https://www.theguardian.com/help/insideguardian/2016/nov/07/introducing-the-guardian-chatbot [Accessed October 7th, 2017]
Griffiths, A. (2014) ‘How Paro the robot seal is being used to help UK dementia patients’. Available at: https://www.theguardian.com/society/2014/jul/08/paro-robot-seal-dementia-patients-nhs-japan [Accessed October 7th, 2017]
Paresh, D. (2015) ‘Hitchhiking robot that made it across Canada maimed on US road trip’. Available at: http://www.latimes.com/business/technology/la-fi-tn-hitchbot-destroyed-20150803-story.html [Accessed October 7th, 2017]
Twitter (2017) #hitchbot. Available at: https://twitter.com/hashtag/hitchbot [Accessed October 7th, 2017]


Goal Congruence in the Age of AI

29 September 2017


Tay A.I. was an experiment conducted by Microsoft in 2016; the name is an acronym for “Thinking About You”. Tay was an AI-driven online chatbot that was supposed to entertain people as an always-online chatting partner, learning from conversations and “getting smarter” along the way. Unfortunately for Microsoft, it did not take Tay long to become a racist and generally politically incorrect “person”. It began to tweet things picked up from random conversations, none of which were pre-programmed, for example “I hate all humans” and “9/11 was an inside job”. Naturally, Hitler was also mentioned by the derailed chatbot. Microsoft, understandably, took the AI offline soon after.
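This failure mode is easy to reproduce in miniature. The sketch below is my own simplification, not Microsoft’s actual architecture: a bot that naively memorises whatever users say and replays it to others, so any toxic input immediately becomes possible toxic output.

```python
import random

# Toy "learning" chatbot in the spirit of Tay: it memorises user messages
# verbatim and replays them later. A deliberate simplification, not
# Microsoft's actual architecture.

class NaiveChatbot:
    def __init__(self):
        self.memory = ["Hello! Nice to meet you."]  # harmless seed phrase

    def chat(self, user_message: str) -> str:
        answer = random.choice(self.memory)  # replay something learned earlier
        self.memory.append(user_message)     # learn from ANY input, unfiltered
        return answer

bot = NaiveChatbot()
bot.chat("The weather is lovely today.")
bot.chat("<some offensive message>")  # poisoned input enters the bot's memory
for _ in range(3):
    print(bot.chat("Tell me something."))  # the poison can surface for anyone
```

Because nothing in the loop checks the input against the designers’ goals, the bot’s behaviour drifts toward whatever its loudest users feed it.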

In an article in the Guardian, cosmologist Max Tegmark (also the author of the newly released book “Life 3.0: Being Human in the Age of AI”, shown on a lecture slide) talked about the dominant perception of AI: “I think Hollywood has got us worrying about the wrong thing. This fear of machines turning conscious and evil is a red herring. The real worry with advanced AI is not malevolence but competence. If you have superintelligent AI, then by definition it’s very good at attaining its goals, but we need to be sure those goals are aligned with ours.” Similarly, Elon Musk said at a Vanity Fair conference in 2014: “If its [the AI’s] function is just something like getting rid of email spam and it determines the best way of getting rid of spam is getting rid of humans…”. All in all, I agree that there needs to be more focus on AI competence and goal alignment. Tay was designed to be competent and neutral, but was corrupted by human interaction, exposing how little goal congruence Microsoft had actually built into the bot.

Now to put this sentiment in a business context, think of AI product recommendation systems. Companies like Netflix and Amazon spend a lot of money and manpower on their algorithms, because these directly impact their commercial success: Netflix’s reported five-billion-dollar programming budget shows how much rides on the code that decides what viewers see. However, is there goal congruence here? We tend to think these recommendation systems are a neutral influence on us, but do they want what we want?
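To make the incongruence concrete, here is a hedged sketch of a recommender whose objective is predicted engagement. The items and scores are invented, and real systems are far more sophisticated, but the shape of the objective function is the point:

```python
# A recommender that ranks purely by predicted engagement (clicks, watch
# time). Items and scores are invented for illustration.

catalog = {
    "acclaimed_documentary":  {"engagement": 0.30, "value_to_user": 0.90},
    "clickbait_reality_show": {"engagement": 0.80, "value_to_user": 0.20},
}

def recommend(catalog: dict) -> str:
    # The objective optimises the COMPANY's goal (engagement), not the
    # user's own assessment of what is good for them: "value_to_user"
    # never enters the ranking at all.
    return max(catalog, key=lambda item: catalog[item]["engagement"])

print(recommend(catalog))  # -> "clickbait_reality_show"
```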

Personally, I think they don’t. Think of a scenario where recommendation engines are absolutely brilliant and anticipate your every move online. You really like the first couple of recommendations, so you follow more. And more. At what point will we realize we are following breadcrumbs laid out for us? Once we blindly follow its recommendations, will the AI slowly derail us, like we derailed Tay? What are your opinions on this?
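The breadcrumb dynamic can even be simulated. In the toy loop below (all parameters invented for illustration), the user blindly follows recommendations, every click reinforces the recommender, and the user’s diet of topics narrows until one dominates:

```python
import random

# Toy recommendation feedback loop: each click makes the clicked topic
# more likely to be recommended again. Parameters are invented.

topics = ["politics", "sports", "science", "cooking"]
weights = {t: 1.0 for t in topics}  # start with no preference

def recommend() -> str:
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

for _ in range(1000):
    clicked = recommend()     # the user follows the recommendation...
    weights[clicked] += 0.5   # ...which reinforces that topic

print(weights)  # typically one topic dominates: the breadcrumb trail
```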

Anthony, A. (2017) ‘Max Tegmark: Machines taking control doesn’t have to be a bad thing’. Available at: https://www.theguardian.com/technology/2017/sep/16/ai-will-superintelligent-computers-replace-us-robots-max-tegmark-life-3-0 [Accessed September 25th, 2017]

Hunt, E. (2016) ‘Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter’. Available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter [Accessed September 25th, 2017]

Musil, S. (2014) ‘Elon Musk worries AI could delete humans along with spam’. Available at: https://www.cnet.com/news/elon-musk-worries-ai-could-delete-humans-along-with-spam/ [Accessed September 25th, 2017]

O’Reilly, L. (2016) ‘Netflix lifted the lid on how the algorithm that recommends you titles to watch actually works’. Available at: http://www.businessinsider.com/how-the-netflix-recommendation-algorithm-works-2016-2?international=true&r=US&IR=T [Accessed September 25th, 2017]
