HitchBOT was a “hitchhiking robot”, equipped with GPS tracking, that was left by the roadside in Canada to be picked up and transported elsewhere as a social experiment. When a finder got tired of hitchBOT, they would drop him off, and someone else would take him along for the next leg of the journey. After gaining notoriety online as a cute robot, hitchBOT attracted a following on Twitter, where people could track his travels. But then… someone decapitated the little robot and left him next to the road. An outcry of frustrated and downright angry people followed on Twitter under the hashtag #hitchbot.
HitchBOT showed the world how easily people can be tricked into feeling emotions for objects. It gave us a glimpse into human psychology when dealing with human-like technology. True, hitchBOT looked a bit like a human because of its eyes and limbs, but it was no attempt to mimic a human and clearly looked like a simple robot. Nonetheless, people projected consciousness and agency onto an inanimate object. Given this built-in human tendency, how are we going to account for it in the near future?
The personification of robots is not just a psychological phenomenon, either. Companies are exploiting this tendency to make their services more effective by making their technology more human-like. One example is ‘Paro the robot seal’, a medical robot that looks like a seal. Its goal is to make dementia patients feel like they are caring for something, instead of constantly being taken care of. Another example is the integration of chatbots into news media outlets, such as the one the Guardian built last year. People can ask their virtual chatting partner for the latest news and the bot will respond with appropriate answers.
How far should we let this trend go? The more human-like technology becomes, the more we will trust it, and the more it will enter the most private aspects of our lives. We shouldn’t forget that every robot or virtual chatting partner could be selling the information we provide to other parties.
A more shocking thought for me, though, is the leverage tech companies would gain once customers trust their products and services the way they trust family members. Imagine this: your robot pet and personal friend suddenly asks for 100 euros because its firmware is out of date. Could you say no? And what if, for dramatic effect, the company in question also threatens its customers by warning that your friend might lose its memory if you do not upgrade?
Personally, I think consumers will need protection in the near future. In theory, efficient market theory dictates that the market adapts to consumer wishes, but in this case I do not think consumers are capable of standing up for themselves. In my opinion, our primate brains are not capable of the cold rationalism that will be required if we are not to be manipulated by the robots and AIs we love. Debate on robot and AI ethics and regulation is needed sooner rather than later.
To the readers of this post: Is the market capable of handling the ethical side of this dilemma? Are customers strong enough to stand up for themselves in the near future?
Brynjolfsson, E. and McAfee, A. (2017) ‘The Business of Artificial Intelligence’. Harvard Business Review.
Darling, K. (2016) ‘Ethical issues in human-robot interaction’. Retrieved from: https://www.youtube.com/watch?v=m3gp4LFgPX0 [Accessed October 7th, 2017]
Good, N. and Wilk, K. (2016) ‘Introducing the Guardian Chatbot’. Retrieved from: https://www.theguardian.com/help/insideguardian/2016/nov/07/introducing-the-guardian-chatbot [Accessed October 7th, 2017]
Griffiths, A. (2014) ‘How Paro the robot seal is being used to help UK dementia patients’. Retrieved from: https://www.theguardian.com/society/2014/jul/08/paro-robot-seal-dementia-patients-nhs-japan [Accessed October 7th, 2017]
Paresh, D. (2015) ‘Hitchhiking robot that made it across Canada maimed on US road trip’. Retrieved from: http://www.latimes.com/business/technology/la-fi-tn-hitchbot-destroyed-20150803-story.html [Accessed October 7th, 2017]
Twitter (2017) Retrieved from: https://twitter.com/hashtag/hitchbot [Accessed October 7th, 2017]
Thank you for your interesting blog about this matter. To be honest, I had not thought about this subject until I read it here. I must say that this phenomenon is pretty scary. To answer your questions:
I personally do not think customers will be able to distinguish robots from humans in the future. As robots become more and more human-like, it is very easy to think of them as humans, especially keeping the effects of machine learning in mind, which enables robots to start ‘thinking’ and to produce knowledge instead of merely reproducing the knowledge they obtained from their programmer. To prevent ethical problems, I believe this needs to be regulated before robots are designed to look and act even more like humans. However, governmental intervention is generally slow and people do not always accept this kind of mediation. Therefore, I believe the companies that produce these robots are responsible for the effect their products have on people and should keep this in mind when bringing a new product to market. This is hard for companies to live up to, as competition is fierce and the time to test a robot extensively is therefore scarce.
To conclude, I believe there is no real solution to this problem yet; however, I think the first step is to create awareness of it!
Thank you for this interesting blog! I agree that the personification of robots really is the next step in the acceptance of technology into our everyday lives. I think (or hope) that we still have a while before these problems arise. At the moment, people are still a bit skeptical about robotics, but as you stated in your blog, this can change very fast, especially when robots come in “lovable” designs such as pets.
I actually think it is good to see that human beings are emotional beings, although it is unfortunate that this also makes them an easy target for bad intentions. That said, I find the possibility that firms will exploit human trust to gather information even more alarming than the fact that humans can develop emotions for objects. Therefore, I definitely agree that consumers need to be protected in the future: from their own feelings, but mostly from the power and intentions of firms.