Personific(AI)tion: how and why do we personify chatbots?

17 October 2023


Does feeling the urge to add “thank you”, or feeling guilty when you leave out the “please”, whilst interacting with ChatGPT sound familiar to you? Well, you are not the only one. In this two-part series of blog posts, I’ll discuss the intriguing ways in which we personify virtual assistants and how this might shape our interaction with technology, drawing on published articles as well as my own insights and experiences on the subject.

The personification of chatbots is not new. Most will remember the release of Siri on the iPhone back in 2011. The tasks Siri could perform were still quite simple, such as setting an alarm or sending a text message, but they laid the foundation for the well-developed virtual assistants that almost every smartphone now has (Jovanovic, 2023). Siri is somebody that everyone knows: they know ‘her’ name and what ‘her’ voice sounds like. Her witty, sometimes funny responses make people feel like they are talking to an actual person.

However, Siri was definitely not the first to accomplish this. What is widely regarded as the first chatbot, ELIZA, was created all the way back in 1966 by professor Joseph Weizenbaum and was designed to simulate a conversation with a therapist. Whilst he was developing ELIZA, Weizenbaum’s own assistant asked him to leave the room so that she and ELIZA could chat. Even Weizenbaum himself was shocked that, in such a short period of time, a human could come to feel that a conversation with a machine deserved privacy, as if they were speaking to an actual person. This tendency to treat the activities of a machine as equal to those of a human was later dubbed the ELIZA effect. It is also called anthropomorphizing, or personification: attributing human characteristics to virtual assistants or machines when interacting with them (Soofastaei, 2021).
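To give a sense of how little machinery was needed to create this illusion: ELIZA worked by spotting keywords in the user’s input and echoing fragments of it back as questions. Below is a minimal sketch of that technique in Python; the rules and wording here are my own invented examples, not Weizenbaum’s actual DOCTOR script.

```python
import re

# A few toy rules in the spirit of ELIZA's keyword matching;
# the real DOCTOR script had far richer decomposition rules.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # a stock reply keeps the 'conversation' moving

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?
```

A handful of pattern-and-reflection rules like these is enough to produce replies that feel attentive, which is precisely what made Weizenbaum’s assistant forget she was talking to a program.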

This personification has reached a new height with the rise of ChatGPT, which feels more like a personal (virtual) assistant than ever. Some even call it a “new colleague that will never leave” (NRC, 2023): somebody you can always ask questions, whether to find inspiration, to improve a piece of work, or simply to structure your thoughts. I think the personification of ChatGPT is largely due to the language we use when talking about, or to, it. To illustrate: when I use ChatGPT and discuss it with my peers, I quite often refer to it as a ‘him’ (“he told me this”, “maybe you could ask him”, etc.). And it is not only the way we talk about ChatGPT, but also the language ChatGPT itself uses. It comes across as a friendly helper, replying ‘you’re welcome’ when you thank ‘him’ and even telling you something about ‘himself’. Its responses clearly state that ChatGPT has no personal experiences, emotions or consciousness, yet it is exactly these kinds of responses that deceive us into thinking that it does.

Hopefully, this text has intrigued you and maybe even sparked some discussion. If so, feel free to leave a comment here, or on my second blog post, in which I elaborate on the subject from an ethical point of view.

References

Jovanovic, P. (2023, April 21). The History and Evolution of Virtual Assistants, from Simple Chatbots to Today’s Advanced AI-Powered Systems. Retrieved from Tribulant.com: https://tribulant.com/blog/software/the-history-and-evolution-of-virtual-assistants-from-simple-chatbots-to-todays-advanced-ai-powered-systems/

Soofastaei, A. (2021). Introductory Chapter: Virtual Assistants. In A. Soofastaei, Virtual Assistant. doi:10.5772/intechopen.100248

NRC. (2023, January 17). ChatGPT: je nieuwe collega die nooit meer weggaat [ChatGPT: your new colleague who will never leave]. Retrieved from NRC.nl: https://www.nrc.nl/nieuws/2023/01/17/chatgpt-je-nieuwe-collega-die-nooit-meer-weggaat-a4154407


1 thought on “Personific(AI)tion: how and why do we personify chatbots?”

  1. Really good article! It’s a bit confronting to realize how much of this I do without even thinking about it. The insights you present on how we personify virtual assistants are really thought-provoking, especially the story about professor Weizenbaum creating ELIZA. I never realised that my thanking a chatbot was rooted in human instincts that lead us to give chatbots human attributes, nor did I know that this is described as the ELIZA effect. Again, really interesting. It really made me reflect on all the instances in which I personally addressed a chatbot, and even made me read back some conversations I had with ChatGPT. I wonder if I will continue to be polite towards chatbots now that I know the actual concept behind it. Looking forward to your second blog post!
