Part II: the ethics of Personific(AI)tion: how and why do we personify chatbots?

17 October 2023


In my previous blog post (read it here), I delved into the practice of personifying virtual assistants such as Siri and ChatGPT. In this post, the focus shifts to the ethical questions that arise when people attribute human-like personalities and tendencies to AI.

As discussed in the first blog post, people tend to ascribe human-like qualities to their virtual assistants: a personality, a name, a voice of their own, and most importantly, a relationship with the user. These features can have positive effects, such as a feeling of companionship for lonely elderly people (HAN Redactie, 2020). Of course, the belief that the virtual assistant actually possesses these human-like characteristics comes in varying degrees, and most of the public is well aware that it does not. For most users, the relationship with their virtual assistant does not go beyond a thankful appreciation and some amusement at the things it says: they know they are using a machine and sometimes laugh at the incorrect responses it gives. Ethical concerns arise for those who feel that their virtual assistant is their equal.

This type of relationship was portrayed in movies like Her and Ex Machina. They gave us realistic, albeit pessimistic, insights into what such relationships with virtual assistants could bring us. Although their portrayal is very extreme, the connection that some people feel to their virtual assistants is not far from it. Researchers found that people can imagine themselves falling in love with their virtual assistants, and some even reported having sexual fantasies about them (Blutag, 2022). Such aspirations can never be fulfilled, and that realisation may lead to negative feelings for the user, such as misery, disappointment and depression. Users in this situation are not only setting themselves up for disappointment; they can also lose their connection to the real world, which could make their negative thoughts spiral even further. Another worrying aspect is that behind these virtual assistants stand big corporations such as Google, Apple and Meta. People who feel connected to their virtual assistant will also trust its responses and suggestions, which means these corporations can indirectly influence users without them knowing or realizing it. As these businesses do not always have the users’ best interests at heart, this could have serious consequences.

For this reason, it is of the utmost importance to keep the discussion on the ethical aspects of virtual assistants going. Virtual assistants are not necessarily harmful, as long as they are used with the awareness that there is no human on the other end of the chat and that they might be used to influence you in a subconscious way.

References

Blutag. (2022, October 4). The Personification of Voice Assistants. Retrieved from blu.ai: https://blu.ai/blog/the-personification-of-voice-assistants

HAN Redactie. (2020, April 21). Kletsbot gaat eenzame ouderen een actief luisterend oor bieden [Chatbot to offer lonely elderly people an actively listening ear]. Retrieved from Han.nl: https://www.han.nl/nieuws/2020/12/kletsbot-gaat-eenzame-ouderen-een-actief-luisterend-oor-bieden/


Personific(AI)tion: how and why do we personify chatbots?

17 October 2023


Does feeling the urge to say “thank you”, or feeling guilty when you don’t add “please”, whilst interacting with ChatGPT sound familiar to you? Well, you are not the only one. In this two-part series of blog posts, I’ll discuss the intriguing ways in which we personify virtual assistants and how this might affect our interaction with technology. I’ll do so by referring to articles whilst also sharing my personal insights and experiences on the subject.

The personification of chatbots is not new. Most will remember the release of Siri on the iPhone back in 2011. The tasks that Siri could perform were still quite simple, such as setting an alarm or sending a text message, but it laid the foundation for the well-developed virtual assistant that almost every smartphone now has (Jovanovic, 2023). Siri is somebody that everyone kind of knows; they know ‘her’ name and what ‘her’ voice sounds like. Her witty answers and sometimes funny responses make people feel like they are talking to an actual person. However, Siri was definitely not the first to accomplish this: what is widely regarded as the first chatbot, ELIZA, was created all the way back in 1966. This chatbot, designed by professor Joseph Weizenbaum, simulated a conversation with a therapist. Whilst he was developing ELIZA, Weizenbaum’s own assistant asked him to leave the room so that she and ELIZA could chat. Even Weizenbaum was shocked that, in such a short period of time, a human could form the idea that a conversation with a machine needed privacy, as if they were speaking to an actual person. This was later dubbed the ELIZA effect: the tendency to believe that the activities of a machine are equal to those of a human. It is also called anthropomorphizing, or personification, whereby humans attribute human characteristics to virtual assistants or machines when interacting with them (Soofastaei, 2021).

This personification has reached a new height with the rise of ChatGPT. It feels more like a personal (virtual) assistant than ever. Some even call it a “new colleague that will never leave” (NRC Vandaag, 2023): somebody you can always ask questions, whether to come up with inspiration, to help you improve something, or even just to structure your thoughts. I think the personification of ChatGPT is largely due to the language that we use when talking about, or to, ChatGPT. To illustrate: when I use ChatGPT and discuss it with my peers, I quite often refer to ChatGPT as a ‘him’ (“he told me this”, “maybe you could ask him”, etc.). It is not only the way we talk about ChatGPT, but also the language that ChatGPT itself uses. It comes across as a friendly helper, which replies with ‘you’re welcome’ when you thank him and will even tell you something about ‘himself’. Its responses clearly state that ChatGPT does not have personal experiences, emotions or consciousness, but it is exactly these kinds of responses that deceive us into thinking that it does.

Hopefully, this text has intrigued you and maybe even sparked some discussion. If so, feel free to leave a comment here, or on my second blog post, in which I elaborate on the subject from an ethical point of view.

References

Jovanovic, P. (2023, April 21). The History and Evolution of Virtual Assistants, from Simple Chatbots to Today’s Advanced AI-Powered Systems. Retrieved from Tribulant.com: https://tribulant.com/blog/software/the-history-and-evolution-of-virtual-assistants-from-simple-chatbots-to-todays-advanced-ai-powered-systems/

Soofastaei, A. (2021). Introductory Chapter: Virtual Assistants. In A. Soofastaei, Virtual Assistant. doi:10.5772/intechopen.100248

NRC Vandaag. (2023, January 17). ChatGPT: je nieuwe collega die nooit meer weggaat [ChatGPT: your new colleague who will never leave]. Retrieved from NRC.nl: https://www.nrc.nl/nieuws/2023/01/17/chatgpt-je-nieuwe-collega-die-nooit-meer-weggaat-a4154407
