In my previous blog post (read it here), I explored the practice of personifying virtual assistants such as Siri and ChatGPT. In this post, the focus will be on the ethical questions that emerge when people attribute human-like personalities and tendencies to AI.
As discussed in the first blog post, people have a tendency to ascribe human-like qualities to their virtual assistants, such as a personality, a name, a voice of their own, and most importantly: a relationship with the user. These features can have a positive impact, such as offering a feeling of companionship to lonely elderly people (HAN Redactie, 2020). Of course, the belief that the virtual assistant has these human-like characteristics comes in varying degrees, with most of the public being aware that their virtual assistant does not actually possess any of these qualities. The ethical considerations come into play with those who are not. For most users, the relationship with their virtual assistant does not go beyond grateful appreciation and occasional amusement at some of the things it says. In most cases, the user feels that they are using a machine and sometimes laughs at the incorrect responses it provides. Problems arise for those who feel that their virtual assistant is their equal.
This type of relationship was portrayed in movies like Her and Ex Machina, which gave us realistic, if pessimistic, insights into what such relationships with virtual assistants could bring us. Although their portrayal is extreme, the connection that some people feel to their virtual assistants is not far from it. Researchers found that people can imagine themselves falling in love with their virtual assistants, and that some were even having sexual fantasies about them (Blutag, 2022). Of course, these aspirations can never be fulfilled. This realisation may lead to negative feelings for the user, such as misery, disappointment and depression. Users who experience this are not only setting themselves up for disappointment; they can also lose their connection to the real world, which could cause their negative thoughts to spiral even further.

Another worrying aspect is that behind the virtual assistants stand big corporations, such as Google, Apple and Meta. People who have a connection with their virtual assistant will also trust its responses and suggestions. If they want to, these corporations can indirectly influence users without them knowing or realising it. As these businesses do not always have the users' best interests at heart, this could have terrible consequences.
For this reason, it is of the utmost importance to keep the discussion on the ethical aspects of virtual assistant use going. Virtual assistants are not necessarily harmful, as long as they are used with the realisation that there is no human on the other end of the chat and that they might be used to influence you in a subconscious way.
References
Blutag. (2022, October 4). The Personification of Voice Assistants. Retrieved from blu.ai: https://blu.ai/blog/the-personification-of-voice-assistants
HAN Redactie. (2020, April 21). Kletsbot gaat eenzame ouderen een actief luisterend oor bieden [Chatbot to offer lonely elderly people an active listening ear]. Retrieved from Han.nl: https://www.han.nl/nieuws/2020/12/kletsbot-gaat-eenzame-ouderen-een-actief-luisterend-oor-bieden/