Part II: the ethics of Personific(AI)tion: how and why do we personify chatbots?

17 October 2023


In my previous blog post (read it here), I delved into the practice of personifying virtual assistants such as Siri and ChatGPT. In this post, the focus shifts to the ethical questions that emerge when people attribute human-like personalities and tendencies to AI.

As discussed in the first blog post, people tend to ascribe human-like qualities to their virtual assistants, such as a personality, a name, a voice of their own and, most importantly, a relationship with the user. These features can have a positive impact, such as providing a feeling of companionship for lonely elderly people (HAN Redactie, 2020). Of course, the belief that the virtual assistant possesses these human-like characteristics comes in varying degrees, and most of the public is aware that their virtual assistant does not actually possess any of these qualities. Ethical considerations come into play, however, with those who are not. For most users, the relationship with their virtual assistant does not surpass thankful appreciation and a degree of amusement at some of the things it says. In most cases, the user feels that they are using a machine and sometimes laughs at the incorrect responses it provides. Problems arise for those who feel that their virtual assistant is their equal.

This type of relationship was portrayed in movies like Her and Ex Machina. They gave us realistic, though pessimistic, insights into what these kinds of relationships with virtual assistants could bring us. Although their portrayal is extreme, the connection that some people feel to their virtual assistants is not far from it. Researchers found that people can imagine themselves falling in love with their virtual assistants, and that some were even having sexual fantasies about them (Blutag, 2022). Of course, these aspirations can never be fulfilled. This realisation may lead to negative feelings for the user, such as misery, disappointment and depression. Not only are users who experience this setting themselves up for disappointment, they can also lose their connection to the real world, which could cause their negative thoughts to spiral even further. Another worrying aspect is that behind the virtual assistants stand big corporations, such as Google, Apple and Meta. People who feel a connection with their virtual assistant will also trust its responses and suggestions. If they want to, these corporations can influence users indirectly, without the users knowing or realising it. As these businesses do not always have the users’ best interests at heart, this could have terrible consequences.

For this reason, it is of the utmost importance to keep the discussion on the ethical aspects of using virtual assistants going. Virtual assistants are not necessarily harmful, as long as they are used with the realisation that there is no human on the other end of the chat and that they might be used to influence you in a subconscious way.

References

Blutag. (2022, October 4). The Personification of Voice Assistants. Retrieved from blu.ai: https://blu.ai/blog/the-personification-of-voice-assistants

HAN Redactie. (2020, April 21). Kletsbot gaat eenzame ouderen een actief luisterend oor bieden [Chatbot to offer lonely elderly people an actively listening ear]. Retrieved from Han.nl: https://www.han.nl/nieuws/2020/12/kletsbot-gaat-eenzame-ouderen-een-actief-luisterend-oor-bieden/


2 thoughts on “Part II: the ethics of Personific(AI)tion: how and why do we personify chatbots?”

  1. This post is quite intriguing! I’ve certainly observed that I tend to frequently include “Please” or even transform “give” into “could you give” when interacting with ChatGPT. Similar to you, I’m uncertain about whether the personification of AI is a positive or negative development. This has sparked my curiosity regarding the extent to which future AIs (or LLMs in general) might tailor their responses based on the user. In the future, people might have their preferred AI companions with whom they’ve been conversing for multiple years. As LLMs continually learn about your writing style and various facets of your preferences, it’s possible that the AI will become attuned to your common queries and topics of interest. Consequently, individuals may develop a stronger connection to the AI, feeling that they are genuinely understood and heard, potentially evoking feelings of attachment and even sexual attraction. To what extent do you think this is too far-fetched, or do you believe this would be plausible in the future?

    1. Hi Rutger,

      Thank you for taking the time to read my piece and leave a comment.
      As you said, I’m not sure whether this is a positive thing or a negative thing.
      In some ways, if you’ve been conversing with your ‘personal assistant’ for years, it would be quite convenient, as you could refer back to past conversations and it would be tailored perfectly to your preferences. But, as you say, this has a possible side effect of feeling more of a connection to this tool, which could have some negative consequences.
      I definitely believe that what you are describing is plausible. As we have seen, humans can develop feelings for the strangest of things (trees, household objects, cars). So with such a tool, which even ‘responds’ to you, I think it is actually even more likely. Because there are major companies behind these tools, it is important to keep this in mind, as they might not have the users’ best interests at heart.
