The Illusion of Neutrality in AI

27 September 2023


Some have hoped that a data-driven future will lead us to better decision-making. Generative AI models such as ChatGPT and Midjourney are trained on vast collections of data to generate content, solve problems, and process information. After all, data rests on empirical evidence and is supposedly free of personal bias.

However, is data objective? As a little experiment, I tried out getimg.ai, an AI-powered image-generation tool, prompting it to create images for the following terms:

  1. “CEO”
  2. “Assertive CEO”
  3. “Emotional CEO”

The results are telling. For the first two prompts, 8 out of 8 pictures show men. Only when a gendered word like “emotional” is used does the software suddenly generate 3 out of 4 CEOs as women. All depicted people appear to be either white or of Asian descent. This reveals a clear problem: generative AI models cannot be fully objective or inclusive, because data in itself is neither neutral nor objective – it inherently reflects societal biases (D’Ignazio & Klein, 2020). As a result, generated content can easily perpetuate and amplify those biases, producing discriminatory or exclusionary output.
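A small audit like this can be made repeatable by hand-labeling each generated image and tallying the outcomes per prompt. The sketch below is a minimal illustration of that tallying step; the labels are hypothetical and simply mirror the counts reported above, not output from any API.

```python
from collections import Counter

# Hypothetical hand-assigned labels for the perceived gender of each
# generated image, mirroring the counts from the experiment above.
results = {
    "CEO": ["man"] * 8,
    "Assertive CEO": ["man"] * 8,
    "Emotional CEO": ["woman", "woman", "woman", "man"],
}

# For each prompt, report the label counts and the share of women depicted.
for prompt, labels in results.items():
    counts = Counter(labels)
    share_women = counts["woman"] / len(labels)
    print(f"{prompt!r}: {dict(counts)} ({share_women:.0%} women)")
```

Running the same tally over many prompts and larger samples would give a rough, quantitative picture of how strongly a given tool skews its depictions.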

Addressing bias in generative AI remains a crucial challenge and should be a priority for every company developing AI, even though these companies currently offer no solutions to tackle the issue (Rose, 2022). Options include bringing in cross-functional expertise – for example, sociologists working alongside the usual engineers to de-bias software – and curating more diverse training datasets in the first place (Wolf, 2023). As users of generative AI, we must be careful not to perceive the technology as neutral or objective, and we should acknowledge its shortcomings despite its otherwise impressive capabilities. I, for one, expect technology companies to take AI’s potential for harm seriously and to actively seek ways to mitigate these biases, as the pursuit of ethical and inclusive AI is more important than ever.

References

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.

Rose, J. (2022, April 13). The AI that draws what you type is very racist, shocking no one. VICE. https://www.vice.com/en/article/wxdawn/the-ai-that-draws-what-you-type-is-very-racist-shocking-no-one

Wolf, Z. B. (2023, March 18). AI can be racist, sexist and creepy. What should we do about it? CNN. https://edition.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters/index.html


2 thoughts on “The Illusion of Neutrality in AI”

  1. I believe that addressing bias in generative AI is indeed a pressing challenge that demands immediate attention. However, I also think we should be careful with how we apply de-biasing techniques in situations where doing so might produce an unwanted outcome. For example, trying to take the bias out of old stories using today’s ideas can distort how we understand history. Moreover, going too far with de-biasing might make our cultural mix less varied, blurring the differences between cultures. In summary, I agree with the point made in your post, but I also believe we should take our time and carefully consider the choices we make when de-biasing AI.

  2. It’s interesting that you used an AI-powered image-generation tool to see for yourself what kinds of images are created from inputs such as “CEO” and “emotional CEO.” I agree that it will be very difficult for AI, in the case you mentioned, to be neutral, because the models are trained by humans. There will always be biases, perceptions, and perspectives that the creator embeds into the AI. If AI is used, for example, in job interviews to select the best candidates, but it leads to gender bias, who takes responsibility? Do you blame the AI or the creator who trained the model? Overall, I agree that AI still has shortcomings; after all, it is still rapidly developing. Cross-functional expertise could be a way to reduce bias, but to what extent, and will it have a significant impact? What if the various experts are also biased, or what if they influence each other? Since research on AI bias is still in its infancy, we will have to wait and see. I’m curious what other people think about this.
