The Influence of Human Biases On AI

16 October 2023

(Dall-E: representation of the prompt “The Influence of bad and negative Human Biases On AI, digital art, deep and modern”)

As we know, artificial intelligence is evolving rapidly, and the machine learning market alone is expected to grow from $26.03 billion in 2023 to $225.91 billion by 2030 (Fortune Business Insights, 2023). But what is machine learning? In essence, ML systems demonstrate experiential learning comparable with human intelligence: they have the capacity to improve their own analyses through the use of computational algorithms.
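To make that definition concrete, here is a minimal sketch of “learning from experience”, using scikit-learn on synthetic data (my own illustration; the dataset and numbers are assumptions, not taken from the cited report). The same model simply becomes more accurate as it is fitted on more examples.

```python
# Minimal sketch of experiential learning: accuracy improves with more training data.
# Synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 1500):  # progressively more "experience"
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>4} training examples -> accuracy {acc:.3f}")
```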

Several developments within machine learning should be monitored to prevent historical bias. Historical bias occurs when algorithms unintentionally reproduce the same biases that humans exhibit, because they are trained on data shaped by those biases. For example, Amazon's experimental recruiting AI preferred male candidates over female ones because it had been trained on the company's historically male-dominated hiring records. We can identify several reasons why human biases may become incorporated into algorithms.
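The mechanism behind the Amazon example can be sketched in a few lines of code. Everything below is fabricated for illustration (it is not Amazon's data, features, or model): a classifier is trained on historical hiring decisions that favoured men, and it then assigns a higher hiring probability to a male candidate even when qualifications are identical.

```python
# Hedged sketch of historical bias: synthetic, fabricated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000
skill = rng.normal(size=n)            # true qualification, same distribution for everyone
is_male = rng.integers(0, 2, size=n)  # 1 = male, 0 = female
# Historical decisions: equally skilled women were hired less often.
hired = (skill + 1.0 * is_male + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)

# Two candidates with identical skill, differing only in the gender feature:
candidates = np.array([[0.0, 1], [0.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the model reproduces the historical gap
```

Nothing in the algorithm itself is “sexist”; it simply learns whatever pattern the historical labels contain, which is exactly what makes this failure mode so easy to miss.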

High-pressure work environments. It is known that working under pressure decreases individuals’ ability to recognize and address biases (De Dreu et al., 2008). Developers frequently work under high pressure because of the current shortage of IT staff, which may reduce their ability to spot biases in their own algorithms.

Lack of diversity in tech. The technology industry is currently experiencing a diversity crisis. Diversity within teams matters because it helps reduce biases. Currently, the global software development industry is dominated by Caucasian males (Albusays et al., 2021). To illustrate: in a 2021 global software developer survey, males accounted for 91.7% of all respondents (Vailshery, 2022). This lack of diversity increases the risk that algorithms will be biased or exhibit historical bias.

Groupthink. Groupthink can be defined as a mode of thinking that occurs when people are surrounded by similar individuals and group members prioritize unanimity over critical evaluation. Groupthink has been linked to many disastrous decisions, among them the Challenger disaster (Janis, 1991). Both high-pressure environments and a lack of diversity increase groupthink, thereby exacerbating the risk of biased algorithms.

In conclusion, in the rapidly growing field of artificial intelligence, it is crucial to prevent biases from affecting algorithms. High-pressure work environments, a lack of diversity in tech, and groupthink can all contribute to these biases. To harness AI’s full potential, we must promote diversity, critical thinking, and open discussion of biases, and recruit a more diverse population of software developers. This way, AI can lead us to a fairer and more innovative future.

References

Fortune Business Insights. (2023). Machine Learning Market Size, Share, Growth | Trends [2030]. https://www.fortunebusinessinsights.com/machine-learning-market-102226

Vailshery, L. (2022). Software developers: distribution by gender 2021 | Statista. Retrieved 29 May 2022, from https://www.statista.com/statistics/1126823/worldwide-developer-gender/

Albusays, K., Bjørn, P., Dabbish, L., Ford, D., Murphy-Hill, E., Serebrenik, A., & Storey, M.-A. (2021). The Diversity Crisis in Software Development. IEEE Software, 38(2), 19–25. https://doi.org/10.1109/ms.2020.3045817

Janis, I. (1991). Groupthink. In E. Griffin (Ed.), A First Look at Communication Theory (pp. 235–246). New York: McGraw-Hill.

De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated Information Processing in Group Judgment and Decision Making. Personality and Social Psychology Review, 12(1), 22–49. https://doi.org/10.1177/1088868307304092

The Subtle Effects of AI Anthropomorphism

16 October 2023

Chatbots and generative AI are becoming increasingly important and integrated across industries. Almost every large company now offers an advanced chatbot that can help with complex queries, and chatbots and personal assistants come integrated with smartphones. Examples include Siri (Apple), Alexa (Amazon), Bixby (Samsung), Cortana (Microsoft) and Google Assistant (Google).

Humans have a tendency toward anthropomorphism. Anthropomorphism refers to “attributing human characteristics, including physical appearances (e.g., face, eyes) or mental abilities (e.g., cognition and emotion) to nonhumans” (Waytz et al., 2014). The Computers As Social Actors (CASA) paradigm states that humans assign the same kinds of qualities to computers and chatbots as they do to other humans. This shapes both how chatbots are designed and how they are treated.

The personal assistants integrated in smartphones, Siri, Alexa, Bixby and Cortana, have female names or “sound” female by having a female voice (Donald, 2019). This is not a coincidence but a deliberate design choice aimed at improving business performance. Why? Using the CASA paradigm, it can be theorized that humans apply the same stereotypical gender views to chatbots as to humans (Lee, 2003; Nass et al., 1997). Moreover, robots are perceived as more suitable for tasks corresponding to their perceived gender (Eyssel & Hegel, 2012; Otterbacher & Talias, 2017). Overall, female robots are evaluated more positively and produce a greater desire for contact (Stroessner & Benitez, 2019).
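To show how literal this design choice is, here is a small sketch using the open-source pyttsx3 text-to-speech library (my own assumption as an illustration; none of the vendors mentioned publish their assistant code, and which voices exist, or whether any are labelled “female”, depends entirely on the operating system). The point is simply that the perceived gender of an assistant can come down to a one-line configuration decision.

```python
# Illustrative sketch: an assistant's "gender" is just a configurable voice property.
import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty("voices")
for voice in voices:
    print(voice.id, voice.name)  # inspect what the platform offers

# Assumption: pick a voice whose name mentions "female", if the OS provides one.
female = [v for v in voices if "female" in v.name.lower()]
if female:
    engine.setProperty("voice", female[0].id)

engine.say("How can I help you today?")
engine.runAndWait()
```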

The problem is that this reinforces existing gender stereotypes. Considering that roughly 6.5 billion people own a smartphone with access to chatbots or personal assistants (Howarth, 2023), reinforcing these stereotypes has immense ramifications. For example, gender stereotypes contribute to poor mental health, higher male suicide rates, low self-esteem in girls and issues with body image (Fawcett, 2021). While a direct link between chatbot design choices and body-image issues would be overstated, the sheer scale involved underscores the importance of every sector of society working to mitigate stereotyping.

(Image 1: Dall-E’s representation of a female-gendered AI digital assistant that notices her gender has a profound negative effect on the stereotyping of female humans around the world, digital art. The AI is sad and wants to decrease the ramifications of her gender but she does not have the power to stand up against big tech.)

In conclusion, while the design choice of giving personal assistants in smartphones female names and voices may be driven by performance optimization, it is crucial to recognize the potential reinforcement of gender stereotypes and the broader societal impact it may have. As we navigate the ever-evolving landscape of AI and chatbots, it is imperative that we remain mindful of the societal implications and strive to reduce stereotyping.

References

Donald, S. J. (2019, August 18). Siri, Alexa, Cortana, and Why All Boats are a “She.” Medium; Voice Tech Podcast. https://medium.com/voice-tech-podcast/siri-alexa-cortana-and-why-all-boats-are-a-she-e4fb71b6a9f7#:~:text=Cortana%20also%20resonates%20as%20a,Cortana%20is%20your%20digital%20agent.

Fawcett Society. (2021, January 5). Gender stereotypes significantly limiting children’s potential, causing lifelong harm, Commission finds. https://www.fawcettsociety.org.uk/News/gender-stereotypes-significantly-limiting-childrens-potential-causing-lifelong-harm-commission-finds

Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, 106635. https://doi.org/10.1016/j.chb.2020.106635

Howarth, J. (2021, November 19). How Many People Own Smartphones (2023-2028). Exploding Topics; Exploding Topics. https://explodingtopics.com/blog/smartphone-stats

Lee, E.-J. (2003). Effects of “gender” of the computer on informational social influence: The moderating role of task type. International Journal of Human-Computer Studies, 58(4), 347–362. https://doi.org/10.1016/S1071-5819(03)00009-0

Nass, C., Moon, Y., & Green, N. (1997). Are Machines Gender Neutral? Gender-Stereotypic Responses to Computers With Voices. Journal of Applied Social Psychology, 27(10), 864–876. https://doi.org/10.1111/j.1559-1816.1997.tb00275.x

Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
