The Influence of Human Biases On AI

16 October 2023


(Image generated with DALL·E from the prompt: "The Influence of bad and negative Human Biases On AI, digital art, deep and modern")

Artificial intelligence is evolving rapidly: the machine learning market alone is projected to grow from $26.03 billion in 2023 to $225.91 billion by 2030 (Fortune Business Insights, 2023). But what is machine learning? In essence, ML systems learn from experience in a way that is comparable with human intelligence, using computational algorithms to improve their analyses as they are exposed to more data.
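To make that idea concrete, here is a minimal sketch of "learning from experience". It is illustrative only: the dataset and model choice are my own assumptions (scikit-learn's bundled breast cancer dataset and a logistic regression), not anything discussed in this post.

```python
# Minimal sketch: a model's performance improves as it is exposed to more
# examples, which is the core of "experiential learning" in ML.
# Assumptions: scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger slices of the data and report test accuracy:
# more "experience" generally means better analyses.
for n in (20, 100, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>3} examples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```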

One risk that deserves particular attention as machine learning spreads is historical bias: when an algorithm is trained on data that reflects past human decisions, it unintentionally reproduces the biases embedded in those decisions. A well-known example is Amazon's experimental recruiting tool, which favored male candidates because it had been trained on the company's historical hiring data, which was dominated by men. We can identify several reasons why human biases end up encoded in algorithms.
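The following sketch shows the mechanism with synthetic data; it is a hypothetical illustration, not a reconstruction of Amazon's actual system. Historical "hired" labels are simulated with a built-in preference for men, and a model trained on those labels then predicts a higher hiring probability for a man than for an equally skilled woman.

```python
# Hypothetical sketch of historical bias: biased past decisions become biased
# training labels, and the learned model reproduces the same preference.
# Assumptions: synthetic data, logistic regression; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

gender = rng.integers(0, 2, n)   # 0 = female, 1 = male
skill = rng.normal(0, 1, n)      # true qualification, independent of gender

# Simulated *historical* hiring decisions: skill matters, but men also get a
# bonus, so the labels encode human bias rather than pure merit.
logits = 1.5 * skill + 1.0 * gender - 0.5
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill, differing only in gender: the trained
# model still assigns the man a higher hiring probability.
p_female, p_male = model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1]
print(f"P(hire | female, skill=0) = {p_female:.2f}")
print(f"P(hire | male,   skill=0) = {p_male:.2f}")
```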

High-pressure work environments. Working under pressure reduces individuals' ability to recognize and correct for biases (De Dreu et al., 2008). Developers frequently operate under such pressure because of the ongoing shortage of IT professionals, which may make them less likely to notice biases in their own algorithms.

Lack of diversity in tech. The technology industry is experiencing a diversity crisis. Diverse teams matter because they help surface and counteract biases. At present, the global software development workforce is dominated by white men (Albusays et al., 2021); in a 2021 global developer survey, men accounted for 91.7% of all respondents (Vailshery, 2022). This homogeneity increases the risk that algorithms inherit historical bias.

Groupthink. Groupthink is a mode of thinking that emerges in homogeneous groups whose members prioritize unanimity over critical evaluation. It has been linked to many disastrous decisions, including the Challenger launch (Janis, 1991). Both high-pressure environments and a lack of diversity make groupthink more likely, further increasing the risk of biased algorithms.

In conclusion, as artificial intelligence continues its rapid growth, it is crucial to prevent human biases from shaping its algorithms. High-pressure work environments, a lack of diversity in tech, and groupthink all contribute to this risk. To harness AI's full potential, we must recruit a more diverse population of software developers and promote critical thinking and open discussion of bias. Only then can AI lead us toward a fairer and more innovative future.

References

Fortune Business Insights. (2023). Machine learning market size, share, growth | Trends [2030]. https://www.fortunebusinessinsights.com/machine-learning-market-102226

Vailshery, L. (2022). Software developers: Distribution by gender 2021. Statista. Retrieved May 29, 2022, from https://www.statista.com/statistics/1126823/worldwide-developer-gender/

Albusays, K., Bjørn, P., Dabbish, L., Ford, D., Murphy-Hill, E., Serebrenik, A., & Storey, M.-A. (2021). The diversity crisis in software development. IEEE Software, 38(2), 19–25. https://doi.org/10.1109/ms.2020.3045817

Janis, I. (1991). Groupthink. In E. Griffin (Ed.), A first look at communication theory (pp. 235–246). McGraw-Hill.

De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated information processing in group judgment and decision making. Personality and Social Psychology Review, 12(1), 22–49. https://doi.org/10.1177/1088868307304092
