Is AI reinforcing social biases?

2 October 2020


 

Inarguably, Artificial Intelligence has provided the modern era with numerous benefits. Machine learning in particular lets computers learn patterns from the data they are given rather than from explicit, hand-written rules. These algorithms are already part of your daily life: they determine what appears in your Facebook feed, which movies Netflix recommends, and which ads you see in Gmail. But their uses reach far beyond your personal life, into sectors such as healthcare, law enforcement, and recruitment.

So here comes the problem with algorithms: they do what they are taught. A machine learning model simply reproduces the patterns in the data it is trained on, and it has no social understanding of its own. If not carefully managed, this can lead to discrimination, racial prejudice, and the reinforcement of our social biases. You might think this drawback would never appear in real life, but it does. For example, Amazon built an algorithm to support its recruitment process by training it on applications from the previous ten years; because male candidates dominated the industry over that period, the resulting model discriminated against female candidates. Another, more detrimental example is the use of risk-scoring algorithms in the criminal justice system, where black defendants were assigned higher risk scores and consequently received heavier prison sentences.
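To make this concrete, here is a minimal, made-up sketch (it is not Amazon's actual system, and all the data is invented for illustration) of how a model asked only to imitate historically skewed hiring decisions ends up reproducing the same skew:

```python
# A toy illustration: train a classifier on hiring decisions that
# historically favoured male applicants, then see it carry that bias
# forward to equally skilled candidates. Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical past applicants: gender (0 = male, 1 = female) and a skill score.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical decisions gave male applicants a head start regardless of skill.
hired = ((skill + 0.8 * (gender == 0)) > 0.5).astype(int)

# The model is only asked to imitate the past decisions...
model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# ...so it learns the same gender penalty, even at identical skill levels.
candidates = np.array([[0, 1.0],   # male, skill = 1.0
                       [1, 1.0]])  # female, same skill
print(model.predict_proba(candidates)[:, 1])  # higher "hire" probability for the male row
```

Nothing in the code mentions discrimination explicitly; the bias comes entirely from the historical labels the model was told to copy.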

I believe that although algorithms have benefited humanity immensely, they can bring more harm than good if not carefully managed. As I mentioned, they can cost people employment opportunities by encoding racial or gender biases; more severely, they can cost someone their freedom if a false risk prediction keeps them in jail. So how should such implications be addressed? I believe algorithms should be carefully tested and monitored before they are used: for example, by questioning whether machine learning is even appropriate in a given situation, and by adopting more inclusive approaches when building these systems. At the end of the day, ethics and responsible governance of AI should be at the center of decision-making. What do you think?
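As one hedged example of what "testing before use" could look like in practice, a very simple audit is to compare the model's selection rates across groups and flag a large gap. The function names and the 0.8 threshold below (borrowed from the "four-fifths rule" used in US employment guidance) are only illustrative, not a standard API:

```python
# A simple pre-deployment check: compute selection rates per group and
# warn if the lowest rate falls well below the highest one.
from collections import defaultdict

def selection_rates(groups, decisions):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += int(d)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(groups, decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(groups, decisions)
    return min(rates.values()) / max(rates.values())

# Example: predicted "hire" decisions for male/female applicants.
groups    = ["m", "m", "m", "m", "f", "f", "f", "f"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

ratio = disparate_impact_ratio(groups, decisions)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: selection rates differ sharply between groups; review before deployment.")
```

A check like this is obviously not sufficient on its own, but it shows that monitoring for bias does not have to be complicated to be worth doing.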

Works Cited

Chowdhury, R. (2019, August 2). How to stop AI from reinforcing biases. Retrieved from Accenture: https://www.accenture.com/us-en/insights/artificial-intelligence/stop-ai-reinforcing-biases

DeAngelis, S. F. (n.d.). Artificial Intelligence: How Algorithms Make Systems Smart. Retrieved from Wired: https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/

Saifee, M. (2020, January 17). Can AI Algorithms Be Biased? Retrieved from Towards Data Science: https://towardsdatascience.com/can-ai-algorithms-be-biased-6ab05f499ed6

 


1 thought on “Is AI reinforcing social biases?”

  1. Hello, Stella! You have indeed raised an interesting aspect of AI.

    Even though most resources discuss the opportunities AI can provide, AI certainly also confronts us with hidden challenges. Your Amazon example clearly illustrates one of them: hidden bias. But how long does it take a company that uses AI to realise something is wrong with it? Ten years, as with Amazon? Could another mechanism, maybe another AI, analyse the activity of the first one and provide evidence of whether it is behaving ethically?
    I pose these questions because many companies do not even suspect that a mechanism as advanced as AI may act wrongly, and they do not know how to measure AI success in anything other than monetary terms. Many companies will choose to use AI because it cuts costs for the company or adds value for customers, and they will postpone ethical decision-making until a crisis arrives. Moreover, AI analyses and makes decisions on huge amounts of data through huge numbers of steps. Interestingly, because the mechanism is so complex, one cannot trace exactly how the AI reaches its decision in a particular case. That makes it hard to control and to identify ethical bottlenecks.
    That is why I ask: could one more algorithm be created to monitor the existing AIs? What do you think?

    However, your point about the lack of justice when AI is applied can also be disputed by one simple fact. Recently, the Belarusian digital artist Andrew Maximov built an AI solution that can unmask those who are violent towards protesters. You can watch the video here: https://www.youtube.com/watch?v=QUIiogbtzDY or read the article here: https://petapixel.com/2020/09/29/belarus-protesters-use-ai-to-unmask-riot-police-wearing-face-coverings/. Criminals can be deanonymised by AI, so justice can become even sounder if AI is applied in the right cases. Though it is indisputably hard to identify which cases are right and which are not, especially if we take into consideration the whole mechanism of AI.

    I also want to recommend this free course on AI: https://course.elementsofai.com/
