Racial and Gender Bias in Artificial Intelligence-aided Decision-making

18 September 2023

Artificial Intelligence (AI) is revolutionizing modern technology, evolving at a rapid pace and expected to contribute 15.7 trillion USD to the global economy by 2030. By learning from and interpreting data, AI-driven machines are capable of performing tasks that are generally performed by human beings, such as autonomous decision-making. Many benefits and accompanying success stories of AI applications are widely known: think of Netflix's AI-driven recommendation system, Tesla's advancements in self-driving cars, and IBM Watson's decision-making aids in healthcare.

However, in recent years many people have raised concerns about the impact of AI, following cases in which AI systems showed bias. I am referring specifically to bias regarding race and gender, which has occurred in several instances:

COMPAS Risk Assessment Tool

COMPAS is a risk assessment tool used in courtrooms across the USA. Its algorithm generates scores based on various factors, including criminal history, demographics, and questionnaire results. This score is used to assist judges in making decisions about a defendant's freedom. Multiple studies have found that COMPAS generates biased predictions: the tool is more likely to flag Black defendants as future criminals than white defendants, even when their criminal histories and backgrounds are similar. To be concrete:

  • “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”
  • “White defendants were mislabeled as low risk more often than black defendants.”
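The disparity described in these quotes can be expressed as a difference in false positive rates between groups. The sketch below is purely illustrative: it uses hypothetical records rather than the actual COMPAS data or scoring formula, and the function name and data layout are my own assumptions.

    # Illustrative sketch only: hypothetical records, not the actual COMPAS data
    # or scoring formula. Each record is (predicted_high_risk, group, reoffended).
    from collections import defaultdict

    def false_positive_rate_by_group(records):
        """False positive rate per group: labeled high risk but did not reoffend."""
        flagged = defaultdict(int)  # non-reoffenders wrongly labeled high risk
        total = defaultdict(int)    # all non-reoffenders
        for predicted_high_risk, group, reoffended in records:
            if not reoffended:
                total[group] += 1
                if predicted_high_risk:
                    flagged[group] += 1
        return {g: flagged[g] / total[g] for g in total}

    # Hypothetical example data
    records = [
        (True,  "group_a", False), (False, "group_a", False), (True, "group_a", True),
        (False, "group_b", False), (False, "group_b", False), (True, "group_b", True),
    ]
    print(false_positive_rate_by_group(records))
    # {'group_a': 0.5, 'group_b': 0.0}: unequal false positive rates across groups
    # are exactly the kind of disparity reported for COMPAS.

Auditing a deployed model against group-level error rates like this is one of the control measures that come up in the discussion further below.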

Algorithmic Social Welfare Fraud Detection in Rotterdam

Rotterdam was a pioneer in Europe in using an algorithm to detect social welfare fraud. The Municipality of Rotterdam used the algorithm from 2018 to 2021 to select people for a “reassessment of welfare benefits”. It was later uncovered that the algorithm suffered from several problems, including generalizations based on a limited number of individuals in the data, the use of subjective variables (such as personal appearance) and proxy variables (such as language), and a final selection based on a poorly performing calculation method.

  • The chances of being invited for a reassessment increased the most if the social welfare recipient was a young, single mother who did not speak Dutch well. 

The system undermined the legal certainty of financially dependent residents, because they could not verify the reason for a reassessment. The municipality acknowledged that the algorithm could “never remain 100 percent free from bias or the appearance of bias” and considered this an undesirable situation. In 2021, it discontinued the risk assessment model.
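To illustrate the proxy-variable problem mentioned above: even when a protected attribute is excluded from a model, a correlated variable such as language proficiency can reintroduce it. The sketch below uses made-up data and invented names (speaks_dutch_fluently, migration_background); it is not the Rotterdam model, only a minimal demonstration of the mechanism.

    # Illustrative sketch with made-up data, not the Rotterdam algorithm.
    # Each record is (speaks_dutch_fluently, migration_background).
    records = [
        (False, True), (False, True), (False, True), (True, True),
        (True, False), (True, False), (True, False), (False, False),
    ]

    def reassessment_rate(records, migration_background):
        """Share of a group flagged for reassessment when the (hypothetical)
        selection rule keys only on the proxy 'does not speak Dutch fluently'."""
        group = [r for r in records if r[1] == migration_background]
        flagged = [r for r in group if not r[0]]  # flag non-fluent speakers
        return len(flagged) / len(group)

    print(reassessment_rate(records, migration_background=True))   # 0.75
    print(reassessment_rate(records, migration_background=False))  # 0.25
    # The protected attribute never appears in the selection rule, yet outcomes
    # differ sharply between groups, because the proxy is correlated with it.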

Discussion

These are just two of many cases, and both raise questions about the use of AI and its impact on human lives, especially when the consequences of these decisions can be severe. Some people argue that such tools can still be valuable aids for decision-making if measures are taken to ensure proper control and increased transparency. Others believe that these biases are inherent limitations of AI systems: because the systems are trained on human data, they will always be prone to human error.

What do you think? Is it possible to create AI algorithms that are free from biases such as these? How could this be achieved? Should companies and organizations that deploy a faulty decision-making system be held accountable, and should there be legal consequences? What is the role of the government in ensuring the fairness of AI systems, and should specific regulations be put in place? Lastly, considering the economic potential of AI, how should the trade-off between fostering innovation and ensuring fairness be approached?
