Bias in AI-driven decision-making: Who is Accountable?

5 October 2023

My previous blog introduced bias in Artificial Intelligence (AI)-aided decision-making and illustrated the issue with two real-life examples of racial and gender bias in AI systems: the fraud detection algorithm of the Municipality of Rotterdam and COMPAS, a risk assessment tool used in courtrooms across the USA.

The two examples showed how AI-aided decision-making can have a significant impact on human lives. The COMPAS tool falsely flagged black defendants as future criminals, while white defendants were mislabeled as low risk. The fraud detection tool used by the Municipality of Rotterdam disproportionately flagged young single mothers who did not speak Dutch well for a reassessment of their social welfare benefits.

In both cases, the decisions informed by AI systems undermined legal certainty and had an unfair and significant impact on individuals’ lives.

This blog further examines accountability for these bias issues: who is really accountable for the severe consequences of such systems?

Developers

To start with, AI systems are created by developers, who are at the core of building the algorithms that eventually generate predictions or recommendations. While these developers most likely do not intend to build an AI system that generates biased predictions, they are the people who design, train, and deploy the systems. It is difficult, however, for a developer to bear the full consequences, as AI systems are highly complex and evolve in ways that are hard to fully control as they learn from data. Furthermore, the developer does not control, or may not even know about, the bias in the historical data the AI is built upon.

Organizational Accountability

Then, the companies that actually use AI systems will usually bear the reputational and legal consequences when decisions are made that disadvantage certain minority groups. A company should recognize the potential for bias in its AI systems and act on it by deploying policies and tools to detect biases in those systems; a minimal sketch of what such a check might look like follows below. Moreover, it may be wise to involve external parties that can evaluate fairness and ethical standards, as they might see problems that have been overlooked internally. Transparency is therefore a key factor in mitigating bias. Lastly, organizations should devote proper attention to informing and training the employees who use AI in decision-making about potential biases and about the organization’s commitment to addressing these issues.
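To illustrate what such an internal check might look like in practice, here is a minimal sketch that compares how often a model flags people from different groups and computes a disparate-impact ratio. The group labels, the data, and the 0.8 rule-of-thumb threshold are assumptions for illustration, not taken from any of the cases discussed in these blogs.

```python
# Minimal sketch of an internal bias check: compare how often an AI tool
# flags people from different groups. All data and group labels are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """Return the share of flagged cases per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += r["flagged"]
    return {group: flagged[group] / total[group] for group in total}

# Hypothetical decisions produced by a model: 1 = flagged for review.
decisions = [
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 0}, {"group": "A", "flagged": 1},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 1},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 0},
]

rates = selection_rates(decisions)
# Disparate-impact ratio: lowest selection rate divided by highest.
# A common rule of thumb treats a ratio below 0.8 as a signal to investigate.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

A check like this is deliberately simple; it does not prove fairness, but it gives an organization a first, auditable signal that a system deserves closer scrutiny.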

Government

Governments, in turn, play a crucial role in ensuring fair AI systems. This role generally encompasses enforcing regulations on transparency and ethics, requirements that might otherwise be neglected because they can make systems harder to deploy and carry significant costs. This is crucial, as governments carry the duty to protect their citizens from hazards such as discrimination.

Altogether, addressing bias in AI and ensuring fairness is a multi-stakeholder issue. Accordingly, no single player can bear the full consequences. Each player will have to take responsibility for their own role, their own rules and policies, and their commitment to fairness.

Racial and Gender Bias in Artificial Intelligence-aided Decision-making

18 September 2023

Artificial Intelligence (AI) is a revolutionary modern technology, currently evolving at a high pace and expected to contribute 15.7 trillion USD to the global economy by 2030. By learning from and interpreting data, AI-driven machines are capable of performing tasks that are generally performed by human beings, such as autonomous decision-making. Many benefits and accompanying success stories of AI applications are widely known: think of Netflix’s AI-driven recommendation system, Tesla’s advancements in self-driving cars, and IBM Watson’s decision-making aids in healthcare, for example.

However, in recent years many people have raised concerns about the impact of AI, prompted by cases in which AI systems showed bias. I am referring specifically to bias regarding race and gender, which has occurred in several instances:

COMPAS Risk Assessment Tool

COMPAS is a risk assessment tool used in courtrooms across the USA. Its algorithm generates scores based on different factors, including criminal history, demographics, and questionnaire results. The score is generally used to assist judges in making decisions about a defendant’s freedom. Multiple studies have found that COMPAS generates biased predictions: the tool is more likely to flag Black defendants as future criminals than white defendants, even when criminal history and background are similar. To be concrete:

  • “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”
  • “White defendants were mislabeled as low risk more often than black defendants.”
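To make the quoted disparity more concrete, the sketch below shows one way such a gap could be measured: comparing the false positive rate, i.e. the share of people labeled high risk who did not reoffend, between two groups. The records, group labels, and numbers are invented for illustration only; they are not ProPublica’s findings or real COMPAS data.

```python
# Hypothetical sketch: compare false positive rates between two groups.
# A "false positive" here is a defendant labeled high risk who did not reoffend.
# All records below are invented for illustration; this is not real COMPAS data.

def false_positive_rate(records):
    """Share of non-reoffenders who were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    return sum(r["high_risk"] for r in non_reoffenders) / len(non_reoffenders)

group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": True},
]
group_b = [
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": True},
]

fpr_a = false_positive_rate(group_a)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 1 of 3 non-reoffenders flagged
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}, ratio: {fpr_a / fpr_b:.1f}x")
```

A ratio well above 1 between the two groups’ error rates is the kind of disparity the studies quoted above describe.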

Algorithmic Social Welfare Fraud Detection in Rotterdam

Rotterdam was a pioneer in Europe in using an algorithm to detect social welfare fraud. The algorithm was used by the Municipality of Rotterdam from 2018 to 2021 to select people for a “reassessment of welfare benefits”. It was uncovered that the algorithm suffered from several problems, including generalizations based on a limited number of individuals in the data, the use of subjective variables (such as personal appearance), and the use of proxy variables (such as language). The final selection was based on a poorly performing calculation method.

  • The chances of being invited for a reassessment increased the most if the social welfare recipient was a young, single mother who did not speak Dutch well. 

This undermined the legal certainty of financially dependent residents, because they could not verify the reason for a reassessment. The municipality acknowledged that the algorithm could “never remain 100 percent free from bias or the appearance of bias” and considered this an undesirable situation. In 2021, the municipality discontinued the risk assessment model.

Discussion

These are two of many cases, and both raise questions about the use of AI and its impact on human lives, especially when the consequences of these decisions can be severe. Some people argue that these tools can still be valuable decision-making aids if measures are taken to ensure proper control and increased transparency. Others believe that these biases are inherent limitations of AI systems: because the systems are trained on human-generated data, they will always be prone to human error.

What do you think? Is it possible to create AI algorithms that are free from biases such as these? How can this be achieved? Should companies and organizations using a faulty recommendation system be held accountable? Should there be legal consequences? What is the role of the government in ensuring fairness of AI systems? Should there be specific regulations in place? Lastly, considering the economic potential of AI, how should the trade-off between fostering innovation and ensuring fairness of AI systems be approached?
