My previous blog introduced bias in Artificial Intelligence (AI)-aided decision-making and illustrated the issue with two real-life examples of racial and gender bias in AI systems: at the Municipality of Rotterdam and in COMPAS, a risk assessment tool used in courtrooms across the USA.
The two examples showed how AI-aided decision-making can have a significant impact on human lives. The COMPAS tool falsely flagged black defendants as future criminals, while white defendants were mislabeled as low risk. The fraud detection tool used by the Municipality of Rotterdam disproportionately flagged young, single mothers who did not speak Dutch well for reassessment of their social welfare benefits.
In both cases, the decisions informed by AI systems undermined legal certainty and had an unfair and significant impact on individuals' lives.
This blog will be dedicated to further examining accountability for these bias issues. Because who is really accountable for the severe consequences of these systems?
Developers
To start with, AI systems are created by developers, who are at the core of building the algorithms that eventually generate predictions or recommendations. While these developers most likely do not intend to build an AI system that generates biased predictions, they are the ones who design, train, and deploy these systems. It is difficult, however, for a developer to bear the full consequences, as AI systems are highly complex and evolve through machine learning in ways that are often hard to control. Furthermore, the developer does not control, or may not even know about, the bias in the historical data on which the AI is built.
Organizations
Then there are the companies that actually utilize AI systems, which will usually bear the reputational and legal consequences when decisions are made that harm certain minority groups. A company should recognize the potential for bias in its AI systems and act upon it by deploying policies and tools to detect biases in those systems (a minimal sketch of such a check follows below). Moreover, it might be smart to involve external parties that can evaluate fairness and ethical standards; they might see problems that internal parties have overlooked. Transparency is therefore a key factor in mitigating bias. Lastly, organizations should pay proper attention to informing and training the employees who use AI in decision-making, both about the potential biases and about the organization's commitment to addressing them.
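To make this concrete, here is a minimal sketch of what such a bias check could look like: comparing false positive rates between groups, in the spirit of the COMPAS findings. All data, column names, and numbers here are hypothetical, purely for illustration.

```python
import pandas as pd

def false_positive_rate_by_group(df, group_col, label_col, pred_col):
    """For each group, the share of truly negative cases (no actual
    reoffence or fraud) that the model nevertheless flagged as high risk."""
    negatives = df[df[label_col] == 0]
    return negatives.groupby(group_col)[pred_col].mean()

# Hypothetical audit data: true outcome vs. model prediction per group.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "reoffended": [0,   0,   1,   0,   0,   1],
    "flagged":    [1,   0,   1,   0,   0,   1],
})

print(false_positive_rate_by_group(audit, "group", "reoffended", "flagged"))
```

A large gap in false positive rates between groups, as in this toy example, does not prove discrimination by itself, but it is exactly the kind of signal that should trigger a deeper audit.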
Government
The government plays a crucial role in ensuring fair AI systems. This role generally encompasses enforcing regulations on transparency and ethics, which might otherwise be neglected because they complicate the deployment of systems and carry significant costs. This is crucial, as governments ultimately carry the duty to protect their citizens from hazards such as discrimination.
Altogether, addressing bias in AI and ensuring fairness is a multi-stakeholder issue. Accordingly, no single player can bear the full consequences. Each player will have to take responsibility for their own part: their accountability, their rules and policies, and their commitment to fairness.