The Dangers of Algorithmic Decision-making: Biased models

20 September 2018

Due to the digital revolution, algorithms are becoming increasingly interwoven into practically every aspect of our lives. It is no longer humans but mathematical models that decide which school you will attend, whether or not you will be hired for a job, how much your insurance will cost and whether or not you will be granted a loan. These models are even used to decide whether offenders are eligible for parole (O’Neil, 2016).

When you hear someone talking about algorithmic decision-making, you will likely hear the words ‘objective’, ‘neutral’ or ‘fair’. Indeed, algorithms are generally perceived as fair tools that make more objective decisions than we biased humans do. In some cases this is true: an algorithm is never tired, never has a bad day and will not favor a candidate simply because he or she resembles the decision-maker. The question, however, is whether we are placing too much blind trust in these techniques. After all, they are always based on human input. Even the most advanced algorithmic techniques, those that self-improve through deep-learning methods, are ultimately based on past and present human practices. If those practices were biased, the resulting mathematical model will be too (Eder, 2018).
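
To make this concrete, here is a minimal, hypothetical sketch (not taken from any of the cited sources; all data and feature names are invented): a classifier trained on biased historical hiring decisions reproduces that bias through a proxy feature, even though the protected attribute is never shown to it.

```python
# A minimal sketch with invented data: a model trained on biased historical
# hiring decisions learns to reproduce that bias, even though the protected
# attribute is never given to it directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)               # true, group-independent ability
zipcode = group + rng.normal(0, 0.3, n)   # proxy feature correlated with group

# Historical labels: hiring depended on skill, but group 1 was penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# The model never sees `group`, only `skill` and the proxy `zipcode`.
X = np.column_stack([skill, zipcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# Although skill is identically distributed in both groups, group 1 ends up
# with a lower predicted hire rate: the historical bias leaks in via `zipcode`.
```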

The problem is that algorithmic bias is difficult to detect, as the underlying processes are often too complex to understand or interpret. A more serious challenge is that many of the parties involved do not seem to be actively researching, evaluating or reducing algorithmic bias at all (Knight, 2017). This may partly be due to extremely low awareness of the issue.
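
Opaque internals do not, however, make a model's outputs unauditable. As a hedged illustration, here is one common check, the 'four-fifths rule' for disparate impact, applied to a hypothetical vector of model decisions (all names and numbers below are invented):

```python
# Hedged sketch: even when a model is a black box, its outputs can still be
# audited. Below, the "four-fifths rule" disparate-impact check applied to
# hypothetical model decisions; data and names are illustrative only.
import numpy as np

def disparate_impact(decisions, protected):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    rate_protected = decisions[protected].mean()
    rate_reference = decisions[~protected].mean()
    return rate_protected / rate_reference

# Hypothetical audit data: 1 = favourable decision (e.g. loan approved).
decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
protected = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=bool)

ratio = disparate_impact(decisions, protected)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.25 in this toy example
if ratio < 0.8:  # the four-fifths rule of thumb from US employment practice
    print("warning: possible adverse impact against the protected group")
```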

As AI becomes ever more prevalent in modern life, the urgency of this societal issue increases. Given the magnitude of the problem's impact, it is essential that we remain critical of new technological developments, so that we can work towards eliminating algorithmic bias in the future.

The problem described above is explained very clearly by Cathy O’Neil in her book Weapons of Math Destruction.

Sources and recommendations for further reading:

Biased Algorithms Are Everywhere, and No One Seems to Care

How Can We Eliminate Bias In Our Algorithms?

3 thoughts on “The Dangers of Algorithmic Decision-making: Biased models”

  1. Great post Isabelle! I could not agree with you more that biased decision-making through AI is an incredibly important issue to watch out for, and that we definitely rely too much on our “objective and fair” systems. The question is, of course, how to program the systems in such a way as to avoid this human bias. After all, we give the input, determine how the algorithms should interpret the data, and thereby eventually decide what kind of output we want. In addition, the “black box” problem of AI only feeds this issue. I am really happy to see that major companies are keen on analyzing and ultimately fixing this problem. In the last couple of days, IBM and Google both released tools that give insight into how their AI models work, and Microsoft and Facebook claim to be working on similar developments.
    In case you want to keep up with the latest news these links might be interesting for you:

    https://www.research.ibm.com/5-in-5/ai-and-bias/

    https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html

  2. Hi Isabelle,

    Thank you for sharing this post; I had never thought about this before I stumbled upon your blog post and some other reading materials! In one of the articles we had to read for our information strategy course, the authors briefly mention this ‘machine bias’ and indeed identify it as a risk of AI. I agree that it is probably due to low awareness. However, even if everyone were aware of it, the question remains: how can this issue and risk be solved? I’m curious about your opinion! I have found an interesting article that tackles this problem as well; it states that increased external validity and combining multiple datasets might solve the issue. As multiple datasets are based on input from different humans, the bias might be reduced. I think it would be an interesting read for you: https://becominghuman.ai/how-to-prevent-bias-in-machine-learning-fbd9adf1198

    1. I would tend to disagree with the approach suggested in Livia’s response for solving this problem. The really dangerous input bias does not come from a single human or from inaccurate data entry; it is the categorical bias that exists in our society as a whole. As such, these biases are no more and no less dangerous, problematic and difficult to detect than collective human decisions.
