Algorithmic transparency: do we really not want algorithms to discriminate?

28 September 2019


The use of algorithms and big data has increased in recent years. For example, the police use them to detect crime, tax authorities use them to detect fraud, and Dutch supermarket Albert Heijn uses algorithms to dynamically price its products in order to reduce food waste (AH, 2018). Sometimes things go wrong. Uber’s self-driving car failed to recognize a pedestrian and hit her (Levin & Wong, 2018), and Amazon recently scrapped an AI recruiting tool after finding out that its algorithms did not operate in a gender-neutral way (Dastin, 2018). This raises important questions about the ethics of using algorithms in different practices.

Algorithms are often described as a “black box”, because we do not really get to see what they do once they are executed (Brauneis & Goodman, 2018). Yet the outcomes of these algorithms have important consequences, for example whether or not someone is entitled to social security or will be hired for a job. It is therefore sometimes argued that algorithms must become more transparent, and that their source code, inputs and outputs must be revealed (Hosanagar & Jair, 2018). This would make it clearer why an algorithm has come to a certain result and increase trust in it. That way, negative outcomes, such as discrimination, can be detected.

But the question is: aren’t algorithms meant to discriminate? I would not be happy to be mistaken for a fraudster, and I am glad that the government tries to differentiate between neighbourhoods at high risk of crime, so it can focus on prevention. Discriminating, in the sense of differentiating, is exactly what algorithms must do. Another problem with transparent algorithms is that fraudsters would then know exactly which criteria are used to detect them. Transparency can thus lead to gaming.

If our professor were transparent about the algorithm used to grade our tests, I’m sure we would all suddenly score higher just by using certain word combinations (such as “recent research has shown”). I can’t wait until we get there! 😉

Let me know in the comments what you think!


3 thoughts on “Algorithmic transparency: do we really not want algorithms to discriminate?”

  1. Hello Nienke,

    Thank you for your post.

    I agree with the statement that algorithms should discriminate, and I’m personally fine with that idea. I do not think that ALL algorithms should become more transparent. Sure, make them more transparent if you want to prevent discrimination or other negative outcomes. But if making an algorithm more transparent would make a fraudster’s life easier, then the algorithm should stay unintelligible.

  2. Hi Nienke,

    Interesting post you got there.

    In my belief, algorithms should be able to do what you call discriminating. I would rather not call it discrimination, because in the end the decision-making power of an algorithm goes beyond the capabilities of mankind. And do not forget: algorithms are brought to life by humans who set a goal and apply constraints.

    With that in mind, I believe the constraints should be tailored to perfection to let the algorithm do whatever it needs to do. It would help most in fighting crime and in police work if more leads came out of these algorithms.

    The only algorithm that needs to change is Facebook’s… Man, does everybody see that same video over and over, or is it just me?

  3. Hi Nienke,

    Thanks for your interesting post.

    Personally, I think that the government may differentiate between high- and low-risk neighbourhoods, but the impact the algorithmic decisions have, and the way they are used to ‘improve’ the model, are problematic. Often, negative feedback loops emerge due to the lack of model validation, criticism and knowledge about the algorithm.

    If a high-risk neighbourhood is surveilled more, criminals there are more likely to be caught than in low-risk neighbourhoods, which results in relatively more recorded crime in the high-risk neighbourhood. These data are then fed back to the model so it can learn, and the neighbourhood is labelled as even riskier. This gives a skewed view, because the data fed to the model are unbalanced (see the small sketch at the end of this comment).

    To address these types of issues, transparency is necessary: not by open-sourcing the code, but in the form of oversight by an independent third party. In this way, we can avoid paving the way for fraudsters while still using algorithms in an ethical way.
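
    As a small illustration of the loop described above (my own toy sketch, not anything from the post: the neighbourhood names, crime rates and patrol rule below are all made-up assumptions), here is what can happen when two areas have the same true crime rate, patrols follow the recorded counts, and only patrolled crime gets recorded:

    ```python
    # Toy sketch of a runaway surveillance feedback loop (all numbers invented).
    # Both neighbourhoods have the SAME true crime rate; "B" merely starts with
    # one extra recorded incident, so patrols are sent there first.
    import random

    random.seed(0)

    crime_rate = {"A": 0.1, "B": 0.1}  # identical true daily crime probability
    recorded = {"A": 1, "B": 2}        # B starts with one extra recorded incident

    for day in range(1000):
        # Send today's patrol to the area the data currently make look riskier.
        target = "A" if recorded["A"] >= recorded["B"] else "B"
        # Crime is only recorded where a patrol is present; elsewhere it goes unseen.
        if random.random() < crime_rate[target]:
            recorded[target] += 1

    print(recorded)  # e.g. {'A': 1, 'B': ~100}: nearly all recorded crime piles up in B
    ```

    Even though the underlying crime rates are identical, the model’s own data collection makes neighbourhood B look roughly a hundred times riskier, which is exactly the kind of skewed view the comment describes.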
