How Algorithms Discriminate Against Women: The Hidden Gender Bias

9 September 2020


In past decades, worries about AI have shifted from whether it will take over the world to whether it will take our jobs. Today we have a new, and justifiably serious, concern: AIs might be perpetuating or accentuating societal biases and making racist, sexist or otherwise prejudiced decisions.


Machine learning technology is inherently biased

Many believe that software and algorithms that rely on data are objective. But machine learning technology is biased by design: a model works by weighting (biasing) certain input features in order to map them to output data points. The data that is fed in can, of course, be modified directly, for example through techniques like data augmentation, to make it less skewed. But there is a problem: humans consciously know not to apply certain kinds of bias, yet subconsciously they end up applying biases they cannot control.
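
To make this concrete, below is a minimal, hypothetical Python sketch. The dataset, feature names and numbers are all invented for illustration (it is not Gild's, or anyone's, actual system). It shows how a model trained on historically skewed hiring decisions can pick up a gender-correlated bias, and how reweighting the training samples, one simple data-level mitigation, can soften it.

```python
# Hypothetical illustration only: synthetic data, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)          # toy encoding: 0 = male, 1 = female
skill = rng.normal(0, 1, n)             # identically distributed across genders

# Historical decisions: same skill matters, but women were hired less often
# at equal skill -- the bias we do not want the model to reproduce.
p_hire = 1 / (1 + np.exp(-(2 * skill - 1.0 * gender)))
hired = rng.random(n) < p_hire
X = np.column_stack([skill, gender])

def rates(model):
    """Predicted selection rate for women vs. men."""
    pred = model.predict(X)
    return pred[gender == 1].mean(), pred[gender == 0].mean()

# 1) Naive model: reproduces the historical gender penalty.
naive = LogisticRegression().fit(X, hired)
print("naive model     (women, men):", rates(naive))

# 2) Reweighing (Kamiran & Calders style): weight each (gender, label) cell
#    so gender and the hiring label become independent in the training data.
w = np.ones(n)
for g in (0, 1):
    for y in (False, True):
        cell = (gender == g) & (hired == y)
        if cell.any():
            w[cell] = (gender == g).mean() * (hired == y).mean() / cell.mean()

fairer = LogisticRegression().fit(X, hired, sample_weight=w)
print("reweighed model (women, men):", rates(fairer))
```

On this synthetic data the reweighed model's selection rates for women and men roughly converge; real systems are of course far messier, but the mechanism is the same.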


Tech-hiring platform Gild

This being the case, it is not surprising to find hidden biases all around us in the world today. For example, let’s talk about the secretive algorithms that have become increasingly involved in hiring processes. American scientist Cathy O’Neil explains how the online tech-hiring platform Gild enables employers to go well beyond a job applicant’s CV by combing through the traces they leave behind online. This data is used to rank candidates by ‘social capital’, measured by how much time they spend sharing and developing code on development platforms like GitHub or Stack Overflow.


This all sounds very promising, but the data Gild sifts through also reveals other patterns. For instance, according to Gild’s data, frequenting a particular Japanese manga site is a ‘solid predictor for strong coding’. Programmers who visit this site therefore receive higher scores. As O’Neil points out, awarding marks for this is a real problem for diversity. She suggests that ‘if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of the women in the industry will probably avoid it’.
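
To see how such a proxy can tilt a ranking, here is a second small, hypothetical sketch. The scoring formula, the share of women in the applicant pool and the site-visit rates are invented; the point is only that rewarding a male-dominated signal skews who ends up shortlisted, which can be checked with the common four-fifths (disparate impact) rule of thumb.

```python
# Hypothetical illustration only: invented applicant pool and scoring formula.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
is_woman = rng.random(n) < 0.25               # toy applicant pool
coding_activity = rng.normal(50, 10, n)       # equal across genders

# The proxy feature: assume the manga site is visited mostly by men.
visits_site = rng.random(n) < np.where(is_woman, 0.05, 0.40)

# A "social capital" style score that rewards the proxy alongside real activity.
score = coding_activity + 15 * visits_site

shortlisted = score >= np.quantile(score, 0.80)   # top 20% get an interview

rate_women = shortlisted[is_woman].mean()
rate_men = shortlisted[~is_woman].mean()
print(f"shortlist rate - women: {rate_women:.1%}, men: {rate_men:.1%}")
print(f"disparate-impact ratio: {rate_women / rate_men:.2f} "
      "(below 0.80 fails the four-fifths rule of thumb)")
```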


‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’


In her book “Invisible Women”, Caroline Criado Perez notes that ‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’. However, if managers are not aware of how those biases operate, are not collecting data, and take little time to produce evidence-based processes, an organisation will continue to blindly perpetuate old injustices. Indeed, by not considering how women’s lives differ from men’s, Gild’s coders accidentally created an algorithm with a hidden data bias against women.


But that is not even the worst part. The worst part is that we have no idea how bad the problem really is. Most algorithms of this kind are kept secret and protected as proprietary code, which means we do not know how decisions are being made or what biases they are hiding. Perez points out, ‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’. This, therefore, is a double data gap: first, in the knowledge of the coders designing the algorithm, and second, in the knowledge of society at large about just how discriminatory these AIs can be (Perez, 2020).


‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’


We need more diversity in tech to reduce the hidden gender bias

Many argue that one way to combat the hidden gender bias is to increase diversity of thought by increasing the number of women in tech. According to the World Economic Forum, only 22% of AI professionals globally are currently female, compared to 78% who are male. Additionally, at Facebook and Google, less than 2% of technical roles are filled by black employees. To remove hidden bias in algorithms, tech companies should step up their recruiting practices and increase diversity in technical roles.


Do you have any other suggestions for managers to reduce hidden bias? Or have you come across a type of hidden bias? Feel free to leave a comment.


References:

The Guardian (2016). How algorithms rule our working lives. Retrieved from https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives

Perez, C. C. (2020). Invisible women: Data bias in a world designed for men. New York: Abrams Press.

Forbes (2020). AI Bias Could Put Women’s Lives At Risk – A Challenge For Regulators. Retrieved from https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/#2201ee44534f

World Economic Forum (2020). Assessing Gender Gaps in Artificial Intelligence. Retrieved from http://reports.weforum.org/global-gender-gap-report-2018/assessing-gender-gaps-in-artificial-intelligence/

Dogtown Media (2019). Can AI’s Racial & Gender Bias Problem Be Solved? Retrieved from https://www.dogtownmedia.com/can-ais-racial-gender-bias-problem-be-solved/


7 thoughts on “How Algorithms Discriminate Against Women: The Hidden Gender Bias”

  1. Hello Danielle! Indeed, algorithms can discriminate when the employer leaves the whole process to the algorithm without checking it. A perfect example is what happened with Amazon in 2015, when its system taught itself that male candidates were preferable, based on applications collected over a ten-year period in which more men applied than women. I believe screening CVs is not the best way to filter candidates; a more validated tool, such as assessment tests, should be used to prioritise them. Moreover, algorithms should be updated from time to time (e.g. every six months) with new data, so that they make better hiring decisions. That is why a company should have a strong HR department which can come up with new ideas and tools.

  2. Great read! I believe that this is an extremely relevant topic in tech and should be addressed as soon as possible given the speed at which innovation within artificial intelligence is occurring. While quota hiring is quite a controversial topic in terms of how much positive change it can bring, I think it could be useful as a catalyst for having a more equal playing field in the tech industry. Let’s hope that change is coming.

  3. Hi Danielle,

    Thank you for the article! I found it very interesting to read. The fact that their own algorithm had a hidden bias which they were unaware of is quite problematic. However, this is unfortunately a real pitfall of any AI algorithm. Humans may introduce hidden biases simply through the data they feed into algorithms, because that data may hold past biases that we, as humans, are unaware of. This acts as a kind of feedback loop. What I think is crucial here, though, is that we may not even have to make AI 100% unbiased. As long as it is better than the subconscious biases or preconceptions that humans hold, it is a good alternative. That does not make it good, but it makes it better than the current situation. Women have been discriminated against subconsciously for decades, so the degree to which this changes with AI should be evaluated. This is why I wonder whether it is actually performing worse than humans would. Do you know?

    Kind regards,

    Olivia van Aalst

    1. Hey Olivia!

      Thank you for your thoughtful comment; you raise a very interesting question. To answer it, I would say the world is getting better for (working) women, and AI can complement this when used correctly. In my post you found the example of Gild, where the AI is obviously performing worse than humans would. But in general, if used correctly, I think algorithms could definitely work in favour of women. For instance, Unilever has also been hiring employees using AI (plus brain games), and it has been a huge success. This year they even announced that they had achieved gender balance across management globally, a year ahead of the target they set themselves! They now have 50% women at management level globally, up from 38% in 2010, and a non-executive board that is 45% women. I hope this answers your question.

      Kind regards,
      Danielle van Helden

      sources:
      https://www.forbes.com/sites/bernardmarr/2018/12/14/the-amazing-ways-how-unilever-uses-artificial-intelligence-to-recruit-train-thousands-of-employees/#114ed13a6274

      https://www.unilever.com/news/press-releases/2020/unilever-achieves-gender-balance-across-management-globally.html

      https://www.businessinsider.nl/unilever-artificial-intelligence-hiring-process-2017-6?international=true&r=US

  4. Hi Daniëlle,

    I find your post really interesting, and it is a nice coincidence that you mentioned Caroline Criado Perez’s book, as I am currently reading “Invisible Women” myself! In the book, the author indeed mentions other examples of hidden gender bias with a negative impact on society – I recall a particular one where she mentions that “It took until 2011 for carmakers in the US to start using crash test dummies based on the typical female body”.

    I also really liked that you mentioned that these hidden biases are not only about gender but also about other aspects of diversity, such as race or income level – to me, this is particularly concerning because technology has the ability to improve our lives in so many ways, but we need to create it in such a way that a significant part of the population isn’t left out in one way or another!

    To answer your question, I do agree that increasing diversity in recruitment practices (especially for technical roles) plays an important part in tackling these hidden biases. However, I also believe that companies should ensure that the people who are currently driving these innovations receive appropriate training to be able to identify their possible bias and see past them!

  5. Hi Daniëlle,

    Thanks for your post!

    It is important to define fairness, because the way you define fairness impacts bias. There are over 20 definitions of fairness. Preventing bias is not an easy thing to do, as you explain in your post. Some things managers can do:
    – Work with stakeholders early to define fairness and protected attributes;
    – Apply the earliest mitigation in the Machine Learning pipeline that you have permission to apply;
    – Check for bias as often as possible using any metrics that are applicable

    You always have to make a tradeoff between bias and accuracy: if you remove bias, your accuracy will generally go down.

    There are three stages where you can intervene in the Machine Learning pipeline:
    – If you can modify the training data, then pre-processing can be used;
    – If you can modify the learning algorithm, then in-processing can be used;
    – If you can only treat the learned model as a black box and cannot modify the training data or learning algorithm, then only post-processing can be used

    Generally, pre-processing is the optimal time to mitigate bias, as it is the earliest point in the pipeline; a small post-processing sketch follows below for the black-box case.
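
    To make the post-processing case concrete, here is a minimal, hypothetical sketch (the scores, group shift and thresholds are all invented). The model is treated as a black box that only returns scores, and a separate threshold is chosen per group so that both groups end up with the same selection rate; this buys parity at the cost of some accuracy, which is exactly the tradeoff mentioned above.

    ```python
    # Hypothetical post-processing sketch: black-box scores, invented numbers.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 8_000
    group = rng.integers(0, 2, n)                     # two demographic groups
    # Black-box model scores that happen to run lower for group 1 (the hidden bias).
    scores = np.clip(rng.normal(0.55 - 0.10 * group, 0.15, n), 0, 1)

    def rate(s, threshold):
        return (s >= threshold).mean()

    target = rate(scores, 0.60)                       # overall rate, single threshold

    # Post-processing: per-group thresholds that hit the same target selection rate.
    thresholds = {g: np.quantile(scores[group == g], 1 - target) for g in (0, 1)}

    for g in (0, 1):
        before = rate(scores[group == g], 0.60)
        after = rate(scores[group == g], thresholds[g])
        print(f"group {g}: {before:.1%} with one threshold -> {after:.1%} with its own")
    ```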

  6. Hi Daniëlle,

    I was very happy to see this post based on “Invisible Women”, as I read the book myself in April this year and am already looking forward to re-reading it.

    The fact that many algorithms are designed by men is definitely a problem, as they fail to account for the experiences and habits of women. As you mention in this post, this is not done consciously; it is simply due to a lack of knowledge and awareness. The book gives an example of a woman who did not realise the importance of having reserved parking spots near the office entrance until she herself got pregnant. Having said this, no one can expect a man to fully understand and account for the point of view of a woman and vice versa, in the same way that the woman from the previous example could not understand pregnant women until she was one herself.

    I strongly support the increase of female presence in AI and the tech industry overall. However, I believe this should be complemented by a more thorough breakdown and study of data. The author of the book points out the lack of diversity, and the failure to account for it, when companies collect data. For instance, it is not unusual for companies not to separate data by sex. Consequently, it is harder to observe the differences between men and women, and if there is a majority of male participants, the conclusions tend to focus mostly on men.

    Lastly, I agree on the importance of increasing diversity of not just sex, but also of ethnicities. Diversity is enriching and drives growth, benefitting society as a whole.
