How to tackle gender bias in AI

21 September 2020


Gender and the biases attached to it are a constant topic of discussion in today's world. "Keyboard warriors" equipped only with their wit and an internet connection argue online, while tech giants launch mentorship programs for underrepresented genders and provide unconscious-bias training, only to fail all the same and often make matters worse (Wynn, 2019). In theory the solution is simple: be more tolerant towards others, be forgiving, reflect on your own actions, and accept that the world is neither perfect nor completely equal and never will be. Still, fighting for improvements is essential, whether that means men's rights in child custody lawsuits or equal career opportunities for women. But how does this striving for a better and fairer future translate into a digital age dominated by ever more powerful algorithms and AI?

Algorithms will shape our future and embed themselves ever more deeply into our everyday interactions. It is therefore important to design them in an inclusive and fair manner, because even though they appear smart, most of the time they still cannot think for themselves. In the end, "machine learning systems are what they eat" (Maroti, 2019). A prominent example was Microsoft's chatbot Tay, which drifted into Nazi rhetoric and other inappropriate language remarkably quickly. And why is it that so many virtual personal assistants are female (UNESCO & EQUALS, 2019)? Algorithms and AI restate and copy our reality. What if this influences, for instance, your ability to get a bank loan (Smith, 2019) or to pass an automated CV-screening program?

How do we change the underlying mechanisms of algorithms without simply creating discrimination at the other end of the spectrum? In the end, equality of opportunity should be the goal, not new inequality. Possible approaches are numerous and should be discussed thoroughly. Below are a few collected ideas that could contribute to fairer AI, although no single solution will resolve the issue on its own:
1) Enable more women to enter the STEM field. Use training as well as governmental and supra-governmental policies, such as UN initiatives, to level the playing field (Deva, 2020). It is important to add that, in my mind, such policies should be designed to create a more common and fair foundation on which to compete and strive for greatness, not quotas.
2) Enrich and adjust data sets. Maroti (2019) states that, for instance, using the same data set twice with swapped genders can diminish potential bias in natural language processing (NLP); a minimal sketch of this idea follows right after this list. Optimizing data sets or adding meta data can therefore be a valid approach, as long as it does not introduce other biases or discrimination and still reflects the original data accurately.
3) Collect relatively unbiased data sets. Instead of changing data at a subsequent step, choosing representative and diverse data sets can significantly improve the outcome and therefore improve fairness (Feast, 2019); the second sketch below shows a simple representation check along these lines.
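
A minimal sketch of the gender-swap augmentation mentioned in point 2 could look as follows. The tiny swap dictionary, the function names and the example sentences are my own illustrative assumptions rather than anything prescribed by Maroti (2019), and a real system would need a far more careful treatment of pronouns and context.

```python
# Illustrative sketch: duplicate an NLP training corpus with gendered words swapped,
# so a model sees "She is an engineer" as often as "He is an engineer".
# The word list is deliberately tiny and ignores ambiguous cases (e.g. "her" -> "him"/"his").

GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def swap_gendered_words(sentence: str) -> str:
    """Return a copy of the sentence with common gendered words swapped."""
    swapped_tokens = []
    for token in sentence.split():
        core = token.strip(".,!?").lower()           # "He," -> "he"
        replacement = GENDER_SWAPS.get(core, core)   # swap only known words
        if token[0].isupper():                       # keep original capitalisation
            replacement = replacement.capitalize()
        swapped_tokens.append(token.lower().replace(core, replacement, 1))
    return " ".join(swapped_tokens)

def augment_with_swapped_genders(corpus: list[str]) -> list[str]:
    """Use every sentence twice: once as-is and once gender-swapped."""
    return corpus + [swap_gendered_words(s) for s in corpus]

if __name__ == "__main__":
    corpus = ["She is a nurse.", "He is an engineer."]
    print(augment_with_swapped_genders(corpus))
    # ['She is a nurse.', 'He is an engineer.', 'He is a nurse.', 'She is an engineer.']
```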
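In the same spirit, point 3 can start with a simple representation check before any training happens. Again, this is only an illustrative sketch: the group labels, the 30% minimum-share threshold and the function name are assumptions on my part, not taken from Feast (2019).

```python
# Illustrative sketch: report how each group is represented in a training set
# and flag groups that fall below a chosen minimum share.

from collections import Counter

def representation_report(group_labels: list[str], min_share: float = 0.3) -> dict:
    """Return each group's share of the data and whether it is under-represented."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {
        group: {
            "share": round(count / total, 3),
            "under_represented": count / total < min_share,
        }
        for group, count in counts.items()
    }

if __name__ == "__main__":
    # Toy example: gender labels attached to a set of historical loan applications.
    genders = ["male"] * 700 + ["female"] * 180 + ["diverse"] * 20
    print(representation_report(genders))
```

Such a check does not make the data fair by itself, but it makes under-representation visible before an algorithm quietly learns from it.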

As can be seen, creating a less biased, fairer digital world is possible and should therefore be pursued relentlessly! One remaining piece of food for thought, however, is whether group conflict and oversimplification are the right approaches to tackle future problems, or whether we should instead see and reflect on ourselves for what we are: human individuals with biases.

References:
Deva, S. (2020). Addressing the gender bias in artificial intelligence and automation. OpenGlobalRights. Retrieved from https://www.openglobalrights.org/addressing-gender-bias-in-artificial-intelligence-and-automation/

Feast, J. (2019). 4 ways to address gender bias in AI. Harvard Business Review. Retrieved from https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai

Maroti, C. (2019). Gender bias in AI: Building fairer algorithms. Unbabel. Retrieved from https://unbabel.com/blog/gender-bias-artificial-intelligence/

Smith, C. S. (2019). Dealing with bias in artificial intelligence. The New York Times. Retrieved from https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html

UNESCO & EQUALS. (2019). I'd blush if I could: Closing gender divides in digital skills through education. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1

Wynn, A. (2019). Why tech's approach to fixing its gender inequality isn't working. Harvard Business Review. Retrieved from https://hbr.org/2019/10/why-techs-approach-to-fixing-its-gender-inequality-isnt-working


6 thoughts on “How to tackle gender bias in AI”

  1. Hi Frederik,

    You raise an important point about gender inequality! We are currently striving towards a world without inequality, but since machines "are what they eat", they will probably remain gender-biased. Beyond that, I was thinking about people who prefer to describe their sex as "other" or "prefer not to say". How are they taken into account by machines that only distinguish between male and female?

    1. Hi Anouck,
      I think you raise an interesting point. Funnily enough, binary code consists only of 1s and 0s and nothing in between – which mirrors your observation about gender. To include these other genders while keeping data set methods practicable, I think introducing a third category such as "diverse" would be the way to go. Although data scientists usually try to be as accurate as possible, it might not be beneficial to include every one of the hundreds of gender options, as this would leave many sample sizes too small and thereby skew findings or render them statistically insignificant.
      What do you think would be a good way to give everyone equal weight in AI?

  2. Dear Frederick,
    I found your article very interesting to read, and I also believe that creating a less biased and fairer digital world is possible. With our world becoming more and more interconnected and everything being digitalized, there is an opportunity to reduce certain inequalities – for example during CV checks. You ask whether this over-simplifies reality, but I believe humans will always be involved at some point in any activity – for example in an interview – and thus biases will always be present to some extent. If we can reduce them in the earliest phases, or even in just some phases, I believe that is already a good step.

    1. Dear Amandine,
      thank you for your words. I agree that digitization offers some chances to reduce inequality, as it enables standardization. Even though standardization, like bureaucracy, is mostly connoted as something negative, in the end it sets up clear rules which should enable equality. For that, the input has to be "fair" – as you said. This, however, is difficult to achieve, since some evolutionary thinking, such as ingroup thinking in terms of "my tribe and your tribe", is deeply encoded in us. You mentioned CV checks, so an interesting question for me would be in which field or topic you see the most potential to achieve a shift towards equality.

  3. Hi Frederick,
    I just read your blog, which was very interesting as you used a coherent style of writing and gave a comprehensive overview of the topic. You tackled a highly complex and difficult problem in the artificial intelligence sector which, as you also mention, is definitely not easy to solve. Concerning your first solution point, I think it is very important to improve the enablement of women in STEM to increase equality. It might also be a good idea to further highlight the benefits and advantages of working in the field, to increase motivation and interest in AI and programming. Enriching data sets with swapped genders might be difficult for larger data sets, as it is very time-consuming, although in my opinion it is an unavoidable step to prevent bias. It will be interesting to see how this issue develops in the future.

    1. Hi Philipp,
      it really is difficult, for sure. As you stated, one has to consider that the root does not really lie in the algorithm per se, but in a collection of different correlations influencing each other. Therefore, one has to be careful not to skew results to the point of unusability. Adapting data sets, e.g. through gender swaps, is very time-consuming, and the question is also in which cases it is more suitable than in others. For me at least, gender adaptations are a good thing in translation algorithms, to prevent a "standard" male translation. In other data sets there are simply more men or women, which reflects reality and interest rather than inequality. A, let's say "stereotypical", example could be data sets regarding employees in perfume stores: here the data would be inherently skewed towards women due to greater interest in this position, but that would not constitute inequality per se. It is therefore important to be really specific when pointing out the problem.

      Which fields would you say suffer the most from potential gender bias of algorithms?

      Best,
      Frederik
