The AI beast: it’s picking up our biases

4 October 2016


When you let artificial intelligence decide the winner of a beauty contest, you probably don't expect the system to be racist. Yet with more than 6,000 applicants from over 100 countries, the first international AI beauty contest became a reality. The jury was a panel of five robots programmed to pick winners from the submitted photos. These were the results: of the 44 winners, 5 were Asian, 1 was black, and the rest were white (Beauty.ai, 2016).

 

I found another rather disturbing example, highlighted by BoingBoing (2016) a couple of days ago. They did a Google Image search for "Women's professional hairstyles" and Google returned the following:

[Image: Google Image results for "Women's professional hairstyles"]

 

Then, they changed the Google Image search to "Women's unprofessional hairstyles", and Google returned the following:

[Image: Google Image results for "Women's unprofessional hairstyles"]

 

Despite the excitement around AI technology, these results, produced by advanced algorithms and artificial intelligence, remain offensive. The bias might be accidental, but a deeper understanding of how it stems from human bias is crucial. According to BoingBoing (2016), it is still unclear where the bias comes from: most of the 'unprofessional' pictures were actually linked to serious, thoughtful discussions of ethnicity, hair, and professional environments, while the 'professional' images were linked to Pinterest boards. Just think about it: taken together, these very recent examples imply that pale-skinned individuals with professional hairstyles are more beautiful, while dark-skinned people are considered less beautiful and their hairstyles unprofessional.

 

Clearly, this is only the beginning, and artificial intelligence will become an ever larger part of our day-to-day lives. The examples above make it obvious that current deep learning algorithms are picking up our biases and training a flawed next generation of AI. A firm as big as Google should stand for equal rights and should not discriminate based on skin color. Should this concern us? Shouldn't we pay more attention to the learning algorithms themselves, rather than fiercely focusing on adding automated AI technologies to every service and offering? Should every firm be able to blindly adopt AI technologies?
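To make that mechanism concrete, here is a minimal sketch in Python. The data, the feature names, and the toy majority-vote 'model' are all invented for illustration; this is not how Google's systems or Beauty.ai actually work, only a demonstration of how biased training labels flow straight into a model's predictions.

```python
# Minimal sketch with invented numbers: a learner trained on biased labels
# reproduces the bias. The "model" is a toy per-feature majority vote, but
# any learner that fits its training data would behave the same way.
from collections import Counter, defaultdict

# Hypothetical training set: the labels reflect human prejudice about
# hairstyles, not any objective property of the hairstyle itself.
training_data = (
    [("straight", "professional")] * 90
    + [("straight", "unprofessional")] * 10
    + [("curly", "professional")] * 20
    + [("curly", "unprofessional")] * 80
)

def train(data):
    """Learn the majority label for each feature value."""
    votes = defaultdict(Counter)
    for feature, label in data:
        votes[feature][label] += 1
    return {feature: counts.most_common(1)[0][0]
            for feature, counts in votes.items()}

model = train(training_data)
print(model)
# {'straight': 'professional', 'curly': 'unprofessional'}
# The model has faithfully "learned" the prejudice in its training labels.
```

Note that nothing in the code itself is prejudiced; the bias enters entirely through the training labels, which is exactly why the data we feed learning algorithms deserves as much scrutiny as the algorithms themselves.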

 

If we now start with flawed and biased artificial intelligence techniques, and people gradually (over a very long period of time) come to unconsciously accept that biased 'reality', our collective thoughts will eventually all become the same, right? That would be boring.

 

I do believe in the great possibilities of AI; however, we should be careful about the biases we unintentionally program into the initial (learning) algorithms.

 

Sources:

Beauty.ai (2016). http://winners2.beauty.ai/#win

BoingBoing (2016). https://boingboing.net/2016/04/06/professional-and-unprofessiona.html

 


2 thoughts on “The AI beast: it’s picking up our biases”

  1. Dear Jurjen, very interesting post! I fully agree with you: we should make sure that AI systems do not pick up our biases, and the only way to do that is to be aware of our own biases (which, oftentimes, we are not). Hence, before creating an AI algorithm, we should make sure to be aware of, and potentially exclude, our biases. It would be a shame to develop a sophisticated AI application that is biased without even being aware of it.

  2. Hi Jurjen, nice article and great examples. To me this really shows that AI isn't all that artificial yet; it's very much based on the input and the learning of the program. These examples show hopefully unintentional instances of what bias can do to artificial intelligence, and they are already quite disturbing. But when people intentionally influence AI, it gets a lot worse.
    On March 23rd of this year, Microsoft launched Tay, an AI Twitter bot that automatically replied and tweeted to other users. It existed for about 24 hours before it was quickly taken down by Microsoft. The bot used input from the tweets in which it was mentioned, which was quickly picked up by the internet's worst, who then proceeded to turn an innocent experiment into a racist, anti-semitic nightmare for MS.
    I agree with your position on learning algorithms, and I think it's especially important to control the input for these algorithms. Do we want to create programs that mirror their users' behavior when those users can so easily corrupt the program? I think we should be very cautious.
