Earlier this year there was news about the controversy surrounding an AI artist – FN Meka, the “robot rapper”. Capitol Music Group, a well-known record label, cut ties with Factory New, the company behind this virtual artist, over a controversy about racist portrayals. Although FN Meka is voiced by a human, it is partially powered by AI (in areas including music composition, lyric writing, chord creation, tempo, and sounds).
As much as Factory New prided itself on being the “first of its kind, next generation music” company “specializing in virtual beings”, it is now seen as trivializing racial and social issues by releasing music performed by a “black-looking” AI entity, with lyrics referencing incarceration and the N-word. The founder of Factory New tried to argue that this AI is essentially a Black character, implying that the expression should have been socially acceptable. That claim was not enough to calm the backlash, as evidenced by Capitol nonetheless severing ties with the company.
What I find truly intriguing about this news is how it shows that, as neutral as AI is considered to be in terms of gender, race, age, and so on, AI is not considered “eligible” to create content and profit from cultural stereotypes and appropriation. It may be that people do not regard the AI artist itself as having any decision-making authority, so the company representative becomes the one primarily credited with the music created; and because he happened to be white, the result was unacceptable. Yet, had the project succeeded, people would probably have given the credit to the unique AI element. This begs the question: in future uses of AI, who is to bear legal responsibility for this kind of inappropriate use of AI power? Can AI ever be regarded as an autonomous entity?
Source:
Coscarelli, J. (2022, August 23). Capitol Drops ‘Virtual Rapper’ FN Meka After Backlash Over Stereotypes. The New York Times. https://www.nytimes.com/2022/08/23/arts/music/fn-meka-dropped-capitol-records.html
What an eye-catching title! I find this article really interesting because it makes you think about how people still perceive AIs as a kind of sub-human that should comply with our societal norms, when they are really just numbers presented in an appealing shape. Don’t you think, then, that as creators of AI we should continue to monitor these systems even as they develop into more mature stages of self-regulation?
Thank you for sharing this blog post. There have been many examples of AI algorithms that behave in discriminatory ways: women being hired less often and Black patients receiving less care are two of them. Although the makers of these algorithms did not intentionally make them discriminatory, the data on which the algorithms were trained contained discrimination. I therefore think that companies are responsible for testing their algorithms thoroughly and checking whether some groups are negatively affected by them.
What an interesting article! AI comes with loads of possibilities and opportunities. I think that we as a society should accept these flaws in the early stages of AI development. However, the companies and developers who created the algorithms should be held responsible, and I think it is their duty to react appropriately, for example by deleting the songs and adjusting their algorithms so that flaws like the one in this case cannot happen again.
It will, however, be impossible to fully foresee these kinds of incidents, since the AI is ultimately the one creating the content.