Why and how should Metaverse ethics be established promptly?

9 October 2022


Even though we do not have the Metaverse yet, the ethical rules that will accompany its implementation should be created and standardized as soon as possible. Shaping the ethics of the Metaverse is especially vital because we will not encounter a single platform but many of them, each with a different vision for designing our virtual reality. Based on early experiments with digital environments, we can expect a significant number of bullying and harassment incidents if no regulation of Metaverse ethics is implemented.

What should be prevented is self-regulation in the form of internal ethics boards, the approach applied to AI technology. In my opinion, we cannot expect companies that create “independent” ethics boards within their own walls to truly have the public interest in mind. Rather, they are unlikely to address issues (such as AI reinforcing racial biases) in ways that would harm their financial position; Facebook, for example, could substantially limit abuses on its platform with AI, but will not do so, since that would mean decreased user engagement. Thus, we cannot trust that the companies themselves will address an ethical implementation of the Metaverse when it is likely to collide with their financial profit.

The issue of ethics in the Metaverse should instead be addressed by an independent, worldwide board that introduces effective oversight, taking into account the security and privacy of Metaverse users. In contrast to AI, which is mostly governed by soft law (ethical guidelines that do not legally bind organizations; Jobin et al., 2019), the Metaverse should, in my opinion, be governed by hard law, as it is even more threatening to users' privacy and safety. The question, however, remains: would countries agree to adopt hard law? And would it not limit the development of the Metaverse?

Sources:

Jobin, A., Ienca, M. and Vayena, E., 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9).

Entsminger, J., 2022. Who will establish Metaverse ethics? Project Syndicate. Available at: https://bit.ly/3Mhnnlk (Accessed: 9th October 2022).

TechDesk, 2022. 8 things you can’t do in the metaverse: A look into this new virtual world. The Indian Express. Available at: https://indianexpress.com/article/technology/crypto/8-things-you-cant-do-in-the-metaverse-a-look-into-this-new-virtual-world-8156570/ (Accessed: 8th October 2022).


Can AI read our emotions?

9 September 2022


As new technologies become ever more widely accessible, businesses are looking for ways to further automate our lives. The idea is simple (and quite convincing!): why should we spend time on operations that AI can be trained to perform? In many cases, AI and machine-learning solutions are even more effective, as no human error is at play. One such technology, which is revolutionizing some industries, is emotion recognition (Sydorenko, 2021). Companies like the start-up Emotient have produced software which, they claim, can identify people’s emotions (Crawford, 2021).

Emotion recognition tools are now applied to quite a large array of activities: they can be used to analyze you during a job interview, and airports have used them to identify potentially dangerous people. Take the recruiting company HireVue, for example: in 2014 it released software that analyzes job applicants’ faces and voice tones. These signals are then compared with those of the best workers at the particular company, in the hope that such attributes can identify the employees who will bring the most value to the company (Crawford, 2021).
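To make that comparison step concrete, here is a minimal sketch of how such a system might score a candidate against a profile averaged over a company’s best workers. Everything in it, the feature names, the numbers, and the cosine-similarity metric, is my own illustrative assumption, not HireVue’s actual method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Purely illustrative features, e.g. smile frequency, speech rate, and
# pitch variation extracted from interview video (invented numbers).
top_performer_profile = np.array([0.72, 0.55, 0.63])  # averaged over "best workers"
candidate_features = np.array([0.40, 0.58, 0.70])

score = cosine_similarity(candidate_features, top_performer_profile)
print(f"Similarity to top-performer profile: {score:.2f}")
```

Notice where the evaluative work happens: whoever most resembles the existing top performers scores highest, so any bias in who those top performers are gets silently reproduced.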

We could argue that systems like this are wonderful examples of how much our society can benefit from technology. There is just one issue: all of these emotion recognition systems are based on the theory that all humans experience a few universal emotions which are natural, innate, and not dependent on the culture we grew up in (Sydorenko, 2021). This theory has in fact not yet been proven by any reliable and detailed research (Gifford, 2020). Many believe that the American psychologist Paul Ekman proved it, but the scientific community (e.g. Margaret Mead, the American cultural anthropologist) has been sceptical about his research methods and assumptions (Crawford, 2021).
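To show how deeply this assumption is baked in, here is a minimal sketch, under my own assumptions rather than any vendor’s real design, of a typical emotion classifier: the output space is fixed to a handful of “universal” categories before a single face is analysed.

```python
# Ekman-style "universal" emotions, fixed before any data is seen.
EKMAN_CATEGORIES = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_expression(category_scores: list[float]) -> str:
    """Pick the highest-scoring category; scores would come from a trained model."""
    # The output space is closed: blended, ambiguous, or culture-specific
    # expressions are all forced into the nearest "universal" category.
    best = max(range(len(EKMAN_CATEGORIES)), key=lambda i: category_scores[i])
    return EKMAN_CATEGORIES[best]

# Example with invented model scores for one face image.
print(classify_expression([0.05, 0.02, 0.10, 0.70, 0.08, 0.05]))  # -> happiness
```

If the universality theory is wrong, no amount of training data fixes this: the error sits in the label set itself.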

To me it seems very irresponsible to create tools that can, presumably, read people’s emotions and ambitions, while scientists have not even been able to prove that emotions can be read from human expressions or voices at all. Such tools also assume that all of us express every emotion in the same way, which is hard to believe. In this shape, emotion recognition technologies often put people at a disadvantage (rejection from a job, identification as a dangerous individual) without any reliable basis.

Do you think it is a dangerous phenomenon? Or rather an advancement towards a more technologically operated world?

References:

Sydorenko, I. (2021, August 25). AI in emotion recognition: Does it work? Label Your Data. https://labelyourdata.com/articles/ai-emotion-recognition

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. 

Gifford, C. (2020, June 15). The problem with emotion-detection technology. The New Economy. https://www.theneweconomy.com/technology/the-problem-with-emotion-detection-technology
