Leave Content Moderation to AI

22 September 2020



When thinking of the hardest jobs in the world, 'content moderator' likely does not come to mind. Nonetheless, with news that a former YouTube content moderator is suing the company for failing to protect her mental health, this controversial job is at the forefront of debate again. This is far from the first time a moderator of online content has attacked one of the social media giants over PTSD developed as a result of the job. In May, Facebook paid out a $52 million settlement to former moderators following a lawsuit.

So, what is a content moderator? A content moderator reviews the content posted by a platform's users, often after it has been flagged by other users. As a result, moderators spend their days viewing horrendous material, including cannibalism, child pornography, decapitations and so on. It should come as no surprise that, without proper psychological support, anyone doing this would develop deep trauma; and this is exactly what happens. The tech companies outsource the process to firms such as Collabera or Accenture, which hire at minimum wage, provide minimal training and support, and make employees sign NDAs as well as documents acknowledging the risk of developing PTSD (as a way to shield themselves from liability).

What can be done? One could argue for processes that protect these workers' health, such as researching PTSD (to better understand and prevent it), capping the amount of content viewed, or offering continuous psychological support. I nonetheless take a different approach, which is to make this a job of the past and fully automate the process using AI trained through machine learning. AI is already used to screen content when it is first uploaded to a platform; moderators only intervene once users report content. With the recent development of 'deep neural networks', systems can now effectively recognise complex data inputs such as images and videos. For a wide range of applications, trained AI systems perform roughly as well as humans, though still with some degree of error. As algorithms continue to improve and computational power gets cheaper, there is little doubt that AI systems can become online content moderators. As they are fed more and more disturbing content, further training will continuously reduce their error rate.
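To make this concrete, here is a minimal sketch of what an AI moderation check on an uploaded image could look like, assuming a classifier (for example, a fine-tuned convolutional network) that outputs probabilities over policy categories. The category names, the confidence threshold, and the 'escalate' logic are illustrative assumptions on my part, not any platform's actual system.

```python
# Minimal sketch of AI-assisted image moderation (illustrative, not a real
# platform's pipeline). Assumes a PyTorch model fine-tuned to output one
# probability per policy category; labels and threshold are placeholders.
import torch
import torchvision.transforms as T
from PIL import Image

CATEGORIES = ["safe", "graphic_violence", "adult", "hate_symbol"]  # hypothetical
THRESHOLD = 0.9  # assumed confidence above which content is auto-removed

# Standard ImageNet-style preprocessing for a convolutional classifier.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def moderate(image_path: str, model: torch.nn.Module) -> str:
    """Return 'remove', 'escalate', or 'allow' for an uploaded image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    label = CATEGORIES[int(probs.argmax())]
    confidence = float(probs.max())
    if label != "safe" and confidence >= THRESHOLD:
        return "remove"    # high-confidence violation: handled without a human
    if label != "safe":
        return "escalate"  # uncertain cases are where humans sit today
    return "allow"
```

In this sketch, the 'escalate' branch is exactly the work currently done by human moderators; as the error rate falls, fewer and fewer cases would land there.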

Do you think this process should be left to humans?

 

Figure: the process of online content moderation and where AI can intervene.

 

Sources:

https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf

https://www.theverge.com/2020/9/22/21450477/youtube-content-moderator-sues-lawsuit-ptsd-graphic-content-exposure 

https://www.theverge.com/interface/2020/1/28/21082642/content-moderator-ptsd-facebook-youtube-accenture-solutions 

https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health


2 thoughts on “Leave Content Moderation to AI”

  1. Thanks for the interesting post Basile. I agree with you that AI can be a good way of supporting online content moderators, especially considering the shocking images they often see. But I also see a danger in letting AI take over content moderation entirely. Recommendation systems like Facebook's have already established a closed flow of information: you are only recommended what you are already interested in, leading to a filter bubble that confirms your established world views. I argue that the use of AI for moderation will only increase this problem.

    What counts as disturbing or disallowed content is defined by people themselves, and different cultures can define it differently. I think the online content moderator's job therefore also involves taking a broad view that goes beyond their own personal values and cultural influences. When AI is used to moderate online content, the most active users will effectively set the rules the AI follows: if users with a specific cultural background are more prone to report certain posts or images, the AI will be biased towards that cultural influence. The rules a neural network learns are already difficult for experienced programmers to examine, so such biases are hard to spot and will slowly skew the content of the platform.

    I therefore argue that AI should not be implemented to support online content moderators until these shortcomings are fixed.

  2. Very interesting post, Basile! I think that content moderators developing PTSD is a problem that does not get enough publicity. While everyone who uses social media expects their feed to be clear of shocking content, few people know how much horrible material these moderators see on a daily basis. I do agree with you that, in an ideal situation, no human being should be exposed to such videos. However, AI is currently nowhere near as good as human moderators. In March this year, YouTube decided to rely heavily on AI for content moderation because employees could not come into the office to moderate the huge number of videos uploaded to the platform; they could not work from home, as the work requires a secure environment. In August, YouTube revealed that in the three months prior it had removed the highest number of videos since its launch in 2005: 11.4 million. Many of these videos were wrongfully removed, which YouTube attributed to the heavier reliance on AI. Humans are still far better than AI at making nuanced decisions. Do you think that a combination of human moderators and AI might be a better option to ensure that no content is wrongfully removed?

    Some interesting links:
    https://www.inputmag.com/tech/youtubes-ai-moderators-flag-far-more-videos-than-humans-do
    https://www.theverge.com/2020/3/16/21182011/youtube-ai-moderation-coronavirus-video-removal-increase-warning
