Leave Content Moderation to AI
When you think of the hardest jobs in the world, ‘content moderator’ is unlikely to come to mind. Yet with news that a former YouTube moderator is suing the company for failing to protect her mental health, this controversial job is at the forefront of debate again. It is far from the first time a moderator of online content has taken a social media giant to task over the PTSD the job caused: in May, Facebook paid out a $52 million settlement to former moderators following a similar lawsuit.
So, what is a content moderator? A content moderator reviews the content posted by users of a platform, usually after it has been reported by other users. As a result, moderators spend their days viewing horrendous material: cannibalism, child pornography, decapitations and so on. It should come as no surprise that, without proper psychological support, anyone doing this would develop deep trauma; and that is exactly what happens. The tech companies outsource the work to contractors such as Collabera or Accenture, who hire at minimum wage, provide minimal training and support, and make employees sign NDAs along with documents acknowledging the risk of developing PTSD (a way to shield themselves from liability).
What can be done? One option is to put processes in place to protect these workers: researching PTSD to better understand and prevent it, capping the amount of content viewed per day, or offering constant psychological support. I take a different approach, which is to make this a job of the past and fully automate the process with AI systems trained through machine learning. AI is already applied when content is first uploaded to a platform; moderators only intervene once users report it. With the development of ‘deep neural networks’, systems can now effectively recognise complex inputs such as images and videos, and for a wide range of applications trained models perform roughly on par with humans, albeit with some degree of error. As algorithms improve and computational power gets cheaper, there is little doubt that AI systems can become fully-fledged online content moderators: every new piece of disturbing content they process becomes further training data, continuously reducing their error rate.
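To make this concrete, the sketch below (in Python, using PyTorch) shows what one automated moderation step could look like: a classifier scores each uploaded image, high-confidence decisions are made automatically, and only ambiguous cases are escalated. The fine-tuned weights file, the binary allowed/violating labels and the 0.95 confidence threshold are illustrative assumptions, not any platform's actual system.

```python
# Minimal sketch of an automated moderation step: a convolutional network
# scores an uploaded image, and only low-confidence cases are escalated.
# Assumes a hypothetical fine-tuned binary classifier saved as
# "moderation_model.pt" (classes: 0 = allowed, 1 = violating).
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_model(weights_path: str) -> torch.nn.Module:
    # ResNet-18 with a 2-class head; the weights are assumed to come
    # from prior fine-tuning on labelled moderation data.
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def moderate(model: torch.nn.Module, image_path: str,
             auto_threshold: float = 0.95) -> str:
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    p_violation = probs[1].item()
    # High-confidence decisions are automated; uncertain cases are the
    # (shrinking) share still routed to a human reviewer.
    if p_violation >= auto_threshold:
        return "remove"
    if p_violation <= 1 - auto_threshold:
        return "allow"
    return "escalate"
```

As the model's error rate falls with further training, the escalation band between the two thresholds can be narrowed until human review is barely needed at all.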
Do you think this process should be left to humans?
Figure: the process of online content moderation and where AI can intervene.