Last week, YouTube decided to bring more human moderators back to vet content on the platform. In an interview, YouTube’s Chief Product Officer Neal Mohan said that the lack of human oversight had caused machine-based moderators to take down some 11 million videos that broke none of the community guidelines. Mohan said, “Even 11m is a very, very small, tiny fraction of the overall videos on YouTube . . . but it was a larger number than in the past.”
It must be noted that in a lengthy blog post in March this year, YouTube said it would deploy more machine- and AI-based moderators to review content without human intervention. This was done in view of the new working conditions during the pandemic. Twitter, too, had announced a similar decision.
So, which is better, human or AI-powered moderation?
AI for Content Moderation
Content moderation has become an important practice for digital and media platforms, social media websites, and e-commerce marketplaces looking to drive growth. It involves removing content that is irrelevant, obscene, illegal, or otherwise inappropriate and deemed unsuitable for public viewing.
AI helps optimise the moderation process through algorithms that learn from existing data and make review decisions about content. Broadly, AI-based moderation systems view content in two senses: content-based moderation and context-based moderation.
Content-based moderation covers both text and image/video content. Natural language processing is the preferred technique for reviewing text content, and it can be extended to speech using speech-to-text techniques. Named entity recognition (NER) is an important NLP technique for recognising harmful text content such as terrorist propaganda, hate speech, harassment, and fake news. Further, sentiment analysis can be adopted to classify and label portions of content by the level of emotion they express. Computer vision technologies such as object detection and semantic segmentation enable machines to analyse images and identify harmful objects and their locations. Optical character recognition (OCR) is further used to identify and transcribe text within images and videos.
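As a toy illustration of content-based text moderation (not YouTube's actual pipeline), the sketch below stands in for the NLP steps described above with a hand-rolled keyword filter and a crude negativity score; the term lists, threshold, and function name are all invented for the example, where a real system would use trained NER and sentiment models:

```python
import re

# Invented term lists for illustration only; production systems rely on
# trained NLP models (NER, sentiment classifiers), not hard-coded phrases.
HARMFUL_PHRASES = {"spread the hoax", "attack the group"}
NEGATIVE_WORDS = {"hate", "awful", "disgusting"}

def moderate_text(text: str) -> dict:
    """Return a moderation decision for a piece of text content."""
    lowered = text.lower()
    # Stand-in for NER/classification: match known harmful phrases.
    flagged = [p for p in HARMFUL_PHRASES if p in lowered]
    # Stand-in for sentiment analysis: fraction of negative words.
    words = re.findall(r"[a-z']+", lowered)
    negativity = sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)
    return {
        "remove": bool(flagged) or negativity > 0.5,
        "flagged_phrases": flagged,
        "negativity": round(negativity, 2),
    }
```

Even this toy version shows the core trade-off: a purely content-based rule fires on matching text regardless of who is saying it or why, which is exactly the gap context-based moderation tries to close.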
Context-based moderation is based on ‘reading between the lines’. The AI learns from several sources to build a contextual understanding of the content. This form of moderation is still under development.
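A minimal sketch of what 'reading between the lines' might mean in practice: the same flagged phrase can be acceptable when quoted in news reporting but not when stated directly. The heuristic below (reporting-verb cues and quotation marks, both invented for this example) is only a crude proxy; real context-based systems learn such distinctions from large training corpora:

```python
# Crude context check: is a flagged term quoted or attributed to someone,
# rather than stated directly by the author? Cue list is illustrative only.
REPORTING_CUES = ("reported that", "said that", "quoted as saying")

def is_reporting_context(text: str, term: str) -> bool:
    """Heuristically decide whether `term` appears in a reporting context."""
    lowered = text.lower()
    idx = lowered.find(term.lower())
    if idx == -1:
        return False
    preceding = lowered[:idx]
    # Term inside quotation marks (odd number of quotes before it),
    # or preceded by a reporting verb phrase.
    in_quotes = lowered.count('"', 0, idx) % 2 == 1
    return in_quotes or any(cue in preceding for cue in REPORTING_CUES)
```

The brittleness of such hand-written heuristics is precisely why context-based moderation remains an open problem.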
Which One Bests The Other: AI or Human
Several factors make the case in favour of AI moderators strong:
- Reportedly, at least 2.5 quintillion bytes of data are created every day, and the figure grows with each passing year. Given the humongous amount of content it can detect and analyse, AI-powered moderation is a strong choice compared to human moderators.
- Further, human moderators exposed to hours of agonising content may face major trauma and mental health issues such as post-traumatic stress disorder; deploying AI models avoids this exposure altogether. Case in point: an ex-YouTube on-contract moderator sued the company, alleging extreme mental trauma as a direct result of watching hours of harmful content.
- The cost of human moderation is quite high. It goes without saying that moderation does not really generate revenue for the company; it is merely seen as a necessary evil, with content vetted to diminish toxic material at all costs so as not to drive users away.
Having said that, as the YouTube episode shows, AI cannot mimic human capabilities and sensitivities in deciding which content is harmful and to what extent. AI is best at automating processes with straightforward datasets and well-defined characteristics; it falls short when it comes to more nuanced and subjective decision-making.
Human rights and free speech experts around the world are against fully automated content moderation, as its bluntness is bound to erroneously infringe the right to create and circulate critical information. For example, in May 2020, YouTube admitted that its enforcement systems ‘mistakenly’ deleted comments critical of the Chinese Communist Party (CCP).
Decision-making in moderation is a complex process, and for now, a hybrid human-AI moderation system is what looks like the best option.