Online Content Moderation: To AI or Not

AI Moderators

Last week, YouTube decided to bring more human moderators back to vet content on the streaming platform. In an interview, YouTube’s Chief Product Officer Neal Mohan said that the lack of human oversight had led machine-based moderators to take down a whopping 11 million videos that broke none of the community guidelines. Mohan said, “Even 11m is a very, very small, tiny fraction of the overall videos on YouTube . . . but it was a larger number than in the past.”

It must be noted that in a lengthy blog post in March this year, YouTube said it would deploy more machine and AI-based moderators to review content without any human intervention, in view of the new working conditions imposed by the pandemic. Twitter, too, had announced a similar decision.


So, which is better, human or AI-powered moderation?

AI for Content Moderation

Content moderation has become an essential practice for digital and media platforms, social media websites, and e-commerce marketplaces looking to drive their growth. It involves removing content that is irrelevant, obscene, illegal, or otherwise inappropriate and deemed unsuitable for public viewing.

AI helps optimise this moderation process through algorithms that learn from existing data and make review decisions on new content. Broadly, AI-based moderation systems view content in two senses: content-based moderation and context-based moderation.

Content-based moderation includes both text and image/video content moderation. Natural language processing (NLP) is the preferred technique for reviewing text content, and it can also handle speech via speech-to-text techniques. Named entity recognition (NER) is an important NLP technique for recognising harmful text content such as terrorist propaganda, hate speech, harassment, and fake news. Further, sentiment analysis can be adopted to classify and label portions of content by their level of emotion. Computer vision technologies such as object detection and semantic segmentation enable machines to analyse images and identify harmful objects and their locations. Optical character recognition (OCR) is further used to identify and transcribe text within images and videos.
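The text side of content-based moderation can be illustrated with a toy keyword-and-rule sketch. This is a stand-in for real trained NER and sentiment models; the word lists, thresholds, and the `moderate_text` function are invented purely for illustration:

```python
# Toy content-based text moderation: a keyword stand-in for NER plus a
# crude sentiment score. Production systems use trained NLP models instead.

# Hypothetical lexicons, invented for illustration only.
HARMFUL_TERMS = {"attack", "bomb", "kill"}
NEGATIVE_WORDS = {"hate", "awful", "terrible", "stupid"}

def moderate_text(text: str) -> dict:
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    flagged = sorted(set(tokens) & HARMFUL_TERMS)
    # Crude "sentiment": fraction of tokens that are negative words.
    negativity = sum(t in NEGATIVE_WORDS for t in tokens) / max(len(tokens), 1)
    if flagged:
        action = "remove"          # matched a harmful-entity keyword
    elif negativity > 0.2:
        action = "review"          # strongly negative tone, send to review
    else:
        action = "allow"
    return {"flagged_terms": flagged, "negativity": negativity, "action": action}

print(moderate_text("I hate this, it is awful and terrible"))
print(moderate_text("There is a bomb threat in this video"))
```

A real pipeline would replace the keyword sets with model inference, but the shape of the decision (detect entities, score tone, map to an action) is the same.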

Context-based moderation relies on ‘reading between the lines’: the AI learns from several sources to build a contextual understanding of the content. This form of moderation is still under development.
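The idea of ‘reading between the lines’ can be sketched with a toy context-window check, where the same word is treated differently depending on its neighbours. The context word lists here are invented for illustration; real systems learn such associations from data:

```python
# Toy context-based check: the word "shoot" is flagged only when nearby
# words suggest violence, not when they suggest photography.
VIOLENT_CONTEXT = {"gun", "crowd", "people", "them"}
BENIGN_CONTEXT = {"photo", "film", "video", "scene"}

def is_harmful_in_context(text: str, trigger: str = "shoot", window: int = 3) -> bool:
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    for i, tok in enumerate(tokens):
        if tok == trigger:
            # Look at a few words on either side of the trigger word.
            nearby = set(tokens[max(0, i - window): i + window + 1])
            if nearby & VIOLENT_CONTEXT and not nearby & BENIGN_CONTEXT:
                return True
    return False

print(is_harmful_in_context("Let's shoot a photo of the sunset"))  # benign context
print(is_harmful_in_context("He said he would shoot the crowd"))   # violent context
```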

Which One Bests The Other: AI or Human 

Several factors make the case in favour of AI moderators strong:

  • Reportedly, every day, at least 2.5 quintillion bytes of data are created. And it is growing with each passing year. AI-powered moderation is a great choice, as compared to human moderators, given the humongous amount of content it can detect and analyse. 
  • Further, human moderators exposed to hours of agonising content can suffer serious trauma and mental health issues such as post-traumatic stress disorder, a risk that deploying AI models avoids. Case in point: a former YouTube contract moderator sued the company, alleging extreme mental trauma as a direct result of watching hours of harmful content.
  • The cost of human moderation is quite high. It goes without saying that moderation does not really generate revenue for the company; it is seen merely as a necessary evil, with toxic content to be vetted and curbed at all costs so as not to drive users away.

Having said that, as in the case of YouTube, AI cannot mimic human capabilities and sensitivities in deciding which content is harmful and to what extent. AI is best at automating processes with straightforward datasets and defined characteristics. However, it falls short when it comes to more nuanced and subjective decision-making.

Human rights and free speech experts around the world are against fully automated content moderation, as its bluntness is bound to erroneously infringe the right to create and circulate critical information. For example, in May 2020, YouTube admitted that its enforcement systems ‘mistakenly’ deleted comments critical of the Chinese Communist Party (CCP).

Decision-making in moderation is a complex process, and for now, a hybrid human-AI moderation system is what looks like the best option.
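One common way to realise such a hybrid system is confidence-threshold routing: the model acts automatically only when it is confident, and escalates borderline cases to a human review queue. A minimal sketch, with thresholds and scores invented for illustration:

```python
# Hybrid human-AI routing: automate confident decisions, escalate the rest.
def route(model_score: float,
          remove_above: float = 0.95,
          allow_below: float = 0.05) -> str:
    """model_score is the model's estimated probability that content is harmful."""
    if model_score >= remove_above:
        return "auto-remove"       # AI is confident the content is harmful
    if model_score <= allow_below:
        return "auto-allow"        # AI is confident the content is fine
    return "human-review"          # borderline: send to a human moderator

for score in (0.99, 0.50, 0.02):
    print(score, "->", route(score))
```

Tightening the thresholds sends more content to humans (safer but costlier); loosening them automates more (cheaper but more error-prone), which is exactly the trade-off the article describes.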

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world, with a special interest in analysing its long-term impact on individuals and societies.
