
Understanding The Technology Behind The Content Moderation Systems Of Tech Giants

Facebook’s AI-based content moderation algorithm removed over 33 million pieces of content between June 16 and July 31 in India.

Facebook has often been in the news for content moderation, whether political, racial or ethical. Recently, the tech giant revealed in its monthly compliance report that its artificial intelligence-based content moderation algorithms had removed over 33 million pieces of content between June 16 and July 31 this year in India alone. In addition, Facebook removed another 2.6 million pieces of content from its photo and video sharing platform Instagram.

Facebook has published these reports since the implementation of India's new Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

What’s with the content? 

Facebook said that the largest share of these takedowns was spam, and that its AI algorithms took down 99.9 per cent of such problematic content over the said period. In addition, 2.6 million of the removed pieces concerned nudity and sexual activity, and another 3.5 million involved violent or graphic content.

On the other hand, Instagram proactively detected 64.6 per cent of content related to bullying and harassment, compared to Facebook's 42.3 per cent. These comparatively low rates show the algorithms are still far from their best in such categories.

Leveraging AI for Content Moderation

Like Instagram and its parent company Facebook, social media giant Twitter uses machine learning for content moderation. Artificial intelligence helps these tech giants scale the work of human experts. Facebook's ML systems take the following actions before a post or comment harms people (a simplified pipeline is sketched after this list):

  • Reducing the distribution of problematic posts or content
  • Adding warnings and context to content rated by third-party fact-checkers
  • Removing misinformation
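
None of Facebook's internal code is public, but a minimal sketch of such a tiered enforcement pipeline might look like the following, with purely hypothetical classifier names, scores, and thresholds:

```python
from dataclasses import dataclass

# Hypothetical classifier outputs; a production system would get these
# from trained models for spam, graphic content, misinformation, etc.
@dataclass
class ModerationScores:
    spam: float      # probability the post is spam
    graphic: float   # probability of violent/graphic content
    misinfo: float   # probability of misinformation

def moderate(scores: ModerationScores) -> str:
    """Map classifier scores to the three actions listed above.
    Thresholds are illustrative, not Facebook's real values."""
    if scores.spam > 0.95 or scores.graphic > 0.90:
        return "remove"   # take the content down outright
    if scores.misinfo > 0.80:
        return "warn"     # attach a fact-checker context label
    if max(scores.spam, scores.graphic, scores.misinfo) > 0.50:
        return "demote"   # reduce the post's distribution in feeds
    return "allow"

print(moderate(ModerationScores(spam=0.10, graphic=0.20, misinfo=0.85)))  # -> warn
```

The key design point is that removal is reserved for high-confidence predictions, while lower-confidence content is demoted or labelled rather than deleted, which limits the damage a false positive can do.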

Last year, Facebook AI announced that it had deployed SimSearchNet++, an upgraded version of its image matching model SimSearchNet. The model is trained using self-supervised learning to match variations of the same image with high precision. Facebook claims that SimSearchNet++ improves recall while maintaining accuracy, enhancing its ability to find true instances of misinformation while triggering few false positives. It is also reportedly more effective at grouping collages of misinformation. The model runs on both Facebook and Instagram.
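
SimSearchNet++ itself is not open source, but the core idea of near-duplicate matching can be sketched with any pretrained image encoder: embed each image, then compare embeddings by cosine similarity against an index of known misinformation. The sketch below uses torchvision's ResNet-18 as a stand-in encoder; the model choice, the 0.9 threshold, and the helper names are illustrative assumptions, not Facebook's method:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic pretrained backbone standing in for SimSearchNet++'s
# self-supervised encoder (the real model is not public).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep 512-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    v = backbone(x).squeeze(0)
    return v / v.norm()             # unit-normalise so dot product = cosine

def is_known_misinfo(path: str, index: list[torch.Tensor],
                     thresh: float = 0.9) -> bool:
    """Flag an image if it is a near-duplicate of any indexed reference."""
    v = embed(path)
    return any(float(v @ ref) > thresh for ref in index)
```

Because crops, filters, and overlaid text only shift the embedding slightly, a similarity threshold catches edited variants that an exact hash match would miss.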


Facebook also introduced AI systems to automatically detect new variations of harmful content. These systems rely on technologies including ObjectDNA, which focuses on key objects within an image while ignoring background clutter, and LASER (Language-Agnostic SEntence Representations), a cross-lingual sentence-level embedding library developed by Facebook AI researchers.
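
Cross-lingual embeddings let a claim debunked in one language be matched against posts in another, since translations land near each other in a shared vector space. A minimal sketch, assuming the community laserembeddings wrapper around LASER (the package choice, the example sentences, and the comparison are illustrative assumptions):

```python
import numpy as np
from laserembeddings import Laser  # community wrapper around Facebook's LASER

laser = Laser()  # loads the pretrained multilingual encoder

# A debunked claim in English and a paraphrase of it in Hindi should map
# to nearby points in LASER's shared 1024-dimensional embedding space.
en = laser.embed_sentences(["The vaccine contains a microchip"], lang="en")
hi = laser.embed_sentences(["टीके में एक माइक्रोचिप है"], lang="hi")

cos = (en @ hi.T).item() / (np.linalg.norm(en) * np.linalg.norm(hi))
print(f"cross-lingual similarity: {cos:.2f}")  # high value -> likely the same claim
```

This is what makes the approach attractive for a market like India, where the same piece of misinformation circulates in many languages at once.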

Additionally, Facebook collaborated with industry leaders and academic experts on the Deepfake Detection Challenge (DFDC), an open initiative to develop new tools for detecting deepfakes.

Despite these measures, according to a report by NYU Stern, Facebook continues to make about 300,000 content moderation mistakes every day.

The Price Paid by Moderators

Facebook continues to make headlines for its questionable practices. Most recently, Irish authorities fined the company about $270 million for not being transparent about how it collects people's data.

Content moderation is at the heart of Facebook's business model. It is, therefore, imperative for the company to keep its moderators content. In July this year, content moderators wrote an open letter to Facebook demanding change: fair treatment, a safe workspace, and mental health support. To add to it, a recent article by the New York Times revealed how insiders at Accenture, Facebook's largest content moderation contractor, have been questioning the ethics of working for the company. In eight-hour shifts, thousands of Accenture employees sort through Facebook's most problematic content, including messages and videos about suicide and sexual acts, to stop it from spreading online.

If sufficiently powerful algorithms were in place to 'scale up' the work of human experts, Facebook would not be facing a constant stream of complaints from moderators. Despite his promises to clean up the social media platform, Mark Zuckerberg relies on third-party consulting and staffing firms to remove the harmful content that AI cannot. Since 2012, Facebook has reportedly hired at least 10 consulting and staffing firms worldwide for content moderation, and it pays Accenture about $500 million every year for its services. Until the models are better trained to tell the harmful from the harmless, harmful content and misinformation will continue to spread across social media platforms.


