
Facebook & Its Tumultuous Relationship With AI-Based Content Moderation

Shraddha Goled

During a recent press meet, a Facebook spokesperson said that the social media giant would be redoubling its efforts to counter ‘harmful content’ on its platform using artificial intelligence. Reportedly, Ryan Barnes, a Facebook product manager on the community integrity team, said that the company would use AI to prioritise harmful content. The move is aimed at helping its over 15,000 human reviewers and moderators deal with reported content.
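The details of Facebook’s queueing system are not public. The sketch below only illustrates the general idea of severity-first prioritisation of reported posts; the scoring function and its weights are entirely hypothetical:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReportedPost:
    # Python's heapq is a min-heap, so we store the negated score:
    # the most severe items then surface first.
    priority: float
    post_id: str = field(compare=False)

def priority_score(severity: float, virality: float, violation_prob: float) -> float:
    # Hypothetical weighting: real-world harm (severity) dominates,
    # per the "worst of the worst" framing in the article.
    return 0.6 * severity + 0.25 * virality + 0.15 * violation_prob

queue = []
for post_id, sev, vir, prob in [("a1", 0.9, 0.7, 0.95), ("b2", 0.2, 0.9, 0.4)]:
    heapq.heappush(queue, ReportedPost(-priority_score(sev, vir, prob), post_id))

while queue:
    item = heapq.heappop(queue)
    print(f"review {item.post_id} (score={-item.priority:.2f})")
```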

Barnes said during the press interaction, “We want to make sure we’re getting to the worst of the worst, prioritising real-world imminent harm above all.” 



That said, there have been numerous attempts in the past to bring AI into the content moderation process on Facebook’s platforms, and not all of them have met with success. Here, we track some of Facebook’s major efforts and how they have fared in tackling the issue.

Facebook’s Efforts Towards AI-Based Moderation

In the past, Facebook has used XLM, a cross-lingual language model that uses a single shared encoder trained on massive amounts of multilingual data. This provided an improvement over both supervised and unsupervised machine translation of low-resource languages, allowing better detection of hate speech and harmful content in languages other than English. The system enabled classifiers trained in one language, in most cases English, to be applied across other languages, and it was able to proactively detect harmful language and content in about 40 languages.
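Facebook’s internal classifiers are not public, but the underlying idea of training a classification head on English labels atop a shared multilingual encoder, then applying it to other languages, can be illustrated with the open-source XLM-R checkpoint. A minimal sketch, assuming the Hugging Face transformers library and an illustrative two-label scheme:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # e.g. 0 = benign, 1 = harmful (assumed labels)
)

# Fine-tune on English labels only (training loop omitted here); the shared
# encoder then lets the same head score text in XLM-R's other pretraining languages.
texts = ["This is fine.", "Ceci est un exemple en français."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs[:, 1])  # per-example probability of the "harmful" label
```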

This method was soon succeeded by Whole Post Integrity Embeddings (WPIE), a pre-trained representation of content for integrity problems. Compared to previous systems, WPIE was trained on a larger set of violations and more training data. While introducing the method, Facebook said in its blog that it “improves performance across modalities by using focal loss, which prevents easy-to-classify examples from overwhelming the detector during training, along with gradient blending, which computes an optimal blend of modalities based on their overfitting behaviour.”
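Gradient blending is specific to Facebook’s multimodal setup, but focal loss itself is a published technique (Lin et al., 2017). A minimal PyTorch sketch of how it down-weights easy, well-classified examples:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Focal loss: scales cross-entropy by (1 - p_t)**gamma, so confident,
    correct predictions contribute almost nothing to the total loss."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per example
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

# Toy usage: one easy example (confident and correct) and one hard example.
logits = torch.tensor([[4.0, -4.0], [0.1, 0.0]])
targets = torch.tensor([0, 1])
print(focal_loss(logits, targets))  # dominated by the hard second example
```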

Facebook claimed that, upon deployment, these tools substantially improved the performance of its integrity systems. For example, WPIE helped detect about 97.6% of the 4.4 million pieces of drug sale content taken down from the platform in 2019.

Earlier this year, as the COVID-19 situation loomed large, Facebook started using SimSearchNet, a convolutional neural network-based model originally built to detect near-exact duplicates, to fight misinformation. The company said SimSearchNet was performing end-to-end image indexing to recognise and flag near-duplicate matches.
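SimSearchNet itself has not been open-sourced, but the general embed-and-index pattern it relies on can be sketched with an off-the-shelf backbone. The model choice, file names, and similarity threshold below are illustrative assumptions, not Facebook’s actual pipeline:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Drop the classification head so the network emits a 512-d embedding.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(img), dim=-1)

# A newly uploaded image matches if its embedding is nearly identical to one
# already labelled as misinformation (paths and threshold are placeholders).
known, candidate = embed("flagged.jpg"), embed("upload.jpg")
similarity = (known @ candidate.T).item()
print("near-duplicate" if similarity > 0.97 else "no match")
```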



Just recently, Facebook introduced its machine translation model, M2M-100, trained on 2,200 language directions, about ten times the training data used in previous multilingual models. As per the company, the model was built on a many-to-many data set of 7.5 billion sentences covering 100 languages, mined using novel techniques. The resulting model captures information from related languages and reflects a more diverse range of scripts and morphologies. One salient feature of the technique is that it does not require English as a link between two languages; a sentence can be translated from one language to another without first passing through English. The ultimate goal is to extend such many-to-many translation towards the world’s roughly 7,000 languages, particularly benefiting low-resource ones. Apart from its obvious application in communication, Facebook anticipated that M2M-100 would help with content moderation across a larger set of languages.
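Since Facebook has open-sourced M2M-100, the no-English-pivot behaviour can be tried directly. A short example using the publicly released 418M-parameter checkpoint via the Hugging Face transformers library, translating Hindi straight to French:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "hi"  # source language: Hindi
encoded = tokenizer("जीवन एक चॉकलेट बॉक्स की तरह है।", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # target language: French
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```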

How Successful Are These Moderation Techniques?

In more ways than one, 2020, with the worldwide pandemic and the much-anticipated US presidential elections, put Facebook’s AI content moderation system through a litmus test. While there were a few hits, many loopholes were left exposed.

In the context of the recently concluded US elections, the CEOs of Facebook and Twitter were summoned to appear before the Senate Judiciary Committee this week over the platforms’ failure to take down harmful, often inflammatory content, and over allegations of bias.


In addition to this, calls around the health and safety of human moderators at Facebook have only grown stronger. The moderators have accused the company of failing to provide a conducive working environment, underpaying them, and forcing them to return to offices even at the height of the pandemic. It is to be noted that at the beginning of this year, in light of the coronavirus outbreak, thousands of these content moderators were sent home, and content moderation was left primarily to the AI systems, albeit with limited success.

Apart from these specific examples, there has been repeated criticism of Facebook’s handling of harmful content. On more than one occasion, the platform has been found pulling down seemingly harmless content while allowing dangerous content to thrive.

Wrapping Up

Despite the new advancements Facebook regularly announces in AI-enabled content moderation, the fact remains that it still depends heavily on human moderators. The problem with this arrangement is that daily exposure to hours of triggering content takes a toll on these moderators’ mental well-being. Additionally, they complain of being overworked and underpaid.

Given all these issues, it is safe to say that it will be a while before Facebook presents a truly breakthrough AI model for countering harmful, triggering, and biased content on its platform.
