Facebook’s CTO Is Using AI To Wage War With Bad Content

Ram Sagar

“A week before the Christchurch shooting, saying ‘I wish you were in the mosque’ probably doesn’t mean anything. A week after, that might be a terrible thing to say.”

Mike Schroepfer, CTO, Facebook, in an interview with IEEE Spectrum

From the Cambridge Analytica scandal to Zuckerberg's awkward Senate hearings, Facebook has been at the centre of many controversies over the past couple of years.

Now, FB's CTO, Mike Schroepfer, is waging war against harmful content on the platform. In an interview with IEEE Spectrum, he detailed how he and his team in the AI and Integrity department have been working towards that objective.



Facebook's AI and Integrity team, headed by Schroepfer, develops technologies to keep people safe on the company's platforms. It does so using algorithmic techniques drawn from natural language processing, computer vision, and machine learning, with a particular focus on multilingual understanding, misinformation, tampering, entity detection, and semi-supervised learning.

How Algorithms Are Being Put To Use 

Here's a look at a few of the machine learning solutions that FB is implementing to make its platforms safer for healthy conversations:

Deep Entity Classification (DEC)

Deep entity classification (DEC) is a machine learning framework designed to detect abusive accounts. Instead of relying on content alone or handcrafting features for abuse detection in posts, FB uses an algorithm called temporal interaction embeddings (TIEs). This supervised deep learning model captures static features around each interaction source and target, as well as temporal features of the interaction sequence.
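To make the idea concrete, here is a minimal sketch, in PyTorch, of a TIE-style classifier: each account is represented by its sequence of interactions, each interaction by a small feature vector, and a recurrent encoder summarises the sequence before an abuse score is produced. The architecture, dimensions, and data are illustrative assumptions, not Facebook's implementation.

# Minimal sketch (not Facebook's code) of a TIE-style abusive-account classifier:
# each account is a sequence of interactions, each with static source/target
# features and temporal features; a recurrent encoder pools the sequence.
import torch
import torch.nn as nn

class TIEClassifier(nn.Module):
    def __init__(self, feat_dim=16, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # abusive vs. benign

    def forward(self, interactions):          # (batch, seq_len, feat_dim)
        _, h = self.encoder(interactions)     # final hidden state summarises the sequence
        return torch.sigmoid(self.head(h[-1]))

# Toy usage: 8 accounts, each with 20 interactions of 16 numeric features.
model = TIEClassifier()
scores = model(torch.randn(8, 20, 16))        # abuse probability per account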

Countering Adversarial Images 

This work explored strategies for defending against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the classifier. Image transformations such as bit-depth reduction, JPEG compression, total variance minimisation, and image quilting are applied before the image reaches a convolutional network classifier. The work states that the strength of these defences lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent them.
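As a rough illustration, the snippet below applies two of those transformations, bit-depth reduction and JPEG compression, to an RGB image before it would be handed to a classifier. The parameter values and the pipeline are illustrative assumptions; the defended CNN itself is not shown.

# Hedged sketch of two input transformations evaluated in the work
# (bit-depth reduction and a JPEG round trip); values are illustrative.
import io
import numpy as np
from PIL import Image

def reduce_bit_depth(img: Image.Image, bits: int = 3) -> Image.Image:
    """Quantise each channel to 2**bits levels, discarding fine-grained perturbations."""
    arr = np.asarray(img).astype(np.float32)
    levels = 2 ** bits
    arr = np.round(arr / 255.0 * (levels - 1)) / (levels - 1) * 255.0
    return Image.fromarray(arr.astype(np.uint8))

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip through lossy JPEG, which tends to wash out adversarial noise."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# A defended pipeline would apply these before the CNN forward pass, e.g.:
# logits = cnn(preprocess(jpeg_compress(reduce_bit_depth(input_image))))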

Content Understanding With LASER


FB employs systems that use machine learning (ML) to scan a given sentence for hateful or bullying content. For this to work, ML models typically need to be trained on thousands of examples in a given language, and large training data sets remain hard to come by for many languages. So FB leverages LASER (Language-Agnostic SEntence Representations), an open-source toolkit that needs to be trained in only one language; the resulting model can then be applied to a range of languages without language-specific training data and without translating them. In short, “zero-shot transfer learning.”
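A hedged sketch of that workflow, using the community laserembeddings Python package (which wraps the LASER models and requires downloading them first) and scikit-learn: a classifier is trained on English sentence embeddings and then applied unchanged to another language. The texts, labels, and choice of classifier are illustrative.

# Zero-shot transfer sketch with LASER sentence embeddings.
# Assumes `laserembeddings` is installed and its models have been downloaded.
from laserembeddings import Laser
from sklearn.linear_model import LogisticRegression

laser = Laser()

# Train only on English-labelled examples...
en_texts = ["you are wonderful", "I will hurt you"]
labels = [0, 1]  # 0 = benign, 1 = abusive (toy labels)
X_en = laser.embed_sentences(en_texts, lang="en")
clf = LogisticRegression().fit(X_en, labels)

# ...then score sentences in another language with no extra training data,
# because LASER maps all languages into a shared embedding space.
es_texts = ["eres maravilloso", "te voy a hacer daño"]
X_es = laser.embed_sentences(es_texts, lang="es")
print(clf.predict(X_es))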


Feature Denoising for Improving Adversarial Robustness

This study suggests that adversarial perturbations on images lead to noise in the features constructed by image-classification networks. Motivated by this observation, researchers at Facebook developed new network architectures that increase adversarial robustness by performing feature denoising. These networks contain blocks that denoise the features using non-local means or other filters. When combined with adversarial training, the feature-denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings.
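The sketch below shows a simplified denoising block of that flavour in PyTorch: a dot-product non-local operation over the spatial positions of a feature map, followed by a 1x1 convolution and a residual connection. It is an illustrative approximation of the paper's design, not the authors' code.

# Simplified feature-denoising block: non-local operation + 1x1 conv + residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                      # x: (N, C, H, W) feature map
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)             # treat every spatial position as a feature vector
        attn = torch.bmm(flat.transpose(1, 2), flat)      # pairwise feature similarity
        attn = F.softmax(attn, dim=-1)                    # weights over all positions
        denoised = torch.bmm(flat, attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.conv(denoised)         # residual keeps the original signal

# Typical placement: after a residual stage of a ResNet backbone.
block = DenoisingBlock(256)
out = block(torch.randn(2, 256, 14, 14))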

Going Forward

Implementing the solutions mentioned above is not straightforward. They are labour-intensive, require deep domain expertise, and may not capture all the important information about the entity being classified.

Talking to IEEE Spectrum, Schroepfer explained that, for instance, state-of-the-art NLP models can be so computationally intensive that, once deployed, they could eat up an entire data centre. “So we take that state-of-the-art model, and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centres and run in production,” said Schroepfer.
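The interview does not name the specific compression technique, but knowledge distillation is one standard way to trade a little accuracy for a much smaller model. The sketch below shows the basic training step: a small student network is fit to the softened outputs of a large, frozen teacher. All architectures and numbers are illustrative.

# Illustrative knowledge-distillation step (not necessarily Facebook's method):
# blend the usual cross-entropy with a KL term against the teacher's soft targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy step: a small student learning from a larger, frozen teacher.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 2)).eval()
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(16, 128), torch.randint(0, 2, (16,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()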

When it comes to the digital space, championing free speech is easier said than done. An allegation or a report is not always credible, and making sure an algorithm doesn't take down a harmless post is tricky. Though AI is instrumental in keeping social media platforms safe, it still comes with risks of bias. Facebook assures its users that it is building best practices for fairness into every step of product development.

Check the full interview here.
