
These 7 Entrepreneurs Are Using NLP To Counter Fake News Problem In India

Team MetaFact: (From left) Prateek Singh, Praveen Anasurya and Sagar Kaul

India in particular has seen several instances of fake news that have led to internal strife. But attempts are also underway to solve the problem with AI. One notable effort comes from a Delhi-based startup called MetaFact, which is on a mission to validate news content using AI.

MetaFact Tackles The Fake News Menace

MetaFact is a startup of seven members, three with a journalism background, three with a tech background and one researcher, whose aim is to counter the spread of fake news across the internet.

“It was September 2016 when I received a message from Praveen and Prateek asking if we could have a talk about ways to work on UGC [user-generated content]. During this first meeting, we all agreed that validation was a crucially important part of providing better UGC to newsrooms,” says Sagar Kaul, one of the founding members of MetaFact, who has an editorial background.

On the editorial side are Soumadip Dey, Pradarshi and Sagar Kaul, while the engineering team includes Praveen Anasuya, Prateek Singh and Minho Ryu from Seoul; Samir Krishnamurty comes from a research background. Their brainstorming sessions took final shape in 2017 as MetaFact, when the team created its first blueprint design of the tool.

MetaFact Workflow

MetaFact is a tool that uses Natural Language Processing (NLP) to understand the context of news articles, blog posts and social media posts, and then performs cognitive operations, including bucketing, indexing and trust scoring, to provide intelligent access to the data.


  • It filters ‘claim’-type sentences out of a sea of content across the web, spanning interrogative, declarative and other sentence structures, for investigative journalists, enabling them to investigate and/or debunk claims at scale (a minimal sketch of this claim-filtering and trust-scoring step follows this list).


  • For the filtered claims, each with its trust score, a social virality index is provided so that journalists can start debunking the claims with low trust scores. Concept highlights, semantic analysis and extractive summarization help journalists dig deeper, depending on what they are looking for.


  • The output of this search is rendered as an enriched entity-relationship (E-R) visual graph, which covers angles one might miss in a purely text-based interface. Exportability and compatibility are given priority: all graphs generated by the tool are exported in the easily portable .svg format.
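
MetaFact has not published its implementation, so the snippet below is only a minimal, illustrative sketch of the claim-filtering and trust-scoring step described above. The entity-and-number heuristic, the trust_score weights and the SOURCE_REPUTATION table are assumptions made here purely for demonstration; spaCy with its small English model is assumed to be installed.

```python
# Illustrative sketch only: a crude claim filter plus a toy trust score.
# The heuristics, weights and SOURCE_REPUTATION values are invented for this
# example and are not MetaFact's implementation.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes: pip install spacy && python -m spacy download en_core_web_sm

# Hypothetical per-source reputation priors (0..1); a real system would learn
# these from historical fact-check outcomes.
SOURCE_REPUTATION = {"established-wire.example": 0.9, "unknown-blog.example": 0.3}

def extract_claims(text):
    """Return declarative sentences that mention a named entity and a figure,
    used here as a rough proxy for 'checkable claim' sentences."""
    doc = nlp(text)
    claims = []
    for sent in doc.sents:
        if sent.text.strip().endswith("?"):
            continue  # skip interrogative sentences
        has_entity = any(ent.label_ in {"PERSON", "ORG", "GPE"} for ent in sent.ents)
        has_figure = any(tok.like_num for tok in sent)
        if has_entity and has_figure:
            claims.append(sent.text.strip())
    return claims

def trust_score(source, corroborating_sources):
    """Toy trust score: blend a source prior with how many independent sources
    carry a similar claim (capped at 5)."""
    prior = SOURCE_REPUTATION.get(source, 0.5)
    corroboration = min(corroborating_sources, 5) / 5
    return round(0.6 * prior + 0.4 * corroboration, 2)

if __name__ == "__main__":
    post = ("A BBC survey says the BJP will win 135 of 234 seats in Karnataka. "
            "Do you believe it?")
    for claim in extract_claims(post):
        print(claim, "->", trust_score("unknown-blog.example", corroborating_sources=0))
```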

How NLP Underpins The Monitoring Platform

Based on inputs from different datasets, such as location, entities are created and then used to check the output. By refining that output, the team lets the tool improve further. They also keep changing the scenario so that the NLP tool is constantly fed new datasets to learn from.
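
As a rough illustration of the entity-creation step described above (MetaFact has not published its actual pipeline), the snippet below pulls named entities out of incoming text and buckets them by type so that later checks can be run against the relevant reference dataset. spaCy's small English model is assumed as a stand-in for whatever NER the platform uses.

```python
# Toy entity-bucketing step: extract named entities and group them by type
# (location, person, organisation). Illustrative only, not MetaFact's code.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

def bucket_entities(text):
    buckets = defaultdict(set)
    label_map = {"GPE": "location", "PERSON": "person", "ORG": "organisation"}
    for ent in nlp(text).ents:
        if ent.label_ in label_map:
            buckets[label_map[ent.label_]].add(ent.text)
    return {kind: sorted(names) for kind, names in buckets.items()}

print(bucket_entities("A BBC survey claims the BJP will win 135 seats in Karnataka."))
```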

The index objects of both sets, newly minted sources and established ones, are updated by finding stories with a similar context through Common Crawl, SNS streams and other feeds. This is what gives the source monitor its scale.
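
A minimal sketch of that "find similar-context stories" idea is shown below, assuming text snippets have already been pulled from feeds such as Common Crawl. The TF-IDF vectoriser, the 0.2 threshold and the sample stories are illustrative assumptions rather than MetaFact's actual matching logic.

```python
# Toy similar-context matching: compare an incoming story against an index of
# already-seen stories using TF-IDF cosine similarity. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

indexed_stories = [
    "BBC survey predicts 135 seats for BJP in Karnataka election",
    "Video shows 2016 pro-Pakistan bike rally in Baramulla",
]
new_story = "Fake BBC poll claims BJP will sweep Karnataka assembly polls"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(indexed_stories + [new_story])

# Compare the incoming story (last row) against every indexed story.
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for story, score in zip(indexed_stories, similarities):
    if score > 0.2:  # arbitrary threshold for "similar context"
        print(f"link new story to: {story!r} (similarity={score:.2f})")
```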

Detecting fake information on platforms like WhatsApp and Telegram is far more difficult than on Facebook and Twitter because of their end-to-end encryption. MetaFact addresses this by building a community called Metafixers. “By building these communities and keeping them engaged, we develop a bulwark against the flood of false information that dark social propagates. Our current strategies utilize media literacy programs, collaborations, and awareness creation,” says Sagar.

Identifying Fake Videos With Deepfakes

To tackle the problem of fake videos, MetaFact has developed a tool called Deepfakes. “The basic approach to identifying deepfakes is to detect artifacts generated when pictures are manipulated. With this method, we can use a two-phase algorithm: extract the facial area and then detect artifacts on it.” This technology has not been deployed so far and is still a work in progress.
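
Since the tool itself is unreleased, the snippet below is only a hedged illustration of the quoted two-phase idea: detect the face region first, then score it for artifacts. The artifact check here, a Laplacian-variance sharpness comparison, is a crude stand-in chosen for this example; a production deepfake detector would use a trained classifier. OpenCV and a sample frame file (frame.jpg) are assumed.

```python
# Illustrative two-phase sketch: phase 1 extracts face regions, phase 2 scores
# each face crop for possible manipulation artifacts. Not MetaFact's tool.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_artifact_scores(frame_bgr):
    """Return one rough artifact score per detected face in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    scores = []
    for (x, y, w, h) in faces:
        crop = gray[y:y + h, x:x + w]
        # Low Laplacian variance in the face relative to the rest of the frame
        # can hint at the smoothing/blending artifacts face swaps leave behind.
        face_sharpness = cv2.Laplacian(crop, cv2.CV_64F).var()
        frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        scores.append(face_sharpness / (frame_sharpness + 1e-6))
    return scores  # values well below 1.0 are worth a closer look

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")  # hypothetical extracted video frame
    if frame is not None:
        print(face_artifact_scores(frame))
```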

They also have another approach, based on eye blinking, which was introduced at theconversation.com. Since there are far more pictures of faces with eyes open than with eyes closed, a deepfake model trained on such data tends to generate more frames with the eyes open; in other words, the synthesised face blinks less often than usual. On this principle, an ML model can be trained to detect eye blinks and compare how often the subject blinks. Talking about the challenges MetaFact has faced, Sagar says the difficulties in fact-checking stories are fairly similar to those of a standard piece of investigative journalism.
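
A rough sketch of that blink-frequency heuristic follows. The eye-state detection itself (for example an eye-aspect-ratio or CNN eye-state model) is abstracted away into a list of per-frame booleans, and the baseline of roughly 15 blinks per minute and the 0.3 cutoff are assumptions made only for illustration.

```python
# Toy blink-rate check: count open->closed transitions over a clip and flag
# clips whose subject blinks far less often than a typical human.
def blink_rate_per_minute(eye_closed_flags, fps):
    """eye_closed_flags: one boolean per video frame from some eye-state model."""
    blinks = 0
    previously_closed = False
    for closed in eye_closed_flags:
        if closed and not previously_closed:
            blinks += 1  # an open -> closed transition counts as one blink
        previously_closed = closed
    minutes = len(eye_closed_flags) / (fps * 60)
    return blinks / minutes if minutes else 0.0

TYPICAL_BLINKS_PER_MINUTE = 15  # rough human average, assumed for this example

def looks_suspicious(eye_closed_flags, fps):
    """Flag clips that blink at under 30% of the typical human rate."""
    return blink_rate_per_minute(eye_closed_flags, fps) < 0.3 * TYPICAL_BLINKS_PER_MINUTE

if __name__ == "__main__":
    # 10 seconds at 25 fps with no blinks at all -> suspicious.
    print(looks_suspicious([False] * 250, fps=25))
```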

Some of the recent use cases are:

1. BBC-linked poll predicting a BJP win: A WhatsApp message/Facebook status contained a purported link to a BBC survey claiming that the Bharatiya Janata Party (BJP) would win 135 seats in the upcoming Karnataka assembly elections on 12 May. The message carried a mathematical figure that was evidently incorrect: it mentioned a total of 234 seats, when Karnataka has 224 constituencies. Moreover, the BBC link in the message did not lead to a genuine BBC website, and the BBC later stated that the survey was fake and not conducted by them.

2. Baramulla residents celebrating Pakistan’s Independence Day: A video on social media claimed that residents of Baramulla in Kashmir had celebrated Pakistan’s Independence Day this year. However, the video was from 2016 and showed a pro-Pakistan bike rally.

MetaFact Chatbot

The MetaFact chatbot is an integral part of MetaFact’s media literacy and Metafixer strategy. The chatbot will also be trained to hold a conversation with the user and answer simple questions. It will be aligned with the AI tool so that if a user receives a rumour-based or questionable message on dark social platforms like WhatsApp or Telegram, they can forward it to the chatbot; if the rumour or claim has already been fact-checked, the chatbot will display that result in the context of the specific entities mentioned in the message.
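
As a hedged illustration of that lookup flow (the chatbot is still being built and its internals are not public), the sketch below matches a forwarded message against claims that have already been fact-checked and returns the stored verdict. The in-memory FACT_CHECKS store, the difflib fuzzy matching and the 0.6 cutoff are choices made only for this example.

```python
# Toy fact-check lookup for a forwarded dark-social message. Illustrative only.
from difflib import SequenceMatcher

FACT_CHECKS = {
    "bbc survey says bjp will win 135 seats in karnataka":
        "FALSE - BBC confirmed it conducted no such survey.",
    "baramulla residents celebrated pakistan independence day":
        "MISLEADING - the video is from a 2016 pro-Pakistan bike rally.",
}

def lookup_fact_check(forwarded_message, threshold=0.6):
    """Return the stored verdict for the closest already-checked claim, if any."""
    text = forwarded_message.lower().strip()
    best_claim, best_score = None, 0.0
    for claim in FACT_CHECKS:
        score = SequenceMatcher(None, text, claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_score >= threshold:
        return FACT_CHECKS[best_claim]
    return "No existing fact-check found; claim queued for review."

if __name__ == "__main__":
    print(lookup_fact_check("BBC survey says BJP will win 135 seats in Karnataka!"))
```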

Disha Misal

Found her way to Data Science and AI through her fascination for technology. Likes to read and watch football, and has an enormous amount of affection for astrophysics.
