Can Artificial Intelligence Really Flag Fake News? New Research Says No

Fake news is often deployed in social media campaigns aimed at spreading propaganda. The problem has intensified over the last five years as digital journalism has gained strength, and experts consider fake news a major challenge because of how quickly false stories are distributed and consumed on social media.

Companies like Facebook have hired thousands of employees just to curb the spread of fake news on their platforms. Usually, fake news is moderated manually, with the veracity of a claim checked against its original source. But the volume of fake news is often so large that human moderators cannot verify every item.

Automating the detection of fake news has been a challenge for a long time, and current detectors are not yet accurate enough to take over the task. At the same time, advances in language modelling make fake news easier to generate: AI-based text generators such as OpenAI’s GPT-2 can produce text that reads as if written by a human, which attackers can use to create fake news.

How Attackers Can Still Beat Language Models To Spread Fake News

Research is still being done on how to detect fake news without manual intervention. One accepted approach is stylometry-based provenance, which traces a text’s writing style back to its original source. Earlier, researchers from Harvard University and the MIT-IBM Watson AI Lab built an AI-powered tool to recognise AI-generated text. Known as the Giant Language Model Test Room (GLTR), the system works out whether a particular piece of writing was produced by a language model, i.e. a computer, or by a human. With AI and natural language generation models being used to fabricate news, GLTR can help a non-expert reader tell machine-generated text from human-written text.
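GLTR’s core signal is how predictable each token is to a language model: machine-generated text tends to consist of tokens the model ranks highly, while human writing contains more surprises. A minimal sketch of that per-token ranking, using the public GPT-2 checkpoint via the Hugging Face transformers library (an illustration of the idea, not GLTR’s own code):

```python
# GLTR-style check (sketch): for each token, find its rank under GPT-2's
# next-token distribution. Machine-generated text tends to have mostly
# low ranks (very predictable tokens); human text has more high-rank ones.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # [1, seq_len, vocab]
    ranks = []
    for pos in range(ids.shape[1] - 1):
        next_id = ids[0, pos + 1]
        # Rank of the actual next token among all vocabulary items.
        order = torch.argsort(logits[0, pos], descending=True)
        ranks.append((order == next_id).nonzero().item() + 1)
    return ranks

ranks = token_ranks("The quick brown fox jumps over the lazy dog.")
# A crude GLTR-like signal: fraction of tokens in the model's top 10.
print(sum(r <= 10 for r in ranks) / len(ranks))
```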

However, the latest research from MIT, led by Tal Schuster, points to a fundamental problem with GLTR: a provenance-based system works only under the assumption that legitimate text is written by humans and fake text is generated by machines, and never vice versa. The complication is that legitimate text may be auto-generated by much the same process as fake text, so even factual text produced by a machine is flagged as fake. In addition, the research shows that attackers can manipulate human-written text so that it appears machine-generated, and the other way around.

The researchers used a GPT-2 model to corrupt human-written text so that its meaning changes. More advanced attackers can deploy the generator to create fake content while keeping minimal distributional differences from a genuine source: by using the probabilities assigned by the language model, they can find minimal edits that alter the factual content of a news text while leaving its style intact. According to the researchers, safeguards that detect auto-generated text remarkably well can nevertheless be tricked by such generator-based fake news.
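The paper’s attack is more involved, but the intuition can be sketched: among several small edits that invert a claim’s meaning, an attacker can use the language model’s own probabilities to keep the edit that stays most fluent, and therefore statistically closest to genuine text. The candidate edits and the scoring below are illustrative assumptions, not the researchers’ implementation:

```python
# Illustrative sketch (not the paper's method): score meaning-flipping
# edits of a genuine sentence by GPT-2 fluency and keep the most natural
# one, so the corrupted claim stays close to human-written text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def neg_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)          # labels=ids gives the LM loss
    return out.loss.item()                    # mean NLL per token

original = "The company reported record profits in 2018."
# Hand-written candidate edits that invert the claim (hypothetical examples).
candidates = [
    "The company reported record losses in 2018.",
    "The company did not report record profits in 2018.",
    "The company denied reporting record profits in 2018.",
]

# The attacker keeps whichever inverted claim GPT-2 finds most fluent.
best = min(candidates, key=neg_log_likelihood)
print(best, neg_log_likelihood(best))
```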

Bias Found In Current Datasets Used To Fact Check Information

The researchers used the biggest fact-checking dataset, Fact Extraction and Verification (FEVER), to develop a new detection model. But the team found that the model trained on FEVER suffered from errors because the dataset contains human bias: genuine entries are largely written as positive statements and false entries as negative statements. For example, phrases like ‘did not’ and ‘yet to’ appear mostly in false statements.
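A bias of this kind is easy to quantify by counting how often a giveaway phrase occurs under each label. The toy claims below are made up for illustration; the same counting could be run over the real FEVER claim/label pairs:

```python
# Quantify "giveaway phrase" bias: how often do phrases like "did not"
# appear in REFUTED vs SUPPORTED claims? Toy data shown for illustration.
from collections import Counter

claims = [
    ("Paris is the capital of France.", "SUPPORTED"),
    ("Einstein did not win a Nobel Prize.", "REFUTED"),
    ("The film is yet to be released.", "REFUTED"),
    ("Water boils at 100 degrees Celsius at sea level.", "SUPPORTED"),
]
giveaways = ["did not", "yet to", "never"]

counts = Counter()
for text, label in claims:
    for phrase in giveaways:
        if phrase in text.lower():
            counts[(phrase, label)] += 1

for (phrase, label), n in sorted(counts.items()):
    print(f"{phrase!r} in {label}: {n}")
# If a phrase appears almost only under one label, a classifier can
# "cheat" by keying on the phrase instead of checking the evidence.
```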

To get around the bias, the researchers created new data by debiasing FEVER, but found that the detection model’s accuracy dropped from 86 to 58 per cent. The drop suggests that the model had over-emphasised the language of the claims themselves without looking for external proof of their validity, and the researchers say much more work is needed to train AI on unbiased data.
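One way to see whether a model is leaning on claim wording alone is to train a claim-only baseline that never sees any evidence; if it scores far above chance, the labels are leaking through the phrasing. A minimal sketch with scikit-learn on made-up claims (an assumption for illustration, not the researchers’ setup):

```python
# Claim-only baseline sketch (assumes scikit-learn; toy data, not FEVER):
# train a classifier on claim text alone, with no evidence at all.
# High accuracy here would mean labels leak through the claim wording.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_claims = [
    "The senator did not attend the vote.",
    "The album is yet to be certified gold.",
    "The bridge was completed in 1932.",
    "The vaccine was approved for general use.",
]
train_labels = ["REFUTED", "REFUTED", "SUPPORTED", "SUPPORTED"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_claims, train_labels)

# The baseline guesses labels from wording alone, without any evidence.
print(clf.predict(["The mayor did not sign the bill."]))
```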

Overview

The researchers’ simulated attacks showed that the provenance approach fails to defend against fake news. Tal Schuster, an MIT student and lead author of the research, said that assessing the veracity of a text, rather than relying solely on its style or source, is a better way to fight fake text. In other words, it is more important to detect the factual falseness of a text than to determine whether it was generated by a machine or a human. The researchers advise extending datasets and creating a benchmark that measures a piece of content’s veracity across multiple human-machine interactions.
