Unravelling Deep Model Artefacts for Deepfake Videos Detection

  • This is one of the top voted thesis papers from upGrad's online working professional programs in partnership with one of the UK's leading universities.

A deepfake detection model that reliably identifies fake videos is the need of the hour. Today, social media platforms like Instagram and Facebook use AI (Artificial Intelligence) and ML (Machine Learning) techniques to detect fake information.

However, these techniques are far less effective on videos. The new model helps resolve this shortcoming, aiding digital platforms in combating the spread of misinformation.

What is a Deepfake?

The term "deepfake" comes from an Artificial Intelligence (AI) technology called "deep learning."

Until a few years ago, using deepfakes in videos was impossible. 

However, advancements in AI have made them easy to generate. Coupled with social media, deepfake videos have snagged headlines globally. The motivation behind this research was to find a new way to use ML and AI to detect deepfake videos. Social media platforms could then use the model to discourage the sharing of fake and unreal content on their platforms, thus minimizing its harmful impact.

Why do we need SimpleCNN?

The internet has long been a place to disseminate timely information. In recent years, however, it has also become responsible for a new disease of sorts: misinformation. Termed an "infodemic" by the WHO (World Health Organization) and other internet pundits, misinformation and fake news are a bane at a time when the world is facing a deadly pandemic.

With the advent of deepfakes in 2017, the use of misinformation to spread confusion took on a new, smarter face. Deepfakes have discredited celebrities, put misleading speeches in politicians' mouths to create mass hysteria, defrauded organizations of huge sums, and even been exploited by terrorist outfits.

Fake news during the pandemic snowballed into a fatal weapon because of its global scale. So widespread was its effect that the WHO started a webinar series, bringing together prominent medical researchers, scientists, health professionals, and journalists to educate the public about fake news in health and medicine. In addition, tech and social media companies are rapidly finding and adopting ways to combat misinformation on their platforms.

Deepfakes now demand robust detection technology. Even though researchers have proposed several detection methods, these struggle with highly compressed (low-quality) videos. Developing an efficient model that can identify deepfakes in all content, including compressed videos, is the need of the hour. This project succeeded in introducing a more efficient and effective model to help combat the infodemic.

Challenges of building a new Deepfake detection model

Data preparation was a significant challenge. Before the deepfake detection model could be run, it had to be trained, and the new model needed to distinguish between significant and insignificant features in fake and real videos.

Furthermore, the model's primary aim was to capture face-related changes in videos. Detection is more efficient when faces are first extracted from each frame and then compared, but extracting and aligning faces across frames increased training time. Therefore, this step was moved into the data-generation process, saving time and effort during training and enhancing the model's performance.
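The data-generation step described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual pipeline: the face detector is passed in as a stand-in (a real system might use a Haar cascade or an MTCNN model), and frame sampling simply picks every n-th frame.

```python
def sample_frame_indices(total_frames, step):
    """Pick every `step`-th frame index from a video for face extraction."""
    if step <= 0:
        raise ValueError("step must be positive")
    return list(range(0, total_frames, step))

def extract_faces(frames, detect_face):
    """Run a face detector over sampled frames, keeping only frames
    where a face was found. `detect_face` is a stand-in for a real
    detector and should return a bounding box or None."""
    faces = []
    for frame in frames:
        box = detect_face(frame)
        if box is not None:
            faces.append((frame, box))
    return faces
```

Doing this once, at data-generation time, means the training loop never pays the face-detection cost again on every epoch — which is the efficiency gain the paragraph above describes.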

However, objectives are easier to attain in an experimental environment. Several deepfake generation techniques were applied to examine the model further, and the model faced challenges in correctly identifying fakes created using the FaceForensics++ data. It will need to be continually upgraded with the latest techniques to keep up with deepfake generation in the real world.

The research used a model architecture that had not been extensively applied to deepfake video detection before, which meant there was limited prior work to compare against when improving the model.

How the Deepfake detection model (SimpleCNN) works

Methodology

An existing deep learning technology, the Convolutional Neural Network (CNN), was previously used to classify and detect tampered images. CNNs have brought major gains in accuracy and scalability to object recognition by learning to identify patterns in images. Before CNNs, extracting and identifying objects in images was a time-consuming, largely hand-engineered task.

As renowned as CNNs are for image recognition, they have not been widely applied to identifying tampered videos. This research presents a new system that uses a multi-layered CNN to identify facial features in each frame and trains the model to categorize the video as real or deepfake. The process is simple and can automatically detect tampered videos.
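The building blocks a multi-layered CNN stacks can be made concrete with a pure-Python sketch of the two core operations: convolution (sliding a learned kernel over the image to score local patterns) and pooling (downsampling the resulting feature map). This illustrates the mechanics only — it is not the SimpleCNN architecture, whose exact layer configuration is not given in this article.

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (strictly, cross-correlation, as CNNs use).
    Slides `kernel` over `image` and sums the elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Nonlinearity: keep positive responses, zero out the rest."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]
```

A detection CNN repeats conv → ReLU → pool several times, then feeds the final feature map into a small classifier head that outputs a real-vs-fake score.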

To further test the model's effectiveness, a video was generated using a technique that had not been used at the training stage. The purpose was to understand the model's reliability in the real world.
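One simple way to realise this kind of held-out-generator test is a leave-one-method-out split: every manipulation technique but one is seen in training, and the remaining one appears only at evaluation time. The sketch below assumes videos are grouped by manipulation-method name; the method names in the test are illustrative, not the paper's actual split.

```python
def leave_one_method_out(videos, held_out_method):
    """Split labelled clips so one manipulation technique is seen only at
    test time, probing generalisation to unseen deepfake generators.
    `videos` maps a manipulation-method name to a list of clips."""
    train, test = [], []
    for method, clips in videos.items():
        (test if method == held_out_method else train).extend(clips)
    return train, test
```

A model that scores well only on methods it trained on is memorising generator artefacts; holding one method out measures whether it has learned something more general about tampering.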

Innovative Aspect of the Research

The key focus was to build a SimpleCNN model that works for social media platforms. Videos shared on such sites are generally compressed, losing detail that a high-resolution video retains, which makes identifying key facial features difficult. This is why there has been little research on detecting deepfakes in compressed videos. This paper aimed to be one of the first to take a significant step in that direction by improving on existing technology.

FaceForensics++, the dataset used in this research, is a state-of-the-art benchmark consisting of 1000 original video sequences that have been manipulated using automated methods. The study used the highly compressed (low-resolution) versions of these videos.

Outcomes of the Research

The SimpleCNN model effectively improved the accuracy of deepfake detection by an existing advanced model. It achieved an accuracy of 86%, which was at least 3% higher than the comparative model – a significant improvement in AI technology for detecting deepfake compressed videos. 

SimpleCNN was also compared to other models with published results on compressed videos from the FaceForensics++ data. The table below shows how the new model outperforms the others:

Model                     Accuracy (%)
XceptionNet Full Image    70.52
Steg. Features + SVM      55.98
Cozzolino et al.          58.69
Bayar and Stamm           66.84
Rahmouni et al.           61.18
MesoNet                   70.47
XceptionNet Full          81
SimpleCNN                 86.33

Thus, the project produced a well-trained deep learning model that successfully identifies fakes in compressed videos, something very few models have achieved so far.

Solving real challenges with SimpleCNN

This research successfully developed a deep model artefact, SimpleCNN, for deepfake video detection. With regular refinement, the model can be used in the real world to solve real problems and create a positive impact.

Below are notable industrial applications of SimpleCNN:

  • The model can help in several sectors, mainly social media. Platforms like Facebook, Instagram, and Twitter already use AI and ML algorithms to run their newsfeeds. By integrating SimpleCNN, they could flag fake videos more effectively. 
  • Identified deepfake videos can be tagged so that users are alerted that the content they are watching may not be genuine. 
  • If the model's confidence is high, social media platforms can also choose to block the video entirely, nipping the spread of misinformation in the bud. 
  • And not just videos: the model's use can be extended to images. By deploying deepfake detection at scale, social media platforms can become more reliable information mediums.
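The tag-or-block policy sketched in the bullets above amounts to mapping the detector's confidence score to a platform action. A minimal sketch follows; the threshold values are illustrative and not taken from the paper.

```python
def moderation_action(fake_probability, tag_threshold=0.5, block_threshold=0.95):
    """Map a detector's fake-probability score to a platform action.
    Thresholds here are illustrative assumptions, not from the research."""
    if fake_probability >= block_threshold:
        return "block"   # very confident it's fake: stop the spread entirely
    if fake_probability >= tag_threshold:
        return "tag"     # likely fake: warn viewers the content may not be genuine
    return "allow"
```

In practice a platform would tune the two thresholds against its tolerance for false positives, since blocking genuine videos is costlier than tagging them.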

________________________________________

Gagan Tewari is an upGrad learner and, as part of his program, developed the thesis report titled Unravelling Deep Model Artefacts for Deepfake Videos Detection.

Copyright Analytics India Magazine Pvt Ltd