
Are DeepFakes Flawed?


As the world welcomed a new year on the last day of 2018, the President of Gabon, like many world leaders, delivered a speech wishing his citizens well for the year ahead. This celebratory speech, however, led to an attempted coup d'état by Gabon's military.

This was because someone thought that the video was a DeepFake!

The awkwardly shot video stoked suspicion that the President was either ill or dead and that a deepfake video had been created as a cover-up. A mere suspicion spread like wildfire and resulted in a coup attempt, Gabon's first since 1964.

Though the military takeover attempt failed and forensic experts certified the video as undoctored, one cannot help but think about the influence fake videos can have on the stability of a society.

Every innovation in AI has been questioned for its ill effects. GANs for image generation and GPT for text generation can be misused to a degree bounded only by the creativity of their users. New tools are therefore being developed, datasets are being open-sourced, and competitions are being held to avert such a 'DeepFake' disaster.

But is it possible to create a "universal" detector to spot fake generated images?

Researchers at UC Berkeley, in collaboration with Adobe, set out to explore exactly that question in a recent work.

To test this, they collected a dataset of fake images generated by 11 different CNN-based image generator models, including ProGAN, StyleGAN, BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, and cascaded refinement networks.

Quantifying Fakeness

Image via the paper by Sheng-Yu Wang et al.

Detecting whether an image was generated by a specific synthesis technique, say the authors, would require training a classifier on a dataset consisting of real images and images synthesized by the technique in question.

However, such an approach might fail on new data because of the underlying bias in the dataset. Worse still, a technique-specific detector is likely to become ineffective as generation methods evolve and the technique it was trained on becomes obsolete.
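To make this concrete, here is a minimal sketch, in PyTorch, of the kind of real-vs-fake classifier the authors describe: an ImageNet-pretrained ResNet-50 fine-tuned as a binary classifier on real images and images from a single generator. This is not the authors' released code; the directory layout and hyperparameters below are illustrative placeholders.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder expects one sub-directory per class,
# e.g. data/train/real and data/train/fake (hypothetical paths).
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)   # single "fakeness" logit
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        images = images.to(device)
        labels = labels.float().unsqueeze(1).to(device)  # 0 = real, 1 = fake
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

A detector trained this way on a single generator is exactly the kind of technique-specific classifier whose ability to generalize to other generators the paper sets out to measure.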

So, in order to find a unified solution that works against all state-of-the-art fake image generators, it is necessary to find a common vulnerable spot. One thing that connects all popular models is their reliance on convolutional layers (CNNs).

Therefore, it is natural to ask whether today's CNN-generated images contain common artefacts, such as some kind of detectable CNN fingerprints, that would allow a classifier to generalize to an entire family of generation methods rather than a single one.
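As an aside, one simple way to look at the kind of CNN fingerprints the authors allude to, though it is not the method used in this paper, is to inspect an image's Fourier spectrum, where upsampling artefacts from generators often appear as periodic, grid-like peaks. A minimal sketch, assuming a hypothetical input file:

import numpy as np
from PIL import Image

# Load a (hypothetical) generated image as a grayscale float array.
img = np.array(Image.open("generated.png").convert("L"), dtype=np.float32)

# 2-D FFT, shifted so the zero frequency sits at the centre of the spectrum.
spectrum = np.fft.fftshift(np.fft.fft2(img))
log_magnitude = np.log1p(np.abs(spectrum))   # compress dynamic range for viewing

# Save the log-magnitude spectrum; periodic peaks away from the centre can
# hint at generator upsampling artefacts.
out = (255 * log_magnitude / log_magnitude.max()).astype(np.uint8)
Image.fromarray(out).save("spectrum.png")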

To evaluate the model, a new dataset of CNN-generated images, the ForenSynths dataset, was created. It consists of synthesized images from 11 models, spanning tasks that range from unconditional image generation to image-to-image translation and face manipulation.

The model's performance on each dataset is then measured using average precision (AP), a threshold-less, ranking-based score that is not sensitive to the base rate of real and fake images in the dataset.
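A toy illustration of how AP is computed from ranked scores, using scikit-learn and made-up numbers:

from sklearn.metrics import average_precision_score

# Ground-truth labels (0 = real, 1 = fake) and the classifier's fakeness scores.
y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90]

# AP ranks images by score and needs no decision threshold; with half the
# images fake, a random ranking would score around 0.5.
print(average_precision_score(y_true, y_score))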

In one of the results, the researchers found a weak correlation between the classifier's "fakeness" scores and image quality in the BigGAN and StarGAN datasets: images with higher fakeness scores tend to contain more visible artefacts, which deteriorate the visual quality.

Since this correlation is not observed in the other datasets, observe the authors, it is more likely that the model learns features closer to low-level CNN artefacts than to visible quality defects.

Despite successfully demonstrating how generated images can be detected through such metrics, the authors admit that detecting fake images is just one small piece of the puzzle of combating the threat of visual disinformation. They hope their work will lead to effective solutions that incorporate a wide range of strategies, from technical to social to legal.

Key Takeaways

The authors of this work demonstrate that CNN-based generators are still imperfect and show how a few metrics can be used to identify fake images. Here are a few highlights from the paper:

  • Today’s CNN-generated fake images are still detectable.
  • This allows forensic classifiers to generalize from one model to another without extensive adaptation. 
  • The generator network never wins against the discriminator, owing to the difficulty of reaching a Nash equilibrium during training.
  • If that were to change, synthetic images could become completely indistinguishable from real ones.

Generative Adversarial Networks (GANs) have undoubtedly generated a huge amount of public interest and concern. The issue has already started to play a significant role in global politics, and if not tackled, deepfakes could soon become a tool for spreading misinformation and oppression.

Know more about this work here.
