As the world welcomed a new year on the last day of 2018, the President of Gabon, like many world leaders, delivered a speech wishing his citizens well. This celebratory speech, however, helped trigger an attempted coup d’etat by Gabon’s military.
This was because someone thought that the video was a DeepFake!
The awkwardly shot video created widespread suspicion that the President was either ill or dead, and that this deepfake video had been created as a cover-up. A mere suspicion spread like wildfire and resulted in a coup attempt, the first in Gabon since 1964.
Though the military takeover failed and forensic experts certified the video to be undoctored, one can’t help but think about the influence fake videos can have on the stability of a society.
Every innovation in AI has been questioned for its ill effects. The evolution of GANs for image generation or GPT for text generation can be misused to cause devastation bounded only by the creativity of their users. Therefore, new tools are being developed, datasets are being open-sourced, and competitions are being held to avert such a ‘DeepFake’ disaster.
But is it possible to create a “universal” detector to spot the fake generated images?
In this work, researchers at UC Berkeley, in collaboration with Adobe, explore exactly that.
To test this, they collected a dataset of fake images generated by 11 different CNN-based image generators, including ProGAN, StyleGAN, BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, and cascaded refinement networks.
Detecting whether an image was generated by a specific synthesis technique, say the authors, requires training a classifier on a dataset of real images and images synthesized by the technique in question.
However, such an approach might fail on new data because of underlying biases in the dataset. Worse, a technique-specific detector is likely to become ineffective as generation methods evolve and the technique it was trained on becomes obsolete.
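To make the per-technique baseline concrete, it can be pictured as a binary real-vs-fake classifier. The paper trains a deep network on images; the toy stand-in below compresses each image to a single made-up scalar feature and fits a hand-rolled logistic regression, so every name and number here is a hypothetical illustration, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: "real" images yield a low value of some
# feature, "fake" images from one specific generator yield a high value.
real = rng.normal(0.2, 0.05, size=(200, 1))
fake = rng.normal(0.8, 0.05, size=(200, 1))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = fake

# Logistic-regression detector trained by plain gradient descent.
w, b = np.zeros(1), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= 5.0 * (X.T @ (p - y)) / len(y)
    b -= 5.0 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (pred == y).mean()
```

The classifier separates this toy data easily, which is exactly the failure mode the authors warn about: nothing guarantees the learned boundary transfers to images from a generator it never saw.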
So, to find a unified solution covering all state-of-the-art fake image generators, it is necessary to find a common vulnerable spot. One thing that connects all popular models is the use of convolutional layers (CNNs).
Therefore, it is natural to ask whether today’s CNN-generated images contain common artefacts, such as some kind of detectable CNN fingerprint, that would allow a classifier to generalize to an entire family of generation methods rather than a single one.
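One often-cited candidate for such a fingerprint is the periodic, checkerboard-like pattern that upsampling layers in CNN generators can leave near the highest spatial frequencies. A rough, hypothetical way to probe for it is to measure how much of an image's energy sits in the high-frequency part of its Fourier spectrum; this is an illustrative sketch, not the method used in the paper:

```python
import numpy as np

def highfreq_energy(img):
    """Fraction of spectral power outside the central (low-frequency)
    half of the shifted 2-D Fourier spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    low = spec[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    return 1.0 - low / spec.sum()

# A checkerboard (a caricature of an upsampling artefact) concentrates
# power at the Nyquist frequency; a smooth ramp concentrates it near DC.
checker = np.indices((64, 64)).sum(axis=0) % 2
ramp = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))
```

On these two toy inputs, the checkerboard scores far higher than the ramp, which is the kind of low-level statistical cue a generalizing detector could latch onto.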
To evaluate the model, a new dataset of CNN-generated images, the ForenSynths dataset, was created, consisting of synthesized images from 11 models ranging from unconditional image generation to image-to-image translation.
The model’s performance on each dataset is then measured using average precision (AP), a ranking-based score that does not depend on choosing a single decision threshold.
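For reference, AP is the mean of the precision values obtained each time a fake image is correctly retrieved when the test images are ranked by the detector's score. A minimal, dependency-free sketch, with made-up labels and scores:

```python
def average_precision(labels, scores):
    """AP over a ranking by score.
    labels: 1 = fake (positive), 0 = real; scores: higher = more likely fake."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            ap += hits / rank  # precision at this point in the ranking
    return ap / max(hits, 1)

# e.g. four images ranked by detector score, three of them fake
ap = average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.6])
```

Here AP is (1 + 2/3 + 3/4) / 3 ≈ 0.806: the one real image ranked second drags down the precision of every fake retrieved after it.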
In the results, the researchers found a weak correlation between the “fakeness” score and visual quality in the BigGAN and StarGAN datasets: images with higher scores tend to contain more visible artefacts that degrade visual quality. Since this correlation is not observed in the other datasets, observe the authors, the model more likely learns low-level CNN artefacts rather than high-level visual flaws.
Despite successfully demonstrating how generated images can be detected through such metrics, the authors admit that spotting fake images is just one small piece of the puzzle of combating visual disinformation. They hope their work will lead to effective solutions that incorporate a wide range of strategies, from technical to social to legal.
The authors demonstrate how CNN-based GANs are still imperfect and how a few metrics can be used to identify fake images. Here are a few highlights from the paper:
- Today’s CNN-generated fake images are still detectable.
- This allows forensic classifiers to generalize from one model to another without extensive adaptation.
- The generator network never fully wins against the discriminator, owing to the difficulty of reaching a Nash equilibrium.
- Should the above change, synthetic images could become completely indistinguishable from real ones.
Generative Adversarial Networks (GANs) have undoubtedly generated a huge amount of public interest and concern. The issue has already started to play a significant role in global politics, and if not tackled, deepfakes could soon become a tool for proliferating misinformation and oppression.
Know more about this work here.
I have a master's degree in Robotics and I write about machine learning advancements. email:firstname.lastname@example.org