Earlier this year, a group of academics at the University of Washington found that satellite imagery from systems like Google Earth is vulnerable to deepfake spoofing. The researchers built a generative adversarial network (GAN) that produced more than 8,000 satellite images of Tacoma, Seattle and Beijing, most of them deepfakes.
Deepfakes have been a somewhat controversial corner of technology for some time now. On the one hand, the technique powers harmless short animations of Queen Elizabeth II dancing or saying something funny. On the other, it can be used to circulate fake satellite images that spread false information and instigate serious political instability.
Given these controversies, researchers have been trying to find ways to identify deepfakes. Recently, Facebook joined the effort.
Facebook’s AI wing has collaborated with researchers from Michigan State University (MSU) to create a way to reverse engineer deepfakes. The method analyses an AI-generated image to reveal underlying characteristics of the machine learning model that made it.
Previous studies in deepfake identification have focused either on deciding whether an image is real (deepfake detection) or on determining whether an image was produced by a model seen during training (image attribution). The challenge is that some deepfakes come from known models while others do not, and with new generative models appearing rapidly, the problem keeps growing.
Facebook’s initiative, led by MSU’s Vishal Asnani, attempts to solve this problem by looking at the unique traits of the AI model used to create a deepfake image. The method begins with image attribution and then goes on to uncover the properties of the model that created the image. These properties, known as hyperparameters, are tuned differently for each machine learning model. Together, they leave a unique fingerprint on the image that can be used to trace its source.
A similar technique is used in digital photography and forensic science, where device fingerprints identify the camera model used to take a photograph. In the same way, the fingerprints a generative model leaves on its images can be used to determine which model produced them. Tracing an image’s source would also allow investigators to follow up on illegal uses of deepfakes.
How does this work?
Source: Facebook AI
The method first runs a deepfake image through a Fingerprint Estimation Network (FEN) to predict the fingerprint left by the model that generated it. These predicted fingerprints are then used as inputs for model parsing, which estimates the hyperparameters of the generative model.
Through model parsing, researchers can estimate both the network architecture of the model used to generate the deepfake and its training loss function. Since each generative model has a distinct architecture and loss function, the method can derive critical insights into the model behind a given deepfake image.
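To make the two-stage idea concrete, here is a minimal, purely illustrative sketch in Python. The function names and the toy rules inside them are assumptions for illustration only; they are not Facebook's or MSU's actual FEN or parser, which are deep neural networks trained on real data.

```python
# Hypothetical sketch of the two-stage pipeline: fingerprint
# estimation followed by model parsing. All logic here is a toy
# stand-in, not the real FEN or hyperparameter parser.

def estimate_fingerprint(image):
    """Stage 1 (fingerprint estimation): isolate the subtle,
    model-specific residual the generator left in the image.
    Here we fake it as the difference from a local average."""
    blurred = [sum(image[max(0, i - 1):i + 2]) / len(image[max(0, i - 1):i + 2])
               for i in range(len(image))]
    return [pixel - b for pixel, b in zip(image, blurred)]

def parse_model(fingerprint):
    """Stage 2 (model parsing): map the predicted fingerprint to
    estimated hyperparameters, e.g. network depth and loss type.
    The thresholds below are arbitrary toy rules."""
    energy = sum(f * f for f in fingerprint)
    return {
        "est_num_layers": 8 if energy > 0.5 else 4,
        "est_loss_type": "adversarial" if energy > 1.0 else "pixel",
    }

image = [0.1, 0.9, 0.2, 0.8, 0.3]   # stand-in for a deepfake image's pixels
fingerprint = estimate_fingerprint(image)
properties = parse_model(fingerprint)
```

In the real system, both stages are learned networks and the parsed output covers many architecture and loss-function hyperparameters at once; the sketch only shows how one stage's output feeds the next.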
To test the approach, MSU assembled a fake-image dataset of 100,000 synthetic images produced by 100 publicly available generative models. Some of these images were released by the models’ authors as part of open-source projects, while the rest were generated by the researchers using the released code.
When tested on deepfake detection and image attribution, the FEN delivered state-of-the-art results. Testing model parsing is trickier because the task is novel, so the researchers created their own benchmark, called random ground-truth, by randomly shuffling each hyperparameter in the ground-truth set. Their approach performed significantly better than this baseline, indicating a solid and generalised correlation between generated images, their architecture hyperparameters and their loss-function types.
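The random ground-truth baseline can be pictured as shuffling each hyperparameter column independently, which breaks the pairing between images and their true models while keeping each hyperparameter's overall distribution intact. The sketch below is an assumed illustration of that shuffling idea, not the researchers' actual evaluation code.

```python
import random

# Illustrative "random ground-truth" construction: shuffle each
# hyperparameter column independently so any predictor scored
# against it can only do as well as chance.

ground_truth = [
    {"num_layers": 4,  "loss": "pixel"},
    {"num_layers": 8,  "loss": "adversarial"},
    {"num_layers": 16, "loss": "perceptual"},
]

def random_ground_truth(records, seed=0):
    """Return a copy of the records with every hyperparameter
    column shuffled independently across records."""
    rng = random.Random(seed)
    shuffled = [dict(r) for r in records]
    for key in records[0]:
        column = [r[key] for r in records]
        rng.shuffle(column)
        for rec, value in zip(shuffled, column):
            rec[key] = value
    return shuffled

baseline = random_ground_truth(ground_truth)
```

Because each column keeps the same set of values, a model that beats this baseline must be picking up a genuine link between an image and its generator's hyperparameters, not just exploiting how common each hyperparameter value is.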
In 2020, Facebook held a deepfake detection competition in which the winning algorithm could detect AI-manipulated videos only 65.18 per cent of the time. Catching generated images is difficult, and making matters worse, deepfakes have become increasingly believable, fuelling fear of the technology in many spheres. Earlier this year, a group of European Members of Parliament fell victim to what they thought was a deepfake attack, which turned out to be a prank that did not use generative AI at all.
An initiative to build a model that can operate even on unknown generative models is therefore not only clever but could also calm some of the confusion and fear surrounding deepfakes. For now, the work is still at the research stage and not ready for deployment, but tools like this may pave the way towards a better and safer future.