Every technological advancement has a good side and a bad side. In one of our earlier articles, we discussed how DeepFakes software created a controversial stir: miscreants rampantly produced fake pornographic videos by swapping in the faces of celebrities, not to mention spreading false propaganda about politicians through doctored videos. It clearly showed how badly a technological advancement can go wrong in an online world.
The algorithm behind these weird results is itself weird: its face-swapping is powered by deep learning, one of the most mysterious and fascinating sub-fields of machine learning. This article explores the eccentricity that lies in DeepFakes.
Eerie Side Of Face Swapping
What’s astonishing about DeepFakes is that it can be run with just a few lines of code, which are available on GitHub. All one has to do is collect image data from videos, train the model for face swapping, and then run the code on that data. Even though the training time is high, DeepFakes can swap faces convincingly. Hence, it might even be preferred over conventional video-editing software.
There’s also an application called FakeApp that runs the DeepFakes algorithm. The app is built on Google’s TensorFlow and works mostly on image-recognition tasks by experimenting with images. FakeApp’s creator is believed to be an anonymous software developer in the US, who says it was developed for creative purposes but was “misused”.
Stranger Than Fiction
Although opinions about DeepFakes are mixed, its inner workings are what make it stranger still. At its core are autoencoders, a type of neural network that compresses and reconstructs image data. These networks are all it takes to morph videos or images convincingly. Once the data is collated, the process is ready in moments, without breaking a sweat.
Coming to the actual idea, a DeepFakes pipeline usually has two autoencoders trained in parallel, each pairing an encoder with a decoder. The encoder performs dimensionality reduction on the input, while the decoder works on the reduced latent variables to produce an output that closely matches the original input.
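The encode-reduce-decode loop can be sketched with a toy linear autoencoder in NumPy. This is a deliberately simplified stand-in, not the actual DeepFakes implementation: real systems use deep convolutional networks on face images, whereas the "images" here are synthetic 64-dimensional vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "images" of 64 pixels that really live on an
# 8-dimensional subspace, so an 8-unit bottleneck can capture them.
# (A hypothetical stand-in for real face data.)
latents = rng.normal(size=(200, 8))
mixing = rng.normal(size=(8, 64))
X = latents @ mixing

# Linear autoencoder: the encoder reduces 64 -> 8 dimensions,
# the decoder reconstructs 8 -> 64.
W_enc = rng.normal(scale=0.1, size=(64, 8))
W_dec = rng.normal(scale=0.1, size=(8, 64))

def mse(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 1e-3
initial_error = mse(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                      # encode: dimensionality reduction
    X_hat = Z @ W_dec                  # decode: reconstruction
    grad = 2.0 * (X_hat - X) / len(X)  # gradient of the squared error
    W_dec -= lr * (Z.T @ grad)
    W_enc -= lr * (X.T @ (grad @ W_dec.T))
final_error = mse(X, W_enc, W_dec)
print(f"reconstruction MSE: {initial_error:.3f} -> {final_error:.3f}")
```

Training drives the reconstruction error down, which is the whole job of an autoencoder: force the input through a narrow bottleneck and still get the input back out.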
Based on this, DeepFakes autoencoders are trained to reconstruct the faces of two different persons. Consider two fictional persons, Adam and Ben. The first autoencoder is trained on Adam’s images, and the second on Ben’s. Once both are optimised for reconstruction, the trick lies in sharing a single encoder between the two systems, so that any image of Adam can be encoded and then decoded with Ben’s decoder, producing Ben’s face.
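A minimal sketch of that shared-encoder trick, again using toy linear networks and synthetic "identities" rather than real faces (the names, dimensions, and subspace construction below are illustrative assumptions, not the production DeepFakes architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
N, DIM, LATENT = 200, 64, 8

# Two fictional identities: each person's "images" lie on their own
# 8-dimensional subspace of a 64-dimensional pixel space.
mix_adam = rng.normal(size=(LATENT, DIM))
mix_ben = rng.normal(size=(LATENT, DIM))
X_adam = rng.normal(size=(N, LATENT)) @ mix_adam
X_ben = rng.normal(size=(N, LATENT)) @ mix_ben

# One shared encoder, one decoder per identity.
W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_adam = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_ben = rng.normal(scale=0.1, size=(LATENT, DIM))

lr = 1e-3
for _ in range(1000):
    Z_a, Z_b = X_adam @ W_enc, X_ben @ W_enc
    err_a = Z_a @ W_dec_adam - X_adam
    err_b = Z_b @ W_dec_ben - X_ben
    # Each decoder learns only its own face; the encoder learns from both.
    W_dec_adam -= lr * (Z_a.T @ err_a) / N
    W_dec_ben -= lr * (Z_b.T @ err_b) / N
    W_enc -= lr * (X_adam.T @ (err_a @ W_dec_adam.T)
                   + X_ben.T @ (err_b @ W_dec_ben.T)) / N

# The swap: encode Adam with the shared encoder, decode with Ben's decoder.
swapped = (X_adam @ W_enc) @ W_dec_ben

def off_subspace(Y, basis):
    # Fraction of Y's energy outside the row space of `basis`.
    proj = Y @ np.linalg.pinv(basis) @ basis
    return np.linalg.norm(Y - proj) / np.linalg.norm(Y)

print("distance to Ben's subspace:", off_subspace(swapped, mix_ben))
print("distance to Adam's subspace:", off_subspace(swapped, mix_adam))
```

Because Ben's decoder has only ever learned to produce Ben-like outputs, anything pushed through it lands far closer to "Ben's" subspace than "Adam's" — which is exactly why routing Adam's encoding through Ben's decoder yields a face swap.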
However, not all DeepFakes architectures are the same. It depends on the output, training time and other factors in the project.
What is stranger here is that images can be seamlessly morphed regardless of their features. Autoencoders can take virtually any facial image and convert it into the desired face. Even if someone else’s face replaces Adam’s as the input, the system still gradually learns to morph it into Ben’s face. Though this may take longer, DeepFakes will eventually get close to the quality of the Adam-to-Ben reconstruction.
Setbacks With DeepFakes
As seen above, it is autoencoders that do the job of swapping faces across video frames. But these networks compress the image data, and the resulting quality loss leads to blurry output. Also, if the face is not oriented towards the camera and points in a different direction, face swapping is almost impossible.
Other factors include the computing cost and the training data. DeepFakes requires a lot of data for training as well as a reasonable amount of computing power to run. Even though it is fairly expensive, DeepFakes can still be carried out, and it leaves room for plenty of troubling possibilities.
Despite all of this, considerable research is under way to curb the dangers of DeepFakes content spreading across the Internet. Nonetheless, using it for non-malicious purposes, such as in advertising and movie production, can produce creative wonders.