Deepfake, one of the most controversial topics in AI, has been gaining popularity through face-detection and face-swapping apps around the globe. In our recent articles, we discussed how Deepfake technology is being used to create talking-head videos, swap the faces of famous celebrities and much more.
Recently, the AI-powered photo manipulation and editing app FaceApp has been making waves on the internet. The app is said to use the most advanced neural portrait editing technology, which allows users to make the person in a photo look younger or older. However, there have been several privacy issues surrounding FaceApp; for example, FaceApp on iOS appeared to override settings even when a user had denied access to their camera roll.
After FaceApp, another Deepfake face-swapping application, ZAO, created a buzz over privacy concerns. ZAO is a new Chinese app which offers a number of photography features for Android smartphones. According to sources, the app went viral as soon as it was uploaded to app stores.
It’s time for a thread about #ZAO, the new Chinese app which blew up since Friday. The app is accessible only to Chinese people for the moment but I managed to get an account 😉
This "AI facial" app allows you to add your face on predefined clip.
— Elliot Alderson (@fs0c131y) September 2, 2019
The app allows users to choose from a number of videos provided by the app and insert their faces into them. It also allows a user to replace faces found in photographs and GIFs with their own uploaded images.
How It Works
The developers of this face-swapping app used a Generative Adversarial Network (GAN) as the core technology behind the app. After installing and signing up for the app, the user needs to upload portrait images of their face. The user can then choose from a number of videos featuring popular Chinese celebrities as well as Hollywood celebrities such as Leonardo DiCaprio and Marilyn Monroe, among others. However, as soon as the app went viral, people started talking about the vulnerabilities they may face due to the effects of Deepfake.
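ZAO's developers have not published their model architecture, so the snippet below is only a toy illustration of the adversarial training loop that defines a GAN, not the app's actual face-swapping network. It uses a 1-D example with hand-derived gradients: the generator learns to shift and scale random noise until its samples imitate a target distribution, while a logistic discriminator learns to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data the generator must imitate: samples from N(4, 1).
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b, initialised far from the target distribution.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_samples(n)

    # --- Discriminator update: push d(real) -> 1 and d(fake) -> 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c.
    gw = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    gc = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update: push d(fake) -> 1 (fool the discriminator) ---
    d_fake = sigmoid(w * fake + c)
    upstream = (d_fake - 1.0) * w  # chain rule through the discriminator
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

gen_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(f"generated mean after training: {gen_mean:.2f} (target 4.0)")
```

In a production face-swapping system, the same adversarial game plays out between a deep convolutional generator that synthesises face images and a discriminator that judges their realism, which is what makes GAN-generated swaps so convincing.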
With more and more data being collected, privacy has become a concern around the globe, and the release of this app has raised fears of its potential misuse. A few days ago, tech giant Apple came under fire for letting contractors listen to the commands users give to its voice assistant Siri as part of its Siri quality evaluation process, also known as the grading programme. This included the audio of a user's request and a computer-generated transcription of it, both of which Apple used in a machine learning process that "trains" Siri to improve. The tech giant later apologised for listening to Siri users' audio without consent and announced that the Siri grading programme has been halted.