Deepfakes have been a controversial topic from the very beginning. In one of our previous articles, we discussed how deepfakes are being created around the globe. The technology has already begun to change many industries, including television and film.
Recently, a group of researchers from the Max Planck Institute for Informatics, Stanford University, Princeton University, and Adobe Research created an algorithm that can make seamless edits to talking-head videos by changing the speech content.
How The Model Works
The model focuses on the face and upper body of a speaker and performs transcript-based editing of talking-head video. When the transcript is edited, the algorithm selects segments from different parts of the video with similar motion, which are then combined to create the newly edited video.
The model works in the following stages:
- Phoneme Alignment: First, the researchers align the transcript of the speech to the talking-head video at the level of phonemes (phonemes are perceptually distinct units of sound that distinguish one word from another in a given language). This alignment makes it possible to search for snippets of the video that can later be combined to create new content.
- 3D Face Tracking and Reconstruction: A 3D parametric face model is registered to each frame of the input talking-head video; this later makes it possible to selectively blend different aspects of the face.
- Viseme Search: Given an edit operation, the model performs a viseme search (visemes are groups of aurally distinct phonemes that appear visually similar to one another) to find the best match between subsequences of phonemes in the video.
- Parameter Retiming & Blending: The parametric face model is used to mix different properties of the face, such as pose and expression, from different input frames, blending them together in parameter space.
- Neural Face Rendering: A neural face rendering approach is applied to synthesize photo-realistic video frames that match the modified parameter sequence.
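To make the viseme search step more concrete, here is a minimal, hypothetical sketch. The phoneme-to-viseme mapping and function names below are our own illustration, not the paper's implementation: phonemes are grouped into viseme classes (visually similar mouth shapes), and a new phoneme sequence is slid over the video's phoneme track to find the best-matching subsequence.

```python
# Toy phoneme-to-viseme mapping (illustrative only; real systems use
# linguistically derived viseme sets).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "n": "alveolar", "s": "alveolar",
    "aa": "open", "ae": "open",
    "iy": "spread", "ih": "spread",
    "uw": "rounded", "ow": "rounded",
}

def viseme_match_score(query, candidate):
    """Count positions where the viseme classes of two equal-length
    phoneme sequences agree."""
    return sum(
        PHONEME_TO_VISEME.get(q) == PHONEME_TO_VISEME.get(c)
        for q, c in zip(query, candidate)
    )

def viseme_search(query_phonemes, video_phonemes):
    """Slide the query over the video's phoneme track and return the
    start index and score of the best-matching subsequence."""
    n = len(query_phonemes)
    best_start, best_score = 0, -1
    for start in range(len(video_phonemes) - n + 1):
        score = viseme_match_score(
            query_phonemes, video_phonemes[start:start + n]
        )
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_score
```

For example, searching for the phonemes of a new word like `["b", "ow"]` in a video track `["s", "aa", "t", "m", "iy", "p", "ow"]` would select the snippet starting at the `"p"`, since `p` and `b` share the bilabial viseme class and look alike on screen.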
Applications Of The Model
The researchers primarily intend the model to serve as a better editing tool for video editing and translation in the production of movies, TV shows, commercials, YouTube video logs, and online lectures.
Currently, the model supports three kinds of edit operations:
- Add new words: One or more consecutive words can be inserted at a particular point in the video.
- Rearrange existing words: One or more consecutive words that exist in the video can be moved.
- Delete existing words: One or more consecutive words can be removed from the video.
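At the transcript level, the three operations above amount to simple word-list manipulations. The sketch below is a hypothetical illustration (the function names and signatures are our own, not the paper's API):

```python
# Illustrative transcript-level edit operations. A transcript is
# represented as a list of words.

def add_words(transcript, position, new_words):
    """Insert one or more consecutive words at the given position."""
    return transcript[:position] + new_words + transcript[position:]

def rearrange_words(transcript, start, end, new_position):
    """Move the words in transcript[start:end] to new_position
    (an index into the transcript with the span removed)."""
    span = transcript[start:end]
    remainder = transcript[:start] + transcript[end:]
    return remainder[:new_position] + span + remainder[new_position:]

def delete_words(transcript, start, end):
    """Remove the words in transcript[start:end]."""
    return transcript[:start] + transcript[end:]
```

In the full system, each such transcript edit would then drive the viseme search, parameter blending, and neural rendering stages to produce the matching video frames.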
The Other Perspective
While technological advances bring immense benefits, some people will inevitably exploit them for harm. The researchers raised important and valid concerns about the potential for misuse of this text-based editing approach, such as falsifying personal statements or defaming prominent individuals.
One of the researchers from Stanford University noted that every advanced technology inevitably attracts bad actors. For this reason, the researchers propose safeguards such as developing forensics, biometrics, and other verification methods that would help viewers identify manipulated videos.