
Indian-Origin Scientists Develop New AI System To Stop Deepfake Videos

Dr Amit Roy-Chowdhury, professor of electrical and computer engineering, University of California, Riverside. (Image source: UCR)

With advanced image editing tools, one can now easily alter the semantic meaning of an image using manipulation techniques such as copy-move, object splicing and object removal, which can mislead viewers. One of the most notorious examples of this kind of tampering is the deepfake.

At a time when these videos are threatening the privacy of users, a team led by an Indian-origin scientist has developed an artificial intelligence-driven deep neural network that can identify manipulated images at the pixel level with high precision. Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, has developed a high-confidence manipulation localisation architecture that uses resampling features, LSTM cells and an encoder-decoder network to segment manipulated regions from non-manipulated ones.
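To make "pixel-level localisation" concrete, here is a minimal, illustrative sketch of the *output stage* of such a network — turning a per-pixel manipulation-probability map into a binary mask plus an image-level verdict. This is not the UCR team's model; the function name, the toy probability map and the 0.5 threshold are all assumptions made for illustration.

```python
# Illustrative sketch only, not the published architecture: given a
# per-pixel probability map (as a segmentation network would output),
# threshold it into a binary mask and report what fraction of the
# image is flagged as manipulated. The 0.5 threshold is an assumption.

def localise_manipulation(prob_map, threshold=0.5):
    """prob_map: 2D list of per-pixel manipulation probabilities in [0, 1].
    Returns (binary mask, fraction of pixels flagged as manipulated)."""
    mask = [[1 if p >= threshold else 0 for p in row] for row in prob_map]
    total = sum(len(row) for row in prob_map)
    flagged = sum(sum(row) for row in mask)
    return mask, flagged / total

# Toy 3x3 probability map: a spliced patch in the top-left corner.
probs = [
    [0.9, 0.8, 0.1],
    [0.7, 0.6, 0.2],
    [0.1, 0.2, 0.1],
]
mask, fraction = localise_manipulation(probs)
# mask marks the top-left 2x2 patch; fraction is 4/9 of the pixels.
```

In the real system, the probability map would come from the encoder-decoder network operating on resampling features; the thresholding step shown here is only the final, trivial part of segmenting manipulated regions out.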



Speaking about the work, Roy-Chowdhury said, “We trained the system to distinguish between manipulated and nonmanipulated images, and now if you give it a new image it is able to provide a probability that that image is manipulated or not, and to localise the region of the image where the manipulation occurred.” He added that while the team is currently working on still images, the approach could also help detect deepfake videos.

“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” Roy-Chowdhury said. “The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not,” he added.

The researchers found that, in this setting, even a single manipulated frame would raise a red flag. But Roy-Chowdhury thinks automated tools are still a long way from detecting deepfake videos in the wild.
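The "single manipulated frame raises a red flag" idea can be sketched as follows: score each frame independently with a still-image detector, and flag the whole video if any frame's score crosses a threshold. The function name, the scores and the threshold below are made-up illustrations, not values from the research.

```python
# Hedged sketch of frame-level video flagging: one suspicious frame,
# as scored by a still-image manipulation detector, is enough to
# flag the entire video. Scores and threshold are illustrative.

def video_is_flagged(frame_scores, threshold=0.5):
    """frame_scores: per-frame manipulation probabilities produced by
    a still-image detector, in playback order."""
    return any(score >= threshold for score in frame_scores)

scores = [0.05, 0.08, 0.91, 0.07]  # one tampered frame mid-video
flagged = video_is_flagged(scores)
```

As the quote above notes, the harder problem is producing reliable per-frame scores in the first place; the aggregation step itself is simple.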

“It’s a challenging problem… This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defence mechanisms, but then the attacker also finds better mechanisms,” he said.

Prajakta Hebbar
