“We’ve developed a new attack on AI-driven facial recognition systems, which can change your photo in such a way that an AI system will recognise you as a different person, in fact as anyone you want,” according to Adversa AI’s official website. Adversa managed to trick facial recognition search tool PimEyes into misidentifying Vice reporter Todd Feathers as Mark Zuckerberg.
Facial recognition for one-to-one identification has become an increasingly popular AI application. Companies like Uber and Amazon authenticate employees with selfies. But facial recognition technology is not foolproof.
Adversa AI's attack fools facial recognition algorithms by adding subtle alterations, or noise, to the original image. Called Adversarial Octopus, the technique is a black box whose inner workings even its creators don't fully understand.
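Adversa has not published how Adversarial Octopus works, but adversarial perturbations in general can be illustrated on a toy model. The sketch below applies an FGSM-style step, the best-known technique in this class, to a hypothetical linear classifier; the weights, image and epsilon are all made up for illustration, and real attacks target deep face-recognition networks rather than a linear model.

```python
import numpy as np

def fgsm_perturb(image, weights, true_label, epsilon=0.3):
    """Nudge every pixel a small step in the direction that increases
    the classifier's loss, pushing the image toward the wrong class."""
    grad = -true_label * weights          # loss gradient w.r.t. the input
    return np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=64)                   # hypothetical model weights
x = rng.uniform(size=64)                  # hypothetical face image, flattened
label = 1 if w @ x > 0 else -1            # the model's original decision

x_adv = fgsm_perturb(x, w, label)
print("original score:   ", w @ x)
print("adversarial score:", w @ x_adv)    # pushed toward the other class
```

Smaller epsilon values make the noise harder for humans to see but also less likely to flip the model's decision, which is the basic trade-off behind attacks like this.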
In a demo video, the company altered an image of CEO Alex Polyakov and ran it through PimEyes, which mistook Polyakov for Elon Musk.
“Your digital identity can be stolen too,” Adversa warns.
The Adversarial Octopus technique, or tools like it, could also be used by hackers to commit fraud by fooling identity verification systems.
Analysts at information services company Experian Plc anticipate a rise in fraudsters using AI to create “Frankenstein faces” for synthetic identity fraud, fusing real and fake information to forge a new identity.
Cybercriminals use synthetic IDs to pass as legitimate users.
US identity verification firm ID.me Inc has reported thousands of attempts to trick its facial recognition checks in order to claim unemployment benefits from state workforce agencies. The company, which verifies individuals on behalf of 26 US states using facial recognition software, found more than 80,000 attempts in a year to bypass the selfie step in government ID matchups: by wearing masks, using deepfakes, or holding up images or videos of other people.
Last March, a facial recognition service used by the Chinese government was hacked, and more than $76 million was stolen through fake tax invoices. The fraudsters used personal data and high-definition photos purchased on the black market, hijacked a mobile phone's camera feed, and fed it deepfake videos to pass the facial authentication step.
An investigation by Xinhua Daily Telegraph found that the cost of hacking facial authentication systems for illegal gain is very low. Image manipulation apps like Huo Zhaopian, Fangsong Huanlian and Ni Wo Dang Nian are available for download on app stores. Apps like Zao use AI to replace the faces in film or TV clips with images of anyone the user uploads. “This application places the tools of creating deep-fake videos in the smartphones and mobile devices of millions of users,” claims Zao.
According to John Spencer, CEO of biometric identity firm Veridium, you don't need sophisticated software to spoof a facial recognition system. One of the easiest methods is to print a photo of someone's face, cut out the eyes and wear the photo as a mask.
A 2012 Accenture study found two basic biometric fraud patterns that hackers exploit systematically: obfuscation and impersonation. Of the two, impersonation is more prevalent and easier to use to spoof biometric authentication.
Deepfake detection
The tools and techniques to detect deepfakes are playing catch-up, as deepfake generation is evolving at warp speed.
Alex Polyakov said it is important to adjust the underlying algorithms to improve the robustness of AI models against novel attacks. He also stressed the need to train models on adversarial examples to counter the menace of deepfakes. In 2020, Microsoft launched Microsoft Video Authenticator to detect manipulated media. Verification firm ID.me has detected fraudulent selfies by tracking the devices, IP addresses and phone numbers of fraudsters.
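Training on adversarial examples, as Polyakov recommends, amounts to augmenting each training batch with perturbed copies of its inputs so the model learns to classify them correctly anyway. A minimal sketch on a hypothetical linear model follows; the data, learning rate and perturbation size are illustrative, not any vendor's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, linearly separable toy data standing in for face embeddings
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = np.sign(X @ w_true)                  # labels in {-1, +1}

def fgsm(x, w, y_col, eps=0.1):
    # Move each input a small step in the loss-increasing direction
    return x + eps * np.sign(-y_col * w)

w = np.zeros(8)
lr = 0.1
for _ in range(50):
    X_adv = fgsm(X, w, y[:, None])       # adversarial copies of the batch
    for Xb in (X, X_adv):                # train on clean AND adversarial data
        mask = y * (Xb @ w) < 1          # low-margin or misclassified points
        w += lr * (y[mask, None] * Xb[mask]).sum(axis=0) / len(X)

acc = np.mean(np.sign(X @ w) == y)
print("accuracy on clean data:", acc)
```

The point of the inner loop is that the model never sees only pristine inputs: every update also penalises mistakes on perturbed copies, which is what hardens it against the same class of attack at inference time.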
Siwei Lyu, a computer science professor at the University at Buffalo, has done extensive research on deepfakes and has published two papers describing ways to detect them. Lyu said that when a deepfake algorithm generates facial expressions, the new images don't always match the person's head, the lighting conditions, or the distance to the camera. Such images have to be geometrically transformed, and the process leaves digital footprints that allow researchers to detect the fake videos.
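The geometric-inconsistency idea can be sketched in miniature: fit the best affine transform from a reference landmark template to the landmarks observed in a face, and flag the face when the fit residual is large. This is a simplified illustration of the principle, not Lyu's published method; the landmark template, transform and noise level below are all hypothetical.

```python
import numpy as np

def affine_residual(template, observed):
    """Least-squares affine fit from template to observed landmarks;
    returns the mean per-landmark residual after the best fit."""
    n = len(template)
    A = np.hstack([template, np.ones((n, 1))])      # rows are [x, y, 1]
    params, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return np.linalg.norm(A @ params - observed, axis=1).mean()

# Hypothetical five-point landmark template
template = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)

# A genuine face: the template under one clean rotation plus shift
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
genuine = template @ R.T + [2.0, 1.0]

# A spliced face: the same pose, but landmarks perturbed inconsistently,
# as when a synthesized face is warped onto a different head
rng = np.random.default_rng(0)
spliced = genuine + rng.normal(scale=0.15, size=genuine.shape)

print("genuine residual:", affine_residual(template, genuine))  # near zero
print("spliced residual:", affine_residual(template, spliced))  # larger
```

A genuine face maps onto the template with essentially zero residual, because a single transform explains all the landmarks at once; the spliced face cannot be explained by any one transform, and that leftover error is the kind of digital footprint detectors look for.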