
Can This AI Filter Protect Human Identities From Facial Recognition System?


Facial recognition technology (FRT) has long been a matter of grave concern, so much so that major tech giants like Microsoft, Amazon, IBM and Google announced earlier this year that they would stop selling their FRT to police authorities. Additionally, Clearview AI's facial recognition app, which scraped billions of images of people without consent, made matters even worse for the public.

In fact, the very practice of companies using people's social media images without permission to train FRT algorithms threatens the general public's identity and personal privacy. To protect human identities from companies that could misuse them, researchers from the computer science department of the University of Chicago have proposed an AI system designed to fool these facial recognition systems.

Termed Fawkes (a nod to the Guy Fawkes mask), this AI system is designed to help users safeguard their images and selfies with a filter against unwanted facial recognition models. The filter, which the researchers call a "cloak," adds pixel-level changes to photos that are invisible to the human eye but can deceive these FRTs.

Ben Y. Zhao, Neubauer Professor of Computer Science at the University of Chicago and one of the researchers, told the media that the aim of creating this AI system is to "make Clearview go away."


How Does The AI Filter Work?

In reality, many techniques have been proposed to protect identities from facial recognition systems: some take the approach of creating adversarial examples, others propose adversarial patches. However, all of these approaches have fundamental limitations. Either they require conspicuous accessories like hats or glasses to deceive the system, or they are not robust enough to survive in practice. Other approaches, such as GAN-based face editing, are considered highly impractical for non-technical users.

Thus, the researchers settled on cloaking, implemented in the Fawkes system, to protect human identities from third-party FRT providers. To achieve this, they disrupt the training of deep neural network (DNN) models by adding small perturbations to the training data, the very images these models use to learn the distinctive facial features of an individual. The perturbations cause the trained model to misclassify those images, leading the facial recognition system to associate the wrong face with the individual.

Figure: Left, user 'U' applies the cloaking algorithm to generate cloaked versions of U's photos. Right, a tracker crawls the cloaked images from online sources and uses them to track U.

The Fawkes system follows a three-step method to modify the users' photos (a code sketch follows the list).

  • Firstly, it examines a public image dataset and picks a target image that is most dissimilar to the user's real photo, using a distance function in feature space.
  • Secondly, for each user image, the system computes a cloak via the defined optimisation, using the structural dissimilarity index (DSSIM) to keep the cloaked version visually similar to the real photo.
  • Thirdly, the protection depends on discipline at the user's end: users must ensure that no uncloaked images are shared online, which would otherwise give trackers leverage.
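To make the first two steps concrete, below is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: `feature_extractor` stands in for any pretrained face-embedding network, the budget, step count and learning rate are illustrative values, and the paper's DSSIM constraint is simplified here to a pixel-wise clamp.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pick_target(x, public_images, feature_extractor):
    # Step 1: pick the public image whose features lie farthest
    # from the user's photo in the extractor's feature space.
    user_feat = feature_extractor(x.unsqueeze(0))
    dists = torch.stack([
        F.pairwise_distance(user_feat, feature_extractor(img.unsqueeze(0))).squeeze()
        for img in public_images
    ])
    return public_images[int(dists.argmax())]

def compute_cloak(x, target, feature_extractor, budget=0.03, steps=200, lr=0.01):
    # Step 2: optimise a small perturbation (the "cloak") that pulls the
    # photo's features toward the dissimilar target's features while
    # keeping the pixel change imperceptible.
    target_feat = feature_extractor(target.unsqueeze(0)).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = feature_extractor((x + delta).unsqueeze(0))
        F.mse_loss(feat, target_feat).backward()  # move toward target features
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # crude stand-in for the DSSIM budget
    return (x + delta).detach()  # the cloaked photo, safe to share
```

The third step is procedural rather than algorithmic: only the returned cloaked photo is ever posted online.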

Effective cloaking pushes the FRT to associate the user's photos with incorrect features, far from the user's real ones, and the larger this deviation, the more the system errs in identifying the individual. The researchers therefore set out to maximise the feature deviation, computed as the distance between the two feature vectors, while bounding the perceptual disruption caused by the cloak.
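In rough notation (a sketch based on the paper, where Φ is the tracker's feature extractor, x the original photo, x_T the dissimilar target, δ the cloak, and ρ the perceptual budget measured with DSSIM), the cloak approximately solves:

```latex
\min_{\delta}\; \mathrm{Dist}\bigl(\Phi(x_T),\, \Phi(x \oplus \delta)\bigr)
\quad \text{subject to} \quad \lvert \delta \rvert < \rho
```

Minimising the distance to a deliberately dissimilar target is a practical proxy for maximising the distance from the user's own features.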

Along with this, to further harden the system, the researchers added image-specific cloaking, which prevents attackers from defeating the protection with anomaly detection. With image-specific cloaking, every user image carries a different cloak pattern, making the cloaks far more difficult to detect or remove from the pictures.
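Continuing the hypothetical sketch above, image-specific cloaking amounts to running the optimisation independently per photo, so no shared perturbation pattern exists for an attacker to estimate and subtract:

```python
# One independent cloak per photo: no common pattern to learn and strip.
cloaked_album = [
    compute_cloak(photo,
                  pick_target(photo, public_images, feature_extractor),
                  feature_extractor)
    for photo in user_photos
]
```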


Fawkes System Evaluation & Effectiveness

To evaluate the effectiveness of the Fawkes system, the researchers presented results for cloaking under three scenarios: users producing cloaks with the same feature extractor as the tracker; users and trackers using different feature extractors; and trackers training models from scratch, without a feature extractor.

The results showed that cloaking is highly effective when users cloak with the same feature extractor the tracker uses, and that its robustness across different extractors improves when the users' feature extractor is hardened with adversarial training.

Fawkes is also most effective when combined with other privacy-enhancing steps that remove all of the user's uncloaked images from the internet. For this, users can untag themselves on social media platforms or invoke privacy laws such as the "Right to be Forgotten" to have online content related to them removed.

Furthermore, the researchers tested the system in the real world by taking 82 high-quality images of one of the project's authors and feeding them to well-known FRT services, Microsoft Azure and Amazon Rekognition. The results showed that the normal cloak achieved only partial protection against some FRT systems, while the robust cloak indeed proved effective.

Wrapping Up

Like other privacy-enhancing tools, Fawkes is meant to counter unauthorised trackers and stand against companies like Clearview AI. The software was made available to developers on the researchers' website last month and has been downloaded more than 50,000 times. The researchers are also working on a free app version for non-coders to broaden adoption.
Read the whole paper here.


Sejuti Das
