
Pre-Pandemic Facial Recognition Algorithms Falter In The Presence Of Masks

“Even the best of the 89 commercial facial recognition algorithms tested saw error rates of about 5% on masked faces, while many others failed between 20% and 50% of the time.”

The ongoing pandemic has established many uncomfortable norms in every corner of the world. Wearing masks is one such norm, embraced by many only reluctantly. The question now is: what happens to facial recognition systems that were trained on unmasked faces in the pre-pandemic world?

A preliminary study by the National Institute of Standards and Technology (NIST) of 89 of the best commercial facial recognition algorithms found that they made errors when matching photos of digitally masked faces with photos of the same person without a mask. The study is being run under NIST’s ongoing Face Recognition Vendor Test (FRVT).

What Does The Report Say

Source: NIST

This report by NIST documented the accuracy of algorithms when confronted with masked faces. The evaluation was carried out on algorithms submitted to NIST before the COVID-19 pandemic, which were therefore developed with no expectation of an experiment like the one NIST conducted. The NIST team explored how well each algorithm performed “one-to-one” matching, where a photo is compared with a different photo of the same person. This function is commonly used for verification, such as unlocking a smartphone or checking a passport.
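NIST does not disclose how the submitted algorithms work internally, but one-to-one verification generally reduces to comparing a similarity score between two face representations against a decision threshold. The sketch below illustrates that idea; the embed() function and the cosine-similarity threshold are assumptions for illustration only, not part of any vendor’s algorithm or the FRVT benchmark.

```python
import numpy as np

THRESHOLD = 0.6  # assumed decision threshold; real systems tune this to a target false-match rate

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical face-embedding model: maps an aligned face image to a
    fixed-length feature vector. Stands in for a vendor's proprietary model."""
    raise NotImplementedError

def verify(probe_image: np.ndarray, reference_image: np.ndarray) -> bool:
    """One-to-one verification: are these two photos the same person?"""
    a, b = embed(probe_image), embed(reference_image)
    cosine_similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine_similarity >= THRESHOLD  # below threshold => the pair is rejected as a non-match
```

A masked probe photo tends to produce a lower similarity score than an unmasked one, so more genuine pairs fall below the threshold and are wrongly rejected, which is exactly the kind of error the NIST study measures.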

The NIST Information Technology Laboratory (ITL) quantified the accuracy of pre-COVID face recognition algorithms on faces occluded by masks applied digitally to a large set of photos that have been used in an FRVT verification benchmark since 2018. To this end, the team at NIST used two large datasets:

  • unmasked application photographs from a global population of applicants for immigration benefits and 
  • digitally-masked border crossing photographs of travellers entering the United States.

These photographs were collected in US governmental applications that are currently in operation. The team tested the algorithms on a set of about 6 million photos used in previous FRVT studies. 

“Black masks also degraded algorithm performance in comparison to surgical blue ones.”

Source: NIST

The research team digitally applied mask shapes, as depicted above, to the original photos and tested the algorithms’ performance. To simulate real-world settings, the researchers created nine mask variants differing in shape, colour and nose coverage. The digital masks were black or a light blue approximately the same colour as a blue surgical mask. The shapes included round masks that cover the nose and mouth, and a larger type as wide as the wearer’s face. These wider masks had high, medium and low variants that covered the nose to different degrees. The team then compared the results with the algorithms’ performance on unmasked faces.
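The article does not say which tools NIST used to draw the masks, so the following is only an illustrative sketch of the general approach: fill a mask-shaped polygon over the lower face, guided by facial landmarks. The jaw_points and nose_point inputs are hypothetical and assumed to come from a separate landmark detector.

```python
import cv2
import numpy as np

# Approximate surgical blue in BGR; black masks would be (0, 0, 0).
SURGICAL_BLUE = (220, 180, 120)

def apply_digital_mask(image: np.ndarray,
                       jaw_points: np.ndarray,
                       nose_point: tuple,
                       colour: tuple = SURGICAL_BLUE) -> np.ndarray:
    """Draw a filled mask polygon over the lower face.

    jaw_points: Nx2 array of jawline landmarks, ordered from one ear to the other.
    nose_point: (x, y) point marking how far up the nose the mask reaches.
    Both are assumed outputs of a landmark detector, not provided here.
    """
    polygon = np.vstack([jaw_points, np.array([nose_point])]).astype(np.int32)
    masked = image.copy()
    cv2.fillPoly(masked, [polygon], colour)
    return masked
```

Under these assumptions, moving the nose point up or down would reproduce the low, medium and high coverage variants, and swapping the colour gives the black versus surgical-blue comparison described in the study.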

The results of the study can be summarised as follows:

  • Masked images raised the failure rate of the top algorithms to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time (a worked example of this failure rate follows the list).
  • The more of the nose a mask covers, the lower the algorithm’s accuracy. 
  • The shape and colour of the mask matter. 
  • Algorithm error rates were generally lower with round masks. 
  • Black masks also degraded algorithm performance in comparison to surgical blue ones, though because of time and resource constraints the team was not able to fully explore the effect of colour.
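The failure rates quoted above are, in effect, false non-match rates: the fraction of genuine (same-person) comparisons that an algorithm rejects at its operating threshold. A minimal sketch of that arithmetic, using made-up scores and an assumed threshold rather than NIST’s actual data:

```python
import numpy as np

def false_non_match_rate(genuine_scores: np.ndarray, threshold: float) -> float:
    """Fraction of same-person comparisons scoring below the decision threshold."""
    return float(np.mean(genuine_scores < threshold))

# Toy example: 5 of 100 genuine masked-face comparisons fall below the threshold.
scores = np.concatenate([np.full(95, 0.8), np.full(5, 0.4)])
print(false_non_match_rate(scores, threshold=0.6))  # 0.05, i.e. a 5% failure rate
```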

This report comes at a crucial time, especially in the US, where many government bodies have been restricting the deployment of facial recognition technology, and quite rightfully so. One of the main reasons behind these regulations is the unreliability of these machine learning systems, which have been accused of biases that favour certain communities over others. Masks now pose a huge challenge to these computer vision systems, which mostly operate in critical areas such as airports and on streets, where masked individuals could exploit the flawed systems.

Check the full report here.


Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
