Why Do Facial Recognition Systems Still Fail?

In June, IBM CEO Arvind Krishna wrote in a letter to Congress that IBM firmly opposes and will not condone the use of facial recognition technology for mass surveillance, racial profiling, or violations of basic human rights and freedoms. IBM’s withdrawal was followed by Amazon’s announcement of a one-year moratorium on police use of its Rekognition technology. Later, Microsoft announced that it would bar police from using its facial recognition tech until federal regulation is in place.

Facial recognition technology (FRT) is one of the most direct applications of AI, and one the world is relatively familiar with. The public interacts with it in everyday life: at malls, local stations, schools and even through their own handheld devices. This pervasiveness has also created an ambiguous situation that exposes the ethical questions around the technology; its abuse is nearly as widespread as the technology itself. In the last year alone, several states have moved to ban FRT.

One of the main reasons behind the widespread scepticism is the inefficiency of the algorithms; bias is still a major challenge for the ML community. For example, a study by the National Institute of Standards and Technology (NIST) of 89 of the best commercial facial recognition algorithms found significantly higher error rates when matching photos with digitally applied face masks against photos of the same person without a mask. In the face of this socio-cultural and scientific hurdle, it is important to understand how FRT performs when it is deployed in a new scenario.


To address these challenges, Stanford released a report that includes assessments and recommendations. The report resulted from a Stanford AI workshop conducted back in May, attended by leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The stated aim of the work is to understand the operational and human impacts of this emerging technology and whether or not FRT is ready for societal deployment.




Challenges Of Deploying FRT


“The accuracy of FRT in one domain does not translate to its uses in other domains.”

A central tenet of current machine learning is that accuracy guarantees are largely domain-specific: good performance on one type of image data does not necessarily translate to another. The central role of accuracy in these debates, wrote the authors, likely explains why so much proposed legislation has called for rigorous assessments of performance. The authors categorised the challenges of deploying effective FRT systems as domain shift and institutional shift.

When it comes to facial recognition technology, domain shift arises from the difference between the types of images vendors and third-party auditors use to train models and test performance, and the types of images FRT customers use to perform their desired tasks. While the datasets used by vendors are not disclosed, there is reason to believe that vendor and user images differ substantially: they may differ in face properties such as skin colour, hair colour, hairstyle, glasses, facial hair and age, as well as in image properties such as lighting, blurriness, cropping, quality and how much of the face is covered. In other words, vendor and user images likely come from different distributions.
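To make the idea concrete, here is a minimal sketch, assuming a hypothetical vendor face-verification pipeline, of how a user might measure such a shift: score the same model on a vendor-style benchmark and on image pairs drawn from the deployment environment, then compare the two accuracies. The `embed` function, the threshold and the datasets below are placeholders, not any vendor’s actual API or the report’s methodology.

```python
# Hypothetical sketch: measuring domain shift by scoring the same face-verification
# pipeline on two image pools (vendor-style benchmark vs. deployment-style images).
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a vendor's face-embedding model (hypothetical stand-in)."""
    rng = np.random.default_rng(abs(int(image.sum())) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def is_match(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the cosine similarity of the embeddings exceeds the threshold."""
    return float(embed(img_a) @ embed(img_b)) > threshold

def accuracy(pairs, labels) -> float:
    """Fraction of image pairs whose predicted match/non-match agrees with the label."""
    predictions = [is_match(a, b) for a, b in pairs]
    return float(np.mean([p == y for p, y in zip(predictions, labels)]))

# vendor_pairs/labels: well-lit, frontal, uncovered faces (vendor-style benchmark).
# field_pairs/labels: e.g. blurry body-camera frames or masked faces from the deployment site.
# acc_vendor = accuracy(vendor_pairs, vendor_labels)
# acc_field  = accuracy(field_pairs, field_labels)
# The gap between the two numbers is a direct, if crude, measure of the domain shift.
```

A large gap between the benchmark score and the deployment-domain score is exactly the kind of evidence the report argues buyers should look for before relying on an FRT system.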

Institutional shift can also lead to performance differences. It can arise from users misunderstanding the technology, overestimating the effectiveness of the system, and more. Underlining how different developing and deploying ML models are, the authors explained that FRT vendors train their models on well-lit, clear images, with the software operated correctly by machine learning professionals; during deployment, however, clients such as law enforcement may run FRT on live video from police body cameras, with the output later evaluated by officers with no technical training.

What Do Experts Recommend?

“Sometimes users may over-rely on machine output–automation bias.”

So, how can these glaring loopholes be addressed? Although the Stanford report stays away from offering specific technical solutions, given the uncertainty, it does offer a few recommendations that can serve as a deployment guide.

  • When using public datasets, vendors and third parties should maintain an up-to-date list of the datasets used for each software release.
  • For private datasets, parties should disclose the full training/testing data, along with documentation.
  • Vendors should enable users to label their own data and test the vendor’s facial recognition models using that data (a minimal evaluation sketch follows this list).
  • Vendors should provide detailed release notes that include changes to training data, training algorithms, parameters, fairness constraints and any other aspects that might influence performance.
  • Users should not rely solely on NIST benchmarks that may not reflect performance in the domain for which an FRT system is procured.
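As a rough, hedged illustration of the third and fifth recommendations, the sketch below shows how a prospective user might evaluate a vendor’s model on their own labelled image pairs and break error rates down by locally relevant attributes (camera type, lighting conditions, demographic group). The `vendor_verify` callable, the attribute tags and the data layout are assumptions made for illustration; the report does not prescribe any particular evaluation code.

```python
# Hypothetical sketch: a user-side evaluation of a vendor's match function on the
# user's own labelled pairs, reporting error rates per locally relevant group.
from collections import defaultdict

def evaluate_by_group(pairs, labels, groups, vendor_verify):
    """Return false-match and false-non-match rates per group.

    pairs:  (image_a, image_b) tuples collected from the user's own environment
    labels: True if a pair shows the same person, False otherwise
    groups: an attribute tag per pair, e.g. "indoor", "bodycam", "masked"
    vendor_verify: callable(image_a, image_b) -> bool, the vendor's match decision
    """
    counts = defaultdict(lambda: {"fm": 0, "fnm": 0, "pos": 0, "neg": 0})
    for (a, b), same_person, group in zip(pairs, labels, groups):
        predicted_match = vendor_verify(a, b)
        c = counts[group]
        if same_person:
            c["pos"] += 1
            c["fnm"] += int(not predicted_match)  # missed a genuine match
        else:
            c["neg"] += 1
            c["fm"] += int(predicted_match)       # matched two different people
    return {
        group: {
            "false_match_rate": c["fm"] / c["neg"] if c["neg"] else float("nan"),
            "false_non_match_rate": c["fnm"] / c["pos"] if c["pos"] else float("nan"),
        }
        for group, c in counts.items()
    }
```

Per-group error rates measured on the buyer’s own data, rather than a vendor’s headline benchmark number, are what these recommendations aim to put in front of decision-makers.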

The authors also acknowledged the role of media organisations, investigative journalists and others in the discussion around the use of AI in public-facing contexts, and why they will remain significant in the future. “We believe that adopting the recommendations above — by regulation, contract, or norm — will do much to improve our understanding, reflection, and consideration of one of the most important emerging applications of AI today,” concluded the report.

Read the full report here.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
