Everything Wrong With Uber’s Facial Recognition System

The UK-based App Drivers and Couriers Union (ADCU) and Workers Info Exchange (WIE) have called on Microsoft to suspend the sale of its Face API to Uber.

The union said there were at least seven cases in which failed facial recognition and identity checks led to the suspension of drivers and, in some cases, licence revocation by Transport for London (TfL). This month, Wired reported similar instances in which Uber Eats drivers alleged the tool had failed to recognise their faces.

An Uber spokesperson, however, maintained that such verification is necessary to prevent fraud, and said that decisions to remove drivers involve human review.

Face API

Uber launched the ‘Real-Time ID Check’ system in the UK in April 2020. The system prompts drivers for a real-time photo, which is then matched against the photos in the company database and on their licences. In its statement, the company said drivers can choose between photo-comparison software and human reviewers for selfie verification. In case of a mismatch, the company notifies the driver that his or her account will be temporarily suspended.

Uber first introduced the Real-Time ID Check system in 2017. It is built on Microsoft’s Face API, part of the Azure Cognitive Services suite. In particular, Uber utilises two main operations:

Face-Detect: Detects human faces in a given image and can identify specific attributes, such as whether the user is wearing glasses. According to Uber, this API has improved the match score used to verify selfies and helps discard images without a human face.

Face-Verify: Compares the face returned by Face-Detect with the picture in the database. Based on the similarity between the two, the API returns a ‘confidence score’ indicating whether the faces match. The score is also used to decide the appropriate verification action, such as asking the user to retake the selfie.
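Conceptually, the two calls chain together: Detect extracts a face ID from the fresh selfie, Verify compares it against the face on file, and the resulting confidence score drives the next step. The sketch below is illustrative only (the endpoint, subscription key, and decision thresholds are placeholder assumptions, not Uber's actual configuration), but it shows how the Face API's `detect` and `verify` operations fit together:

```python
import json
import urllib.request

# Placeholder endpoint and key: assumptions for illustration, not Uber's setup.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/face/v1.0"
KEY = "<your-subscription-key>"


def _post(path, body, params=""):
    """POST a JSON body to the Face API and return the parsed response."""
    req = urllib.request.Request(
        f"{ENDPOINT}/{path}{params}",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": KEY,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def detect_face(image_url):
    """Run Detect on a selfie; return the first detected faceId, or None."""
    faces = _post("detect", {"url": image_url}, "?returnFaceAttributes=glasses")
    return faces[0]["faceId"] if faces else None


def verify_faces(selfie_face_id, reference_face_id):
    """Run Verify on two face IDs; return the confidence score (0.0 to 1.0)."""
    result = _post("verify", {"faceId1": selfie_face_id,
                              "faceId2": reference_face_id})
    return result["confidence"]


def verification_action(confidence, match_threshold=0.75, retry_threshold=0.5):
    """Map a confidence score to a next step. Thresholds are invented here."""
    if confidence >= match_threshold:
        return "verified"
    if confidence >= retry_threshold:
        return "retake_selfie"
    return "flag_for_human_review"
```

The threshold values above are hypothetical; the article says only that the score is used to pick an action, such as a selfie retake, with human review before any removal decision.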

Amid rising complaints against the facial recognition system, Uber told the media: “The two situations raised do not reflect flawed technology — in fact, one of the situations was a confirmed violation of our anti-fraud policies, and the other was a human error.”

The statement further said, “While no tech or process is perfect and there is always room for improvement, we believe the technology, combined with the thorough process in place to ensure a minimum of two manual human reviews prior to any decision to remove a driver, is fair and important for the safety of our platform.” 

Notably, this is not the first time Uber has found itself in hot water over its facial recognition system. In 2019, an African American driver in the US claimed he was fired after the system did not recognise him in ‘pitch darkness’, prompting him to lighten his photos artificially.

In response to the union’s allegations, Microsoft said it is committed to improving Face API to ensure fairness and accuracy across demographic groups.

Bias In Facial Recognition Tech

A 2018 study by Joy Buolamwini and Timnit Gebru examined three commercial gender classification systems. The results showed darker-skinned females were the most misclassified group, with error rates as high as 34.7 percent; the error rate for lighter-skinned males, by contrast, was just 0.8 percent. Such bias largely emerges from machine learning algorithms trained on labelled data: studies have shown that labelled data can itself be biased, and training a model on it can result in algorithmic discrimination.

The substantial disparities in classifying genders and skin types pose a significant challenge, the study noted. If commercial companies are to build ‘genuinely fair, transparent and accountable facial analysis algorithms’, these factors must be accounted for, it prescribed.
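The study’s core methodology, auditing a classifier by computing its error rate separately for each demographic subgroup, can be sketched in a few lines. The counts below are invented to mirror the reported disparity (34.7 percent versus 0.8 percent) and are not the study’s actual benchmark data:

```python
from collections import defaultdict


def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) triples.
    Returns each group's misclassification rate."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}


# Invented audit data mirroring the reported disparity (not the real benchmark).
records = (
    [("darker-skinned female", "male", "female")] * 347
    + [("darker-skinned female", "female", "female")] * 653
    + [("lighter-skinned male", "female", "male")] * 8
    + [("lighter-skinned male", "male", "male")] * 992
)
rates = error_rates_by_group(records)
# rates["darker-skinned female"] is 0.347; rates["lighter-skinned male"] is 0.008
```

Reporting a single aggregate accuracy would hide exactly this gap, which is why the study argues for disaggregated evaluation across subgroups.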

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
