Facial recognition technology, which is widely used by law enforcement agencies, has amassed its fair share of controversies internationally. Leading tech giants, from Amazon and Facebook to Microsoft, have come under the scanner because the facial recognition systems they sold to law enforcement agencies were allegedly used to track people.
These big tech corporations apply computer vision technology to identify faces in large datasets. Facial recognition systems work with live cameras that capture images of people's faces and seek to identify them, in more places and in real time. But this powerful technology has been gathering negative press for being used to track people without their prior permission or knowledge.
This year saw employees from Amazon rallying against the use of this tech by Immigration and Customs Enforcement (ICE) in the US. In a similar vein, Salesforce employees raised an alarm against the use of their products by immigration authorities. This has led to a rise in demands from tech employees to limit the ways government agencies use facial recognition technology.
Another pressing concern is bias in facial tech: as a Microsoft blog notes, the technology identifies people with lighter skin more accurately than people of colour. The blog further highlights that researchers are striving to minimise this bias. It is the relative immaturity of the technology that has raised concerns among policymakers, along with pressing ethical questions.
Lack Of Diverse Datasets Is Reason For Inaccuracy In Facial Tech
Reports indicated that Redmond giant Microsoft was flagged by the MIT Media Lab because its AI-based facial recognition tech performed more accurately on lighter-skinned people.
The report further indicated that, based on these findings, Microsoft's facial tech should not be used for surveillance purposes. In response, Microsoft revealed a significant improvement in its facial tech, reducing the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times.
The team worked with experts on bias and fairness across Microsoft to improve its system, known as the gender classifier, focusing largely on improving outcomes for all skin tones. The blog highlighted three major changes the Face API team made:
- Expanded and revised training and benchmark datasets
- Launched new data collection efforts to further improve the training data by focusing specifically on skin tone, gender and age
- Improved the classifier to produce higher precision results
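Audits like the MIT Media Lab study work by comparing a classifier's error rate across demographic subgroups rather than reporting a single overall accuracy. A minimal sketch of that kind of evaluation is below; the function name, the toy records and the group labels are illustrative assumptions, not Microsoft's actual methodology or data.

```python
# Hypothetical sketch of a per-subgroup bias audit for a gender classifier.
# Each record is (subgroup, true_label, predicted_label); the data below is
# invented purely to illustrate skewed accuracy across skin tones.
from collections import defaultdict

def error_rates_by_group(records):
    """Return {subgroup: fraction of misclassified examples}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("lighter", "F", "F"), ("lighter", "M", "M"),
    ("lighter", "F", "F"), ("lighter", "M", "M"),
    ("darker", "F", "M"), ("darker", "M", "M"),
    ("darker", "F", "F"), ("darker", "M", "F"),
]
rates = error_rates_by_group(records)
print(rates)  # {'lighter': 0.0, 'darker': 0.5}
```

A gap like the 0.0 vs 0.5 split above is exactly the kind of disparity the dataset expansion and retraining steps listed above aim to close.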
Subsequently, Microsoft's Brad Smith also called for stricter government regulation of facial recognition technology and pushed for firmer norms around its acceptable uses. He urged that if governments wish to use the technology, they should proactively manage that use as well. Smith called for government initiatives on facial recognition, informed by a bipartisan and expert commission.
Top Ethical Concerns Related To The Use of Facial Tech By Law Agencies
- The use of facial recognition technology by law enforcement agencies can be subjected to undue controls and restrictions, and its output can be leveraged as evidence of an individual's guilt or innocence in a crime
- Civilian oversight and accountability in the usage of facial recognition, especially as part of the governmental national security program
- If facial recognition technology becomes widespread in surveillance, what kind of legal measures will be provided to prevent misuse and to avoid its use for racial profiling and other violations of rights
- Now that facial recognition technology is deployed for surveillance purposes, should the technology used by public authorities be required to meet minimum performance levels for accuracy
- There are also larger concerns about privacy, security and civil liberties in the use of video surveillance and biometric technology
With governments across the globe beginning to deploy facial recognition software for surveillance purposes, there is a need for stricter governance to prevent abuse of the technology. For example, government agencies can hold biometric data without the consent of citizens. Hence, it is important to establish clear rules for the permissible use of facial recognition technology and the intelligence gathered from it. Law enforcement agencies should be subject to regulation and should adhere to the transparency it requires.