How Lucknow Police’s Decision To Deploy AI To Track Facial Expressions Of Women In Distress Could Backfire

Lucknow police are gearing up to install AI-based facial recognition cameras that will read the facial expressions of women in distress and alert the nearest police station.

The police claimed to have identified 200 hotspots with high movement of women, from where most harassment complaints originate. The five AI-based cameras “will become active as soon as the expressions of women in distress change”, police said.

The new project comes under the Uttar Pradesh (UP) government’s Mission Shakti, launched in October last year. As part of the initiative, the Lucknow Commissionerate Police have started awareness drives and seminars about ‘pink booths’ and women help desks set up by the UP government.

Here, we try to analyse how the AI-based system could go wrong and why, in general, the use of AI in policing is a bone of contention.

Faultlines

While AI is a game-changer in problem-solving, it can also make errors and pose real threats when deployed in the wrong places or for the wrong applications.

Experts have condemned the use of AI-based facial recognition technology because such systems collect biometric data without a person’s consent or awareness. In India, this is particularly risky owing to the country’s weak data protection laws, and it can enable mass surveillance, a direct threat to India’s democratic values.

The move can also lead to over-policing, as has happened in the past when mass surveillance technologies were deployed. Moreover, areas where marginalised communities or economically weaker sections live tend to become the petri dish for such technological experiments.

Further, facial recognition systems have repeatedly fallen short on prediction accuracy. This makes accountability a major concern: what happens when a person is wrongly accused?

Beyond accuracy, an algorithm that keys on the ‘distressed’ facial expressions of women can go wrong in two further ways. First, we don’t know how the AI defines distress. Second, even if the AI detects distress, it cannot infer the reason behind it; the distress may have nothing to do with a crime being committed.
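To see why both failure modes matter, consider how such systems are typically built. Below is a minimal, hypothetical sketch (the labels, thresholds and numbers are our assumptions for illustration, not details of the Lucknow deployment): an off-the-shelf expression classifier outputs probabilities over generic emotion labels, and ‘distress’ is simply whichever labels, at whichever confidence threshold, the designers choose to map to it.

```python
# Hypothetical sketch of an expression-based alert rule; none of the
# labels or thresholds below reflect the actual Lucknow system.
EMOTIONS = ["neutral", "happy", "sad", "fear", "anger", "surprise", "disgust"]

# Assumption: the designers decide which labels count as "distress" and
# at what confidence an alert fires. Both choices are arbitrary.
DISTRESS_LABELS = {"sad", "fear", "anger"}
ALERT_THRESHOLD = 0.6

def should_alert(probs: list[float]) -> bool:
    """Fire an alert if the probability mass on 'distress' labels
    crosses the threshold. The model cannot say WHY the person looks
    distressed: an argument, grief, or a scary street performance
    all produce the same signal."""
    distress_score = sum(
        p for label, p in zip(EMOTIONS, probs) if label in DISTRESS_LABELS
    )
    return distress_score >= ALERT_THRESHOLD

# A face showing mostly fear plus some surprise, say at a street act:
print(should_alert([0.05, 0.05, 0.10, 0.45, 0.10, 0.20, 0.05]))  # True
```

In a sketch like this, every design choice, which emotions count, how they are summed, where the threshold sits, changes who gets flagged, and nothing in the model connects the flag to an actual crime.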

Lastly, another roadblock in capturing facial expressions is the widespread use of face masks since the pandemic. Facial recognition already struggles to identify a person wearing a mask; reading facial expressions, which depend heavily on the occluded lower half of the face, will be harder still.

Bad Precedent

The use of AI or predictive systems in policing has proven controversial, and the consequences have at times been dire.

For instance, COMPAS, a recidivism-scoring algorithm used in US courts to predict the likelihood of a defendant committing another crime, was found to be biased against the Black community: Black defendants facing petty theft charges received higher risk scores than White defendants involved in armed robbery.
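Audits surface this kind of bias by comparing error rates across groups. Here is a minimal sketch with invented toy data (not the real case data), illustrating the standard check: the false positive rate, i.e. the share of people who did not reoffend but were still scored high-risk, computed per group.

```python
# Toy audit sketch with invented data -- not real recidivism records.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  True),  ("B", False, False), ("B", False, False),
    ("B", True,  False), ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))
# Output: A 0.67, B 0.33 -- group A bears twice the wrongful flags,
# the same shape of disparity reported in the US case.
```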

In England and Wales, the Home Office funded a project to develop machine learning algorithms to predict the areas most likely to see gun or knife violence. Flaws in the model severely compromised its accuracy, and the project was dropped after expert reviews found substantial ethical problems.

In 2020, the American Civil Liberties Union filed a complaint against the Detroit Police for using a flawed facial recognition algorithm that led to the arrest of the wrong man.

Another major problem with using AI in policing is the lack of transparency: the public rarely knows what goes into an algorithm. Even where the algorithms can be scrutinised, experts have pointed out that the mathematics underpinning them can change over time. And as the examples above show, even a seemingly foolproof algorithm can still produce a biased system.

Golden Rule

The NITI Aayog document on AI proposed an “AI + X” approach: instead of replacing every process X in its entirety with AI, one should consider whether AI can fill a specific gap in the process and thereby improve it.

If AI would introduce more harm than good into the process X that the police are trying to improve, it is best not to use such systems. There is no point in aimlessly deploying AI in every process simply because it can improve technical accuracy, especially at the cost of citizens’ civil rights.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com