
A Toolkit for the Tricky World of AI Policing

The policing system in India has been one of the leading adopters of AI. But the deployment of these technologies without a legal regime is unsettling for civil society groups


In the wake of growing biases and misuse of AI, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and INTERPOL have developed a toolkit for responsible AI innovation in law enforcement, to support and guide enforcement authorities around the world. The toolkit is intended to help them deploy and use AI ethically and responsibly. 

“Even though the use of AI by law enforcement is perhaps one of the most sensitive domain applications of the technology, there is no guidance available for law enforcement agencies to help them get it ‘right’. In our toolkit, we have tailored practical and operationally oriented guidance suited to their needs and requirements,” said Odhran McCarthy, project officer at UNICRI’s Centre for AI and Robotics.

The toolkit will consist of seven resources, including guidance documents such as an introductory guide, principles for responsible use, and an organisational roadmap. It will also include interactive tools like an organisational readiness assessment and an AI risk assessment tool. Additionally, there will be target audience-specific briefs called AI for Law Enforcement Primers.

AI not only helps draw statistical correlations across vast amounts of data to sort and manage information, but also aids policing through technologies like the much-debated facial recognition and predictive policing systems.

McCarthy told AIM that they have identified several countries around the world that would support them in testing the toolkit. “Testing is a key part of our work to ensure that the toolkit is useful and practical from a law-enforcement standpoint.” The first version of the toolkit will be released in June, and the final version will be out around October this year.

What about India? 

The policing system in India has been one of the leading adopters of AI across the globe. However, these technologies are being deployed without a legal regime, unsettling civil society groups like the Internet Freedom Foundation (IFF) that advocate for internet liberties.

Recently, the Bangalore City Police installed 4,100 of a planned 7,000 CCTVs equipped with facial recognition systems, with the help of Honeywell Automation India, as part of the Bengaluru Safe City project. Under the project, 30 ‘safety islands’, eight drones, 400 body-worn cameras, and a mobile command centre are also at the disposal of the state law enforcement.

AIM reached out to Honeywell India Limited regarding the ethical concerns and reliability issues around this, but had not received a response at the time of publication.

Meanwhile, according to a study conducted by TechSci, India’s facial recognition market is projected to grow six-fold to reach USD 4.3 billion by 2024, putting it in close competition with China’s state surveillance system. Amnesty International has labelled Telangana as “the most surveilled state in the world” with 600,000 cameras, mostly concentrated in Hyderabad. 

New Delhi, Hyderabad, Chennai, and Indore are among the cities in India with the highest number of surveillance cameras, with New Delhi topping the list in terms of cameras per square mile. The Kolkata Police have also announced plans to install 2,500 CCTV cameras that will utilise AI to detect bikers without helmets and illegal parking.

On the other hand, the Bihar Police plan to utilise predictive policing and AI technology to combat the illicit liquor trade and other crimes. However, there is limited evidence that surveillance actually reduces crime. According to a report from the California Research Bureau, despite the increased prevalence of CCTV cameras, there is little conclusive evidence that they have led to a decrease in crime. Similarly, a study conducted by Arizona State University found that the installation of surveillance cameras had no significant effect on criminal activity in a city.

Policing the AI Future Responsibly 

Maknoon Wani, a research associate for emerging tech and policy with the Council for Strategic and Defense Research, explained that these technologies are not perfect and that their deployment is being rushed: “There are several issues with deploying facial recognition systems and technologies in public places. Basically, these are still developing technologies and are not perfect yet. But the law enforcement agencies see these as sophisticated technologies that they can deploy to get better results and achieve efficiency. In reality, that doesn’t actually happen, because these technologies are inefficient. These facial recognition systems can give false positives and false negatives, and that’s a major issue,” he said.

He further said that the absence of laws governing these technologies could result in human rights violations and false incriminations. “All of these technologies are being developed and deployed without a legal regime which can offer safeguards. For example, if a person is arrested in a public place based on the fact that the facial recognition technology identified them as a criminal or having a criminal record, then they do not have a way to seek redressal.” 

However, data from a report by the US government’s National Institute of Standards and Technology suggested that the top facial recognition algorithms in the world are highly accurate, with marginal differences in their false-positive and false-negative rates across demographic groups. The top 150 algorithms tested were over 99% accurate across black male, white male, black female and white female demographics. 

The accuracy of and bias in these systems thus remain points of contention. Given India’s enthusiasm for AI in policing and law enforcement, the toolkit for responsible AI innovation in law enforcement should be considered and adopted to address these issues, minimise bias, and prevent the trampling of human rights and individual privacy.


Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.