A Toolkit for the Tricky World of AI Policing

The policing system in India has been one of the leading adopters of AI. But the deployment of these technologies without a legal regime is unsettling for civil society groups

Amid growing concerns over bias and misuse of AI, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and INTERPOL have come up with a toolkit for responsible AI innovation in law enforcement, meant to support and guide enforcement authorities around the world in deploying and using AI ethically and responsibly.

“Even though the use of AI by law enforcement is perhaps one of the most sensitive domain applications of the technology, there is no guidance available for law enforcement agencies to help them get it ‘right’. In our toolkit, we have tailored practical and operationally oriented guidance suited to their needs and requirements,” said Odhran McCarthy, project officer at UNICRI’s Centre for AI and Robotics.

The toolkit will consist of seven resources, including guidance documents such as an introductory guide, principles for responsible use, and an organisational roadmap. It will also include interactive tools, such as an organisational readiness assessment and an AI risk assessment tool, along with audience-specific briefs called AI for Law Enforcement Primers.

AI not only helps establish statistical correlations across vast amounts of data to sort and manage information, but also aids policing through technologies such as the widely debated facial recognition and predictive policing systems.

McCarthy told AIM that they have identified several countries around the world that will support them in testing the toolkit. “Testing is a key part of our work to ensure that the toolkit is useful and practical from a law-enforcement standpoint.” The first version of the toolkit will be released in June, and the final version around October this year.

What about India? 

The policing system in India has been one of the leading adopters of AI globally. However, these technologies are being deployed without a legal regime, unsettling civil society groups, such as the Internet Freedom Foundation (IFF), that advocate for internet liberties.

Recently, the Bangalore City Police installed 4,100 out of 7,000 CCTVs equipped with facial recognition systems with the help of Honeywell Automation India as part of the Bengaluru Safe City project. Under the project, 30 ‘safety islands’, 8 drones, 400 body-worn cameras, and a mobile command centre are also at the disposal of the state law enforcement.

AIM reached out to Honeywell Automation India regarding the ethical concerns and reliability issues around the deployment, but is yet to receive a response.

Meanwhile, according to a study conducted by TechSci, India’s facial recognition market is projected to grow six-fold to reach USD 4.3 billion by 2024, putting it in close competition with China’s state surveillance system. Amnesty International has labelled Telangana as “the most surveilled state in the world” with 600,000 cameras, mostly concentrated in Hyderabad. 

New Delhi, Hyderabad, Chennai, and Indore are among the cities in India with the highest number of surveillance cameras, with New Delhi topping the list in terms of cameras per square mile. The Kolkata Police have also announced plans to install 2,500 CCTV cameras that will utilise AI to detect bikers without helmets and illegal parking.

On the other hand, the Bihar Police plan to utilise predictive policing and AI technology to combat the illicit liquor trade and other crimes. However, there is limited evidence that surveillance actually reduces crime. According to a report from the California Research Bureau, despite the increased prevalence of CCTV cameras, there is little conclusive evidence that they have led to a decrease in crime. Similarly, a study by Arizona State University found that the installation of surveillance cameras had no significant effect on criminal activity in a city.

Policing the AI Future Responsibly 

Maknoon Wani, a research associate for emerging tech and policy at the Council for Strategic and Defense Research, explained that these technologies are not perfect and that their deployment is being rushed. “There are several issues with deploying facial recognition systems and technologies in public places. Basically, these are still developing technologies and are not perfect yet. But the law enforcement agencies see these as sophisticated technologies that they can deploy to get better results and achieve efficiency. In reality, that doesn’t actually happen, because these technologies are inefficient. These facial recognition systems can give false positives and false negatives, and that’s a major issue,” he said.
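The false positives and false negatives Wani refers to can be made concrete. A minimal sketch of how these error rates are computed for a face-matching system, using hypothetical counts that are purely illustrative and not drawn from any real deployment:

```python
# Illustrative sketch: computing error rates for a face-matching system.
# All counts below are hypothetical, chosen only to show the arithmetic.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate).

    tp: true matches correctly flagged
    fp: non-matches wrongly flagged (innocent people flagged)
    tn: non-matches correctly ignored
    fn: true matches the system missed
    """
    fpr = fp / (fp + tn)  # share of non-matches wrongly flagged
    fnr = fn / (fn + tp)  # share of genuine matches missed
    return fpr, fnr

# Hypothetical results from 10,000 comparisons against a watchlist
fpr, fnr = error_rates(tp=90, fp=198, tn=9700, fn=12)
print(f"False positive rate: {fpr:.2%}")
print(f"False negative rate: {fnr:.2%}")
```

Even a seemingly small false-positive rate matters at scale: applied to millions of faces in a public deployment, it translates into large absolute numbers of people wrongly flagged.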

He added that the absence of laws governing these technologies could result in human rights violations and false incrimination. “All of these technologies are being developed and deployed without a legal regime which can offer safeguards. For example, if a person is arrested in a public place based on the fact that the facial recognition technology identified them as a criminal or having a criminal record, then they do not have a way to get a redressal.”

However, data from a report by the US government’s National Institute of Standards and Technology (NIST) suggests that the top facial recognition algorithms in the world are highly accurate, with marginal differences in false-positive and false-negative rates across demographic groups. In NIST’s tests, the top 150 algorithms were over 99% accurate across Black male, white male, Black female, and white female demographics.

The accuracy of and bias in these systems thus remain points of contention. Given India’s enthusiasm for AI in policing and law enforcement, the toolkit for responsible AI innovation in law enforcement should be considered and employed to address these issues, minimise bias, and prevent the trampling of human rights and individual privacy.

Shyam Nandan Upadhyay
Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.
