Recently, an RTI filed by the Internet Freedom Foundation (IFF) revealed that the Delhi Police is using facial recognition technology (FRT) to nab rioters in the capital city.
This has caused an uproar, with many members of civil society raising concerns and calling the Delhi Police's use of FRT 'unethical' in the absence of a Data Protection Act in the country. Their argument is that national security should not come at the cost of privacy.
Technology such as FRT has long been controversial, and authorities leveraging it is a genuine concern.
Why is Delhi Police using FRT?
The RTI filed by IFF revealed that the procurement of the FRT by the Delhi Police was authorised as per a 2018 direction of the Delhi High Court in Sadhan Haldar v NCT of Delhi. Delhi Police wanted to use the tech to find missing children in the capital.
Later that year, it was revealed that the tech was not reliable as its accuracy was just 2 per cent.
However, in 2020, in an RTI response, Delhi Police revealed that they are using the FRT tech procured in 2018 in their police investigations.
Responding to the RTI filed by the IFF, the Delhi Police revealed that they use a threshold of 80 per cent: a result with 80 per cent or higher similarity is treated as a positive match.
If the score falls below 80 per cent, the result is treated as a false positive, which means the person will still remain on the police radar and may be investigated further.
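The thresholding the Delhi Police describes can be sketched in code. The snippet below is a minimal, hypothetical illustration of how similarity-threshold matching works in general; it assumes face images have already been converted to numerical embeddings and uses cosine similarity, a common choice in face-recognition systems, though the RTI response does not specify which metric or pipeline the Delhi Police actually uses.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.80  # the 80 per cent cut-off reported in the RTI response


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def classify_match(probe: np.ndarray, gallery: dict) -> tuple:
    """Compare a probe embedding against a gallery of known faces.

    Returns the best-matching identity, its similarity score, and whether
    the score clears the threshold ("positive") or not ("below threshold").
    """
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    label = "positive" if best_score >= SIMILARITY_THRESHOLD else "below threshold"
    return best_id, best_score, label
```

Note that even a "below threshold" result still names a best match, which mirrors the concern that sub-80 per cent matches keep a person on the police radar rather than ruling them out.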
“The accuracy of the results in facial recognition depends on three parameters in every questionable photograph: light conditions, distance and the angle of the face,” the Delhi Police said in the RTI response.
“The database under which the Delhi Police trained the FRT was taken from the dossier and the convict photographs that the police maintained under sections three and four of the now-repealed Identification of Prisoners Act, 1920,” said Gyan P Tripathi, Policy Trainee at Internet Freedom Foundation.
The FRT is being used in over 750 cases relating specifically to the northeast Delhi riots, which took place in February 2020. The tech is also being used by the Delhi Police to identify rioters in the 2021 Red Fort violence, which took place amid the farmers’ protest, and in the recent Jahangirpuri violence, the RTI revealed.
Why is it a concern?
First of all, it is unclear why the threshold has been set at 80 per cent. Secondly, there is no definitive jurisprudence to back the Delhi Police’s assertion that an above-80 per cent match is sufficient to treat a result as correct.
Tripathi also raises the concern that even in cases where the similarity is below 80 per cent, the Delhi Police will not rule out the suspect altogether. In such cases, someone with similar facial characteristics, such as a sibling or another family member, could be falsely arrested.
In the absence of a data protection law in the country, there are no grounds to stop authorities from misusing such technologies, nor to hold them accountable. There are also no clear structures or regulations in place that define how authorities should use these technologies.
Moreover, data must be used only for the specific purpose for which it was collected, and it must be deleted from time to time. That does not seem to be the case here, and claiming such tech would never be misused is almost utopian.
In fact, last year, Delhi Police revealed that over 400 of its personnel were placed under suspension, and another 1,325 were punished for misconduct in 2020.
The reliability of FRT is dicey; however, so is the reliability of the authorities. A controversial technology such as FRT at their disposal should concern every citizen in the country.
Another concern associated with the use of FRT is that it could be used for surveillance. Delhi already boasts the highest number of CCTVs per square mile among major cities of the world.
“The increasing use of FRT by the police in India is set to expand further over the next few years. However, FRT is not an error-free technology. If the training database of FRT has an over-representation of certain types of faces, the technology tends to be better at identifying such faces.
“Even if it does not have a training bias, the technology is rarely completely accurate and can easily misidentify faces. This means that there are chances of innocent people being wrongly identified as criminals or suspects,” the Vidhi Centre for Legal Policy said in a blog post.
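The misidentification risk Vidhi describes is partly a matter of simple base-rate arithmetic: even a tiny per-comparison false-match rate adds up when one probe image is compared against a large gallery. The figures below are hypothetical, chosen only to illustrate the effect; they are not Delhi Police or NCRB statistics.

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not official figures).
# If each non-matching face has a small chance of scoring above the threshold,
# a large gallery still yields many false hits per probe image.

false_match_rate = 0.001   # assume 0.1% of non-matches clear the threshold
gallery_size = 100_000     # assume 100,000 enrolled faces

expected_false_hits = false_match_rate * gallery_size
print(round(expected_false_hits))  # 100
```

In other words, under these assumed numbers a single probe photo could implicate on the order of a hundred innocent people, which is why the choice of threshold, and what is done with sub-threshold matches, matters so much.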
Jai Vipra, who authored the blog post, said that the use of FRT by authorities could aggravate the historical systemic bias of policing against minorities in India.
Technologies such as FRT in the hands of authorities could pose an inherent threat to the citizen’s privacy and information security. Last year, the European Parliament voted in favour of banning FRT by law enforcement agencies.
Across the globe, concerns are being raised about the use of these technologies. In fact, in many jurisdictions, progressive steps are being taken to dismiss or minimise the use of FRT.
“When it comes to FRT, there are no best cases because it is a type of dual-use technology. Even if authorities claim it is being used for security purposes and is being touted to be used in a better way, ultimately, authorities have an incentive to repurpose it and use it in their investigation. It gets tricky because facial recognition also depends on metrics such as illumination, angle, face and other facial features,” Tripathi said.
The use of FRT by authorities also adds to the concern of surveillance by the state. The government has often been accused of spying on its citizens.
“There are public statements made by the Minister for Home Affairs that they’ve used the driving licence and other government-issued identity cards to enrich the facial recognition database. All of this should be stopped,” Tripathi said.
Besides a data protection law, there is also a pertinent need for surveillance reforms. “The way surveillance is carried out by law enforcement agencies and police departments is very opaque, and there is also little accountability,” Tripathi said.
Last year, a collaborative investigative journalism initiative undertaken by 17 media organisations globally alleged that the Indian government was using the Pegasus spyware to spy on political strategists, journalists, activists, members of civil society and human rights groups and also notable members of the opposition party.
There is a growing need for different stakeholders, from civil society and members of the opposition to human rights advocates, to come together and demand a ban on controversial technology such as FRT. However, what’s been done so far is not enough.
Earlier this month, on August 17, 2022, Home Minister Amit Shah inaugurated the National Automated Fingerprint Identification System (NAFIS) at the National Security Strategies Conference held in New Delhi.
This builds on the Fingerprint Analysis and Criminal Tracing System (FACTS) 5.0, which was discontinued in 2017. Reports indicate that NAFIS will store the fingerprints of everyone arrested by the authorities; each person will be assigned a 10-digit number, and the data will be kept for their lifetime.
Now, this is also problematic, according to Tripathi. “Firstly, we do not have a data protection law. Secondly, if they say that they’ll keep it for the lifetime of an individual, how do you determine whether a person is dead or alive? So there are fuzzy data retention storage and processing practices which are not really clear. This usually gives them leverage and discretion in how they keep this data,” Tripathi concluded.