In January 2020, Robert Julian-Borchak Williams, a US citizen, was arrested by the Detroit Police Department. He was taken into custody on a shoplifting charge more than a year after the incident had taken place and interrogated as the prime suspect, as reported by The New York Times. According to the report, the police had used still images from a surveillance video, which were matched against a facial recognition database. As was later found, he was not the actual culprit.
Even though Mr Williams denied committing the theft, the police failed to do their due diligence, and he had to seek legal help to find his way out of the detention centre. This is believed to be the first known case anywhere in the world of a man being wrongly accused of a crime he did not commit because of a false-positive confirmation from facial recognition software, compounded by the gross negligence of the Detroit Police Department, which relied solely on the technology without collecting enough corroborating evidence.
The shoplifting happened in October 2018, after which a loss-prevention employee at the store reviewed and retrieved the security video, which was provided to the Detroit Police Department. In March 2019, a digital image examiner for the Michigan State Police uploaded a still image of the shoplifter from the video to the state's facial recognition database, and the system returned Mr Williams as a possible match. The store's loss-prevention contractor was then shown a photo lineup by the police. In January 2020, Mr Williams was arrested, arraigned on January 10, 2020, and released on a $1,000 bond.
The incident has brought to light why facial recognition systems are far from perfect and potentially dangerous, not only in terms of violating privacy but also in being misused by law enforcement, even unwittingly. Many factors contributed to what happened, chief among them the officers' failure to understand that facial recognition can generate false positives.
Facial recognition systems are often promoted as having near-100% accuracy, a misleading claim, since the supporting studies use much smaller sample sizes than large-scale applications would require. Because facial recognition is not entirely accurate, in many cases the system generates a list of potential matches, which a human operator must then review, and studies show that operators frequently fail to pick the correct match from the list. This reliance on manual review is itself a source of false positives. Something similar happened in this case, when the police asked the store's loss-prevention contractor to confirm a match generated from the video feed.
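The candidate-list behaviour described above can be sketched in a few lines. Many facial recognition systems compare a "probe" image against a gallery of enrolled faces using embedding vectors and a similarity score; the crucial point is that a ranked candidate list is always produced, even when the person is not in the gallery at all. The embeddings below are random placeholders, purely to illustrate the mechanism, not any vendor's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery of 10,000 enrolled face embeddings (128-D unit vectors).
# In a real system these would come from a face-embedding model.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def top_candidates(probe, gallery, k=5):
    """Return the indices and similarity scores of the k most similar gallery faces."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe              # cosine similarity against every enrolled face
    order = np.argsort(scores)[::-1][:k]  # highest-scoring candidates first
    return order, scores[order]

# A probe face that is NOT in the gallery still yields a ranked candidate list,
# which a human operator must then interpret.
probe = rng.normal(size=128)
idx, scores = top_candidates(probe, gallery)
print(idx, scores)
```

The sketch shows why operator judgment matters: the system never says "no match", only "here are the closest faces", and an operator who treats the top candidate as an identification converts a ranking into a false positive.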
Researchers Have Warned Against Facial Recognition Many Times
It should also be noted that Robert Julian-Borchak Williams is African American, and experts have observed that facial recognition is markedly less accurate for non-white people. It is widely debated whether facial recognition technology works less accurately on people of colour. Research by Joy Buolamwini (MIT Media Lab) and Timnit Gebru (Microsoft Research) found that the gender-classification error rate for women of colour across three commercial facial recognition systems ranged from 23.8% to 36%, while for lighter-skinned men it ranged from only 0.0% to 1.6%. Studies have shown the software is more likely to be inaccurate when applied to black individuals, a finding supported by the FBI's own research.
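To make the disparity concrete, the reported error-rate bounds can be applied to a hypothetical batch of images per group. The batch size of 10,000 is an illustrative assumption, not a figure from the study:

```python
# Illustrative arithmetic only: scaling the upper-bound error rates reported
# for the commercial systems studied to a hypothetical batch of images.
images_per_group = 10_000          # assumed batch size, for illustration
error_lighter_men = 0.016          # 1.6%, upper bound for lighter-skinned men
error_darker_women = 0.36          # 36%, upper bound for women of colour

errors_men = round(images_per_group * error_lighter_men)
errors_women = round(images_per_group * error_darker_women)
print(errors_men, errors_women)    # 160 vs 3600 misclassifications
```

At these rates, the same system would misclassify roughly 22 times as many images of women of colour as of lighter-skinned men, which is why a uniform "accuracy" figure can hide a severe disparity.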
Given the big margins of error in this technology, both legal experts and facial recognition software companies themselves have said that the software should supply only one part of a case, not evidence sufficient on its own to justify an arrest. Facial recognition technology is considered a flawed biometric, and in research led by Georgetown University, Claire Garvie concluded that "there's no consensus in the scientific community of a positive identification of somebody using facial recognition". According to the researchers, the government's use of facial recognition faces major challenges, from the low quality of search images to the fact that the systems currently in use have not been rigorously tested for accuracy and bias.
The prosecutor in this case released a statement saying that in 2019, when the Detroit Police Department asked for the adoption of its facial recognition policy, he declined and cited studies on the unreliability of the software, especially with regard to people of colour. He said that any case presented to the prosecution office that has used this technology must be reviewed by a supervisor and must have corroborative evidence beyond the technology itself. According to the statement, the present case took place prior to this policy, but the case should not have been issued based on the DPD investigation. The prosecutor apologised for the hardship caused to Mr Williams.
Coincidentally, big tech companies including Amazon, Microsoft and IBM recently announced they would stop or pause their facial recognition offerings for law enforcement, in solidarity with the protests against police brutality in the US and in support of black rights.
Even though big tech has for now stopped its offerings to law enforcement, that may not be enough. According to The New York Times, the main suppliers of facial recognition systems to US law enforcement are not the big tech companies but smaller companies such as Clearview AI, Cognitec, Rank One Computing, Vigilant Solutions, NEC, and other technology vendors.
The Detroit police incident has surfaced at a time of intense scrutiny of policing and surveillance tools, amid widespread protests following the killing of George Floyd in Minneapolis police custody in late May. While such incidents have been happening around the world, there is also growing recognition that advanced technologies can infringe on citizens' civil liberties.
Several places around the world have already acknowledged the technology's limitations and taken action on its use. A 2018 report by the civil liberties organisation Big Brother Watch revealed that two UK police forces, South Wales Police and the Metropolitan Police, were using live facial recognition in public spaces; in September 2019, South Wales Police's use of facial recognition was ruled lawful.
Recently, the Californian city of Santa Cruz banned the use of predictive policing technology, in a decision considered the first of its kind in the US and around the world. The city council voted unanimously to ban police use of predictive software, including artificial intelligence that analyses crime patterns and predicts where police should patrol, without explicit approval from the city's elected representatives.
In another development, a group of elected representatives announced they would introduce bicameral legislation to halt government use of biometric technology, including facial recognition tools. The bill, the Facial Recognition and Biometric Technology Moratorium Act, seeks to ban the use of facial recognition and other biometric surveillance technology by federal law enforcement agencies. The legislation would also make federal funding for state and local law enforcement contingent on the enactment of similar bans.