Facial recognition software has been the subject of debate for a long time now. Despite the controversy, law enforcement agencies have been using such AI-powered software to catch criminals all around the world, particularly in large nations with less strict privacy laws. Its use is prevalent despite the fact that the software may not work accurately on ethnic minorities, young people, and even women. One company dominating the news headlines these days is Clearview AI, founded by Australian entrepreneur Hoan Ton-That.
Although Clearview AI hasn’t devised a groundbreaking facial recognition app, what it sells is clearly useful to law enforcement agencies. The startup has scraped billions of personal images of people around the world from the web and social media without their permission and stored them in a database. It then uses facial recognition technology to compare photos of unknown individuals against that database of photographs.
So, you take a photo of an individual, upload it, and get to see the public presence of that individual, along with links to where those photos appeared. Clearview says it has scraped from Facebook, YouTube, Venmo and millions of other websites. To think that public data has not been scraped and indexed by more capable entities feels naive, yet a large number of investigative agencies in the US have become customers of Clearview AI, and the fact that the technology isn’t groundbreaking doesn’t mean the application isn’t a concern to society. Over 300 agencies in the US are using Clearview’s software, including agencies at the federal, state, and local levels.
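Conceptually, the search described above works by converting each face into a numeric vector (an "embedding") and comparing the uploaded photo's vector against the indexed ones. Clearview's actual pipeline is proprietary; the sketch below is only an illustrative toy, with hand-made vectors standing in for the output of a real face-embedding neural network, and the URLs and threshold are invented for the example.

```python
import math

# Toy sketch of an embedding-based face search (NOT Clearview's actual
# system): each indexed face is a vector keyed by the page it appeared on,
# and a probe photo matches when cosine similarity exceeds a threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe, database, threshold=0.9):
    """Return (url, score) pairs for indexed faces above the threshold."""
    scored = [(url, cosine_similarity(probe, emb))
              for url, emb in database.items()]
    return sorted(((u, s) for u, s in scored if s >= threshold),
                  key=lambda pair: pair[1], reverse=True)

# Hand-made toy "embeddings"; a real system derives these from images.
database = {
    "example.com/profile/alice": [0.9, 0.1, 0.4],
    "example.com/profile/bob":   [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.41]  # embedding of the uploaded photo

print(search(probe, database))  # only the close match surfaces
```

The threshold is what keeps false positives rare: a probe either resembles an indexed face closely enough to clear it, or the search returns nothing.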
According to Clearview, it has the most accurate facial identification software globally, with a 98.6% accuracy rate. But the company has said in a document that this does not mean users will get matches for 98.6% of searches; rather, they will almost never get a false positive. Users either get a correct match or no results, and the hit rate is between 30-60%. The company also claims a search should work even if a person grows a beard, wears glasses, or appears in bad lighting.
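The distinction the company draws is between precision (how often a returned match is correct) and hit rate (how often a search returns anything at all). A quick back-of-the-envelope calculation makes it concrete; the 45% figure below is an assumed midpoint of the company's stated 30-60% range, not a reported number.

```python
# Illustrative arithmetic for the claim "almost never a false positive,
# but a 30-60% hit rate". All numbers here are assumptions for the sketch.
searches = 1000
hit_rate = 0.45  # assumed midpoint of the reported 30-60% range

matches = int(searches * hit_rate)
no_result = searches - matches

print(f"{matches} of {searches} searches return a match")
print(f"{no_result} of {searches} searches return nothing")
```

So even with near-perfect precision on the matches it does return, roughly half of all searches can come back empty, which is how a "98.6% accuracy" claim coexists with a 30-60% hit rate.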
The Backlash Began In 2020
Clearview AI faced a lot of problems in 2020 alone. At the beginning of the year, Twitter sent a cease and desist letter to the startup and asked it to delete all collected data, followed by similar requests from YouTube and Facebook. New Jersey Attorney General Gurbir Grewal sent a cease and desist letter to Clearview AI after the company included his photo in a promotional video. Grewal then prohibited the use of the technology in all counties of New Jersey, saying the state needed a better understanding of facial recognition technology before using such products to solve criminal cases.
In February 2020, multiple sources reported that Clearview AI had experienced a data breach, exposing its list of customers. Clearview’s attorney, Tor Ekeland, stated that the flaw had been patched. In April 2020, TechCrunch reported that a security firm called SpiderSilk had found Clearview’s source code repositories exposed due to a misconfigured user security setting. Secret keys and credentials, including cloud storage and Slack tokens, were exposed. Separately, Gizmodo discovered Clearview’s Android application package in an allegedly unsecured Amazon S3 bucket, even though it was supposed to be privately accessible only to clients. The company had earlier stated that it had completed multiple independent security reviews and has a dedicated cybersecurity team which monitors and protects the security of user accounts and data.
Clearview had earlier said it provides access to its database only to law enforcement agencies for solving criminal cases. But after the data breach, it emerged that the startup had also sold software subscriptions to private corporations, which appeared on its client list. The breach not only exposed the company’s dealings but also revealed weak cybersecurity for a firm holding such sensitive personal data.
After the startup announced its plan to expand outside the US, a media report found that it was planning to sell its technology to “authoritarian regimes” in Asia and South America, as well as to European nations bound by the GDPR. In fact, the European Data Protection Board recently stated that the use of a service like Clearview AI by law enforcement agencies in the European Union would, as it stands, likely not be consistent with the European data protection regulation. The British and Australian data protection authorities have now started an investigation into the facial recognition firm.
“Clearview’s technology is a real nightmare scenario. They have amassed a database of more than 3 billion faceprints of people from around this country without consent, without telling us. And they are using it to market an incredibly dangerous tool to police, to private corporations, and private individuals to track and follow and surveil us,” stated Nathan Freed Wessler, Staff Attorney at the American Civil Liberties Union (ACLU) Speech, Privacy and Technology Project, in an interview. The ACLU has also sued the facial recognition startup for violation of the Illinois Biometric Information Privacy Act (BIPA), stating the firm unlawfully amassed and stored data on Illinois citizens without their knowledge or consent and then profited by selling its software to law enforcement and private corporations.
Clearview said it was only providing access to law enforcement in the US and Canada, but it was found that the company is selling its software to law enforcement and police organisations in 26 countries outside the US, including Lithuania, Malta, the Netherlands, Norway, Portugal, Belgium, Italy, Latvia, Denmark, Finland, France, Ireland, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom. The client list includes many private corporations as well. “We’re in a profound moment of protest against racial injustice and police abuses. The last thing we need is this dangerous technology wielded as a tool to identify and track us around the world,” said Wessler in a video.
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.