Microsoft’s M12 venture fund recently released a statement saying that it is divesting its shareholding in AnyVision, an Israeli startup that works on facial recognition technology. Microsoft made the decision after an audit led by Eric Holder of Covington & Burling LLP. The former US attorney general and his team confirmed that AnyVision’s technology was used at border crossing checkpoints between Israel and the West Bank.
How Did This Unfold?
The AnyVision controversy was first reported by NBC News back in October 2019. According to the reports, AnyVision’s facial recognition was used to surveil Palestinians throughout the West Bank. AnyVision’s CEO Eylon Etshtein initially threatened to sue NBC News, though the company later changed its stance.
Ever since the news of the surveillance of Palestinians broke, Microsoft came under fire for collaborating with AnyVision. This led Microsoft’s M12 to investigate the allegations further, and after the thorough review by Eric Holder, Microsoft decided to withdraw its investment in AnyVision. M12 and AnyVision have each issued statements on the matter, explaining why terminating the relationship is in the best interest of both parties.
“Following a thorough review, Covington has concluded its audit. Based on the evidence reviewed, Covington confirmed that AnyVision technology is used in border crossing checkpoints between Israel and the West Bank.”
— Covington & Burling LLP
Microsoft’s withdrawal signals an interesting stance among tech companies. Are the top companies that deploy deep learning algorithms really against privacy invasion? And if so, do they refrain from it in their own products? Or is this just a deal gone wrong?
AnyVision, in particular, has been vocal in its support for privacy regulation, both in the US and under Europe’s GDPR. Last year, when the US Senate introduced a bill against the commercial use of facial recognition technology, AnyVision spoke in favor of the bill.
“We applaud the US government for taking a proactive role in regulating facial recognition technology,” said AnyVision in their press release.
AnyVision also claimed that its technology was already compliant with the US Senate’s proposals. The company assured that it does not capture images; instead, the data is rendered as mathematical vectors produced by deep learning methods. And because of the cryptographic nature of these vectors, the risk of data theft is considered low as well.
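To make the “mathematical vectors” claim concrete, here is a minimal sketch of how a typical face recognition pipeline stores and matches faces, using the open-source face_recognition library. This is illustrative only; AnyVision’s proprietary pipeline is not public, and the file names here are placeholders.

```python
import face_recognition  # open-source library: pip install face_recognition
import numpy as np

# Reduce a face to a 128-dimensional embedding. Systems built this way
# store only this vector, not the photograph itself.
image = face_recognition.load_image_file("person.jpg")
encodings = face_recognition.face_encodings(image)  # one vector per detected face
template = encodings[0]  # shape (128,), dtype float64

# Matching compares vector distances rather than raw pixels.
probe_image = face_recognition.load_image_file("checkpoint_frame.jpg")
probe = face_recognition.face_encodings(probe_image)[0]

distance = np.linalg.norm(template - probe)  # Euclidean distance in embedding space
print("match" if distance < 0.6 else "no match")  # 0.6 is the library's usual threshold
```

Matching in embedding space is what lets vendors argue that raw images need not be retained: the stored vector supports comparison but is not itself a photograph.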
Should Facial Recognition Be Banned?
The debate over the commercial use of facial recognition technology is not new. Given that the technology is intrinsically Orwellian, its deployment has received mixed reviews so far.
Countries like China have already been using the technology at scale, including for mass surveillance. The US, by contrast, has primarily been trying to regulate it and establish an ecosystem where the technology can be used without exploiting the public.
For example, city officials in San Francisco and Oakland have already banned police from using facial recognition technology.
In more recent news, Washington Governor Jay Inslee today signed landmark facial recognition legislation devoted exclusively to putting guardrails in place for the use of facial recognition technology.
As per the law, a state or local government agency can deploy facial recognition only if the technology provider makes available an application programming interface (API) that enables “legitimate, independent and reasonable tests” for “accuracy and unfair performance differences across distinct subpopulations.” It also obliges vendors to disclose any complaints regarding bias.
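The law does not spell out what such a test looks like, but an independent audit run against a vendor-exposed API might resemble the sketch below. The verify stub and the benchmark structure are hypothetical stand-ins, not a real vendor interface.

```python
from collections import defaultdict

def verify(image_a: str, image_b: str) -> bool:
    """Placeholder for the vendor-exposed API call the law mandates.

    A real audit would call the provider's verification endpoint here;
    this stub is a hypothetical stand-in, not an actual vendor interface.
    """
    raise NotImplementedError("wire this to the vendor's API")

# Hypothetical labeled benchmark: image pairs with ground truth and subgroup labels.
benchmark = [
    ("a1.jpg", "b1.jpg", True,  "group_1"),
    ("a2.jpg", "b2.jpg", False, "group_2"),
    # ... thousands of pairs spanning distinct subpopulations
]

stats = defaultdict(lambda: {"fm": 0, "fnm": 0, "n": 0})
for img_a, img_b, same_person, subgroup in benchmark:
    predicted = verify(img_a, img_b)
    s = stats[subgroup]
    s["n"] += 1
    if predicted and not same_person:
        s["fm"] += 1   # false match: strangers accepted as the same person
    elif not predicted and same_person:
        s["fnm"] += 1  # false non-match: the same person rejected

# "Unfair performance differences" surface as diverging error rates per group.
for subgroup, s in stats.items():
    print(subgroup, "FMR:", s["fm"] / s["n"], "FNMR:", s["fnm"] / s["n"])
```

This is broadly how NIST’s FRVT demographic evaluations report error rates across groups: the same pair-matching task, scored separately per subpopulation.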
Once the information is digitized, it can easily cross national borders. It may even be cheaper for a vendor to host the data in an overseas cloud, which creates a jurisdictional issue. Now imagine that, instead of being sold to the end user, the data is stolen and resold. Who is responsible then?
That said, everyone knows by now that this technology is not going away, and we can only make laws and policies to protect people. It has legitimate uses, too: facial recognition can help identify deepfakes, themselves a product of AI, and law enforcement agencies can use it to identify offenders at train stations, festivals and other public places.
Facial recognition for safety, the public good and better governance looks good on paper, but where does it end? Where do we draw the line, and who decides? Things get even worse when we look beyond governance: these deep learning models are notorious black boxes, and we often cannot say why some predictions turn out the way they do.
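There are model-agnostic ways to probe such a black box. One common technique is occlusion sensitivity: mask regions of the input and watch how the output score moves. The score function below is a stand-in for any face scoring model, not a specific product; a minimal sketch:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=16):
    """Probe a black-box scorer by masking square patches of the input.

    image: HxWx3 float array; score_fn maps an image to a scalar confidence.
    Returns a grid showing how much masking each region moves the score.
    """
    h, w, _ = image.shape
    baseline = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = baseline - score_fn(occluded)
    return heat  # large values mark regions the model leans on heavily

# Dummy scorer for illustration: mean brightness of the upper half of the frame.
dummy_score = lambda img: float(img[: img.shape[0] // 2].mean())
print(occlusion_map(np.random.rand(64, 64, 3), dummy_score, patch=16))
```

Probes like this reveal what a model attends to, but they still do not explain its internal reasoning, which is precisely the black-box problem.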
Moreover, there is a massive issue with the fairness of these algorithms. Since the models are only as good as their data, some communities are especially prone to exploitation, and a wrongful conviction can jeopardize an individual’s life. Traditional law enforcement is not flawless either, but when technology replaces humans, it should fill the gaps, not make things worse.