In 2018, Microsoft launched face and emotion recognition features for its products. A year later, in 2019, the American Civil Liberties Union (ACLU) pressed forward with a lawsuit involving the facial recognition software Microsoft offered to government clients.
A week before, in October 2019, the civil rights organisation had sued the US Justice Department, the Federal Bureau of Investigation (FBI), and the Drug Enforcement Administration (DEA) for documents related to their use of surveillance technology. In a complaint filed in a Massachusetts federal court, the ACLU asked for various government records, including inquiries to companies, records of meetings about the piloting or testing of facial, voice, and gait recognition technology, requests for proposals, and licensing agreements.
Recently, Microsoft acknowledged some of these criticisms and announced plans to remove the features from its artificial intelligence service for detecting, analysing, and recognising faces. The features will stop being available to new users this week and will be phased out for existing users within the year.
The changes are part of Microsoft's push for tighter controls over its artificial intelligence products. After a two-year review, a team at Microsoft developed a "Responsible AI Standard," a 27-page document that sets out requirements for AI systems to ensure they will not harm society.
Existing customers have one year to apply for, and receive approval of, continued access to the facial recognition services based on their stated use cases.