For the longest time, we have worried about automation without worrying enough about gatekeeping. Tech activists have long been quietly pushing back against the use of biometric and facial recognition technology. But not anymore. The protests have been loud and public since early last year. A movement has begun, and big tech companies are slowly but surely moving towards becoming more ethically sound.
IBM announced last year that it would no longer offer facial recognition products, citing the potential for abuse and misuse. Following that, Microsoft refused to sell its facial recognition technology to the police until federal legislation governing its use is in place.
A little late to the party, Facebook announced through a blog post in the first week of November this year that it was shutting down its Face Recognition System. Users who had opted into the feature will no longer be automatically recognised in photos and videos by the company's tech. Additionally, the social media giant deleted the individual facial recognition templates of more than a billion people.
However, the parent company Meta has a different plan altogether. Meta spokesperson Jason Grosse, in an interview with media outlet Recode, said that the company is still exploring ways to integrate biometrics into its new metaverse business. Additionally, Meta also plans to keep DeepFace, the facial recognition model behind its photo-tagging feature, intact.
Ever since the Cambridge Analytica incident, Facebook has been at the centre of controversies. However, the social media giant faced its greatest crisis yet after internal documents were leaked to US regulators, lawmakers and reporters, revealing how much Facebook's executives knew about the harm caused by the platform.
Earlier this year, Facebook AI introduced Casual Conversations, a new dataset to measure the fairness of its AI models across the four dimensions of age, gender, apparent skin tone and lighting, and ultimately make its facial recognition models more inclusive. Despite these attempts, controversies seemed to follow Facebook.
Recently, the social media giant made headlines with the news that it was shutting down the facial recognition technology it had launched in 2010 for its photo-tagging feature. It went a step further, deleting the facial recognition templates of more than a billion users from its databases.
While the change was welcomed by tech advocates, many were of the opinion that it was an attempt by Facebook to shift the conversation away from the ongoing controversy and towards this welcome change.
Although facial recognition will no longer be available on Meta's other platforms, such as Instagram and its Portal devices, this 'reform' does not extend to the metaverse. The company says it will use the technology in its virtual universe to explore potential applications and positive use cases while maintaining transparency, privacy and control.
Users who interact in the virtual reality (VR) world of the metaverse are likely to share biometric data, including eye movements, facial scans, pulse, voiceprints and even body tracking. Such data poses a serious threat as far as privacy is concerned. And can users trust Meta with such personal data, again?
Facial recognition is considered one of the most dangerous technologies ever created. Despite knowing this, Facebook continued using it for more than a decade because it suited its business model. Furthermore, the tech giant renamed its parent company Meta, in what experts believe is an attempt to distance itself from the controversies associated with the social media brand. This move by Meta and its social media platform is nothing short of hypocrisy, and by the looks of it, it seems like yet another failed attempt at becoming an 'ethical' tech company.