
Facebook’s Recent Mishap Is A Grim Reminder Of Big Tech’s Bias Problem

  • The Facebook mishap is not a one-off incident. It is a fundamental problem that has been plaguing the AI and tech community at large.

Facebook is embroiled in yet another embarrassing gaffe after its AI mislabelled a video featuring Black men with a ‘primates’ label. As per the report, users who watched a June 27 video posted by the Daily Mail, a UK tabloid, received an auto-prompt asking if they wanted to keep seeing videos about primates. When the issue came to light, Facebook was quick to issue an apology, calling it an ‘unacceptable error’ and saying the company is investigating the matter to prevent the behaviour from happening again. 

A Facebook spokesperson said, “As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologise to anyone who may have seen these offensive recommendations.” Currently, Facebook has disabled the entire topic recommendation feature.


AI System Gone Wrong

Facebook has one of the largest repositories of user-uploaded images on which it trains its object recognition algorithms. The content displayed to a user is often tailored to their browsing preferences and viewing habits; as part of this system, Facebook sometimes asks users whether they would like to continue seeing posts under certain categories. 

Other big tech companies, too, have been embroiled in controversies relating to racial bias. In 2015, Google Photos labelled Black people as ‘gorillas’. Like Facebook, Google was quick to issue an apology and said that it would work to fix the issue immediately. However, a report showed that Google simply censored words like ‘gorilla’, ‘chimp’, ‘chimpanzee’, and ‘monkey’ instead of fixing the underlying issue.

The Facebook and Google mishaps are not one-off incidents. They reflect a fundamental problem that has been plaguing the AI and tech community at large. Research by the University of Maryland found that face detection services from big tech are severely flawed in easily detectable ways. Services from companies like Amazon, Google, and Microsoft are more likely to fail with older and darker-skinned people as compared to their younger and lighter-skinned counterparts. The study also revealed that this bias is not limited to skin colour but extends to general physical appearance.

A paper by the University of Colorado showed facial recognition software systems from Amazon, Microsoft, and others could correctly identify cisgender men 95 per cent of the time but performed poorly when it came to trans people. Other studies have also shown that facial recognition technology is highly susceptible to a range of racial, ethnic, and gender biases.


Fundamental Problem

AI systems’ bias problem is hardly a revelation. In the Netflix documentary Coded Bias, scientist Joy Buolamwini recounted her first-hand experience of such a flaw. As an MIT graduate student in 2015, she discovered that some facial analysis software could not detect her dark skin until she wore a white mask. She said, “We can organise to protest this technology being used dangerously. When people’s lives, livelihoods, and dignity are on the line, AI must be developed and deployed with care and oversight.” Soon after, Buolamwini launched the Safe Face Pledge to raise awareness of, and help mitigate, the abuse of such technology.

Ousted Google AI researcher Timnit Gebru, too, has been vocally speaking out against big tech’s bias problem, including that of her former employer. Allegedly, Gebru was targeted when she used her position to raise concerns about race and gender inequality. She said, “If you look at who is getting prominence and paid to make decisions [about AI design and ethics] it is not black women . . . There were a number of people [at Google] who couldn’t stand me.”

The underlying point is that issues of AI bias affect the people who are rarely in a position to develop the technology. There is a gross underrepresentation of women and people of colour in technology. The undersampling of these groups in training data shapes AI, leading to technology optimised for only a small portion of the world’s population (often white cis men). Less than 5 per cent of the employees at Google and Facebook are Black. The recent Facebook incident is a wake-up call for big tech to promote broader representation in the design and development of AI.


Copyright Analytics India Magazine Pvt Ltd
