
Facebook’s Recent Mishap Is A Grim Reminder Of Big Tech’s Bias Problem

The Facebook mishap is not a one-off incident. It is a fundamental problem that has been plaguing the AI and tech community at large.


Facebook is embroiled in yet another embarrassing gaffe after its AI mislabelled a video featuring Black men as 'primates'. As per the report, users who watched a June 27 video posted by the Daily Mail, a UK tabloid, received an automated prompt asking whether they wanted to keep seeing videos about primates. When the issue came to light, Facebook was quick to issue an apology, calling it an 'unacceptable error' and saying it was investigating the matter to prevent such behaviour from recurring.

A Facebook spokesperson said, “As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologise to anyone who may have seen these offensive recommendations.” Currently, Facebook has disabled the entire topic recommendation feature.

AI System Gone Wrong

Facebook has one of the largest repositories of user-uploaded images on which it trains its object recognition algorithms. The content displayed to a user is often tailored to their browsing preferences and viewing habits; as part of this system, Facebook sometimes asks users whether they would like to continue seeing posts under certain categories. 

Other big tech companies, too, have been embroiled in controversies over racial bias. In 2015, Google Photos labelled Black people as 'gorillas'. Like Facebook, Google was quick to issue an apology and said it would work to fix the issue immediately. However, a later report showed that Google simply censored labels such as 'gorilla', 'chimp', 'chimpanzee', and 'monkey' instead of fixing the underlying problem.

The Facebook and Google mishaps are not one-off incidents. They point to a fundamental problem that has been plaguing the AI and tech community at large. Research by the University of Maryland found that face detection services from big tech are severely flawed in easily detectable ways. Systems from companies like Amazon, Google, and Microsoft are more likely to fail on older and darker-skinned people than on their younger, lighter-skinned counterparts. The study also revealed that this bias is not limited to skin colour but extends to general physical appearance.

A paper by the University of Colorado showed facial recognition software systems from Amazon, Microsoft, and others could correctly identify cisgender men 95 per cent of the time but performed poorly when it came to trans people. Other studies have also shown that facial recognition technology is highly susceptible to a range of racial, ethnic, and gender biases.

Fundamental Problem

AI systems' bias problem is hardly a revelation. In the Netflix documentary Coded Bias, scientist Joy Buolamwini recounted her first-hand experience of such a flaw. As an MIT graduate student in 2015, she discovered that some facial analysis software could not detect her dark-skinned face until she wore a white mask. She said, "We can organise to protest this technology being used dangerously. When people's lives, livelihoods, and dignity are on the line, AI must be developed and deployed with care and oversight." Buolamwini went on to launch the Safe Face Pledge to raise awareness of, and mitigate, the harmful use of such technology.

Ousted Google AI researcher Timnit Gebru, too, has been vocal about big tech's bias problem, including that of her former employer. Gebru was allegedly targeted for using her position to raise concerns about race and gender inequality. She said, "If you look at who is getting prominence and paid to make decisions [about AI design and ethics] it is not black women . . . There were a number of people [at Google] who couldn't stand me."

The underlying point is that AI bias disproportionately affects the very people who are rarely in a position to develop the technology. Women and people of colour are grossly underrepresented in technology, and the undersampling of these groups in training data shapes AI, producing systems optimised for only a small slice of the world's population (often white cis-men). Less than 5 per cent of the employees at Google and Facebook are Black. The recent Facebook incident is a wake-up call for big tech to promote broader representation in the design and development of AI.

Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.