Facebook’s Recent Mishap Is A Grim Reminder Of Big Tech’s Bias Problem

The Facebook mishap is not a one-off incident. It is a fundamental problem that has been plaguing the AI and tech community at large.

Facebook is embroiled in yet another embarrassing gaffe after its AI labelled a video of Black men as ‘primates’. As per the report, users who watched a June 27 video posted by the Daily Mail, a UK tabloid, received an auto-prompt asking if they wanted to keep seeing videos about primates. When the issue came to light, Facebook was quick to issue an apology, calling it an ‘unacceptable error’ and saying the company is investigating the matter to prevent the behaviour from recurring. 

A Facebook spokesperson said, “As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologise to anyone who may have seen these offensive recommendations.” Currently, Facebook has disabled the entire topic recommendation feature.

AI System Gone Wrong

Facebook has one of the largest repositories of user-uploaded images on which it trains its object recognition algorithms. The content displayed to a user is often tailored to their browsing preferences and viewing habits; as part of this system, Facebook sometimes asks users whether they would like to continue seeing posts under certain categories. 


Other big tech companies, too, have been embroiled in controversies relating to racial bias. In 2015, Google Photos labelled Black people as ‘gorillas’. Like Facebook, Google was quick to issue an apology and said it would work to fix the issue immediately. However, a report showed that Google simply censored words like ‘gorilla’, ‘chimp’, ‘chimpanzee’, and ‘monkey’ instead of fixing the underlying problem.

The Facebook and Google mishaps are not one-off incidents. They point to a fundamental problem that has been plaguing the AI and tech community at large. Research by the University of Maryland found that face detection services from big tech are severely flawed in easily detectable ways. Systems from companies like Amazon, Google, and Microsoft are more likely to fail with older and darker-skinned people than with their younger and whiter counterparts. The study also revealed that this bias is not limited to skin colour but extends to general physical appearance.

A paper by the University of Colorado showed facial recognition software systems from Amazon, Microsoft, and others could correctly identify cisgender men 95 per cent of the time but performed poorly when it came to trans people. Other studies have also shown that facial recognition technology is highly susceptible to a range of racial, ethnic, and gender biases.

Fundamental Problem

AI systems’ bias problem is hardly a revelation. Scientist Joy Buolamwini spoke about her first-hand experience of such a flaw in the Netflix documentary Coded Bias. As an MIT graduate student in 2015, she discovered that some facial analysis software could not detect her dark skin until she wore a white mask. She said, “We can organise to protest this technology being used dangerously. When people’s lives, livelihoods, and dignity are on the line, AI must be developed and deployed with care and oversight.” Buolamwini went on to launch the Safe Face Pledge to raise awareness of, and mitigate, the harmful use of such technology.

Ousted Google AI researcher Timnit Gebru, too, has been speaking out against big tech’s bias problem, including that of her former employer. Allegedly, Gebru was targeted when she used her position to raise concerns about race and gender inequality. She said, “If you look at who is getting prominence and paid to make decisions [about AI design and ethics] it is not black women . . . There were a number of people [at Google] who couldn’t stand me.”

The underlying point is that AI bias affects the very people who are rarely in a position to develop the technology. Women and people of colour are grossly underrepresented in technology. The undersampling of these groups in training data shapes AI, producing technology optimised for only a small fraction of the world’s population (often white cis-men). Less than 5 per cent of the employees at Google and Facebook are Black. The recent Facebook incident is a wake-up call for big tech to promote broader representation in the design and development of AI.

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
