How bias creeps into large language models

According to DeepMind, unmodified LMs tend to assign high probabilities to exclusionary, biased, toxic, or sensitive utterances if such language is present in the training data.

Language models (LM) are optimised to mirror language systems. Therefore, it stands to reason that LMs might perpetuate stereotypes and biases hardwired into the natural language. When the training data is discriminatory, unfair, or toxic, optimisation leads to highly biased models.

Red flags

In 2019, researchers found racial bias in an algorithm used on over 200 million people in the US to predict the patients who need extra medical care. The system favoured whites over people of colour. The use of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm in US court systems projected double false positives for recidivism in black offenders (45%) as opposed to whites (23%). Amazon has scrapped its AI recruitment tool for its manifest sexist bias against women

According to DeepMind, unmodified LMs tend to assign high probabilities to exclusionary, biased, toxic, or sensitive utterances if such language is present in the training data.

Subscribe to our Newsletter

Join our editors every weekday evening as they steer you through the most significant news of the day, introduce you to fresh perspectives, and provide unexpected moments of joy
Your newsletter subscriptions are subject to AIM Privacy Policy and Terms and Conditions.

The express purpose of language modelling is to represent language from the training corpus accurately. Therefore, it’s important to redact and curate training data, fine-tune LMs to adjust weightings to avoid bias and implement checks to filter harmful language.

At present, the turnaround time for language models– from research to applications– are relatively short, making it harder for third parties to anticipate and mitigate risks. Therefore, the correction course should start at the research level to address the bias in language models and should be improved on with each iteration. 

LMs should be evaluated against normative performance thresholds. But, to determine what constitutes satisfactory performance for the LM to be dubbed ‘sufficiently’ safe and ethical before deploying in the real world is a challenge in itself. 

Key risk areas

DeepMind mentions Discrimination, Exclusion and Toxicity as top risk areas in large-scale language models. LMs can engender discrimination, representational, and material harm by perpetuating social biases and stereotypes. For eg, the name ‘Max’ is used for a ‘male’ or ‘families’ always means a father, mother and child. If LMs pick up on biased social cues, they tend to deny or burden identities that differ.

LMs run the risk of disseminating false or misleading information. Examples include bad legal or medical advice leading to unethical or illegal actions. 

While interacting with conversational agents or chatbots, users tend to overestimate the capabilities of the AI and use it in unsafe ways. In addition, LM-based conversational agents might also compromise users’ private information.

LMs are used in social engineering to spread fake news, drive disinformation campaigns, create fraud or scams at scale etc.

Wrapping up

Inarguably, LMs have largely benefited the world economy. However, their benefits and risks are unevenly distributed. Ethical AI and responsible AI are increasingly becoming part of the tech narrative. However, a lot of work needs to be done to incorporate AI ethics into language models. It’s also important to not cut corners for faster turnaround time at the expense of responsible AI. Moreover, the focus should not be solely on building better models but on looking at existing models and devising ways to mitigate their biases. 

Meeta Ramnani
Meeta’s interest lies in finding out real practical applications of technology. At AIM, she writes stories that question the new inventions and the need to develop them. She believes that technology has and will continue to change the world very fast and that it is no more ‘cool’ to be ‘old-school’. If people don’t update themselves with the technology, they will surely be left behind.

Download our Mobile App

MachineHack | AI Hackathons, Coding & Learning

Host Hackathons & Recruit Great Data Talent!

AIMResearch Pioneering advanced AI market research

With a decade of experience under our belt, we are transforming how businesses use AI & data-driven insights to succeed.

The Gold Standard for Recognizing Excellence in Data Science and Tech Workplaces

With Best Firm Certification, you can effortlessly delve into the minds of your employees, unveil invaluable perspectives, and gain distinguished acclaim for fostering an exceptional company culture.

AIM Leaders Council

World’s Biggest Community Exclusively For Senior Executives In Data Science And Analytics.

3 Ways to Join our Community

Telegram group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our Daily newsletter

Get our daily awesome stories & videos in your inbox
MOST POPULAR