AIM Banners_978 x 90

How bias creeps into large language models

According to DeepMind, unmodified LMs tend to assign high probabilities to exclusionary, biased, toxic, or sensitive utterances if such language is present in the training data.
Language models (LM) are optimised to mirror language systems. Therefore, it stands to reason that LMs might perpetuate stereotypes and biases hardwired into the natural language. When the training data is discriminatory, unfair, or toxic, optimisation leads to highly biased models. Red flags In 2019, researchers found racial bias in an algorithm used on over 200 million people in the US to predict the patients who need extra medical care. The system favoured whites over people of colour. The use of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm in US court systems projected double false positives for recidivism in black offenders (45%) as opposed to whites (23%). Amazon has scrapped its AI recruitment tool for its manifest sexist bias against
Subscribe or log in to Continue Reading

Uncompromising innovation. Timeless influence. Your support powers the future of independent tech journalism.

Already have an account? Sign In.

📣 Want to advertise in AIM? Book here

Picture of Meeta Ramnani
Meeta Ramnani
Meeta’s interest lies in finding out real practical applications of technology. At AIM, she writes stories that question the new inventions and the need to develop them. She believes that technology has and will continue to change the world very fast and that it is no more ‘cool’ to be ‘old-school’. If people don’t update themselves with the technology, they will surely be left behind.
Related Posts
AIM Print and TV
Don’t Miss the Next Big Shift in AI.
Get one year subscription for ₹5999
Download the easiest way to
stay informed