
Here’s How To Fight Prejudice In Artificial Intelligence

Artificial intelligence has made its way into industries such as healthcare, finance, insurance and law enforcement, but it still has problems to overcome, and the one in the news almost every other day is prejudice in artificial intelligence. In fact, Google Trends shows a 300% increase in interest in terms related to AI bias since 2016. AI bias occurs when a machine learning-based data analytics system discriminates against a particular group of people.

There are many types of biases, but what exactly is this prejudice in AI algorithms? And how does AI pick it up?

How Does This Prejudice Happen?

Prejudice in AI happens when social stereotypes heavily influence the training data used by developers. AI models learn, but not by themselves: the algorithms need data to learn from, and generally, the more data, the better. Too often, sheer volume of data becomes the only requirement, and developers concentrate on training the algorithm with vast amounts of data without paying any attention to how the AI system is actually using that data to learn.
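As a minimal sketch of this failure mode (hypothetical data; scikit-learn and NumPy assumed), the toy example below trains a classifier on historical hiring decisions that already discounted one group. The model faithfully learns that discount along with the genuine skill signal:

```python
# A toy illustration: a model trained purely on historical outcomes
# reproduces the bias hidden in those outcomes. All data is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)     # 0 = male, 1 = female (toy encoding)
skill = rng.normal(0, 1, n)        # the signal we actually want to learn
# Historical labels: past decision-makers discounted one group.
hired = ((skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
print(model.coef_)  # the gender coefficient comes out strongly negative
```

Nothing in the training loop "goes wrong" here; the model is simply an accurate mirror of the prejudiced labels it was given.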

Societal bias in AI systems is challenging to identify, because on the surface one isn't going to find anything wrong. Everything seems fine with the algorithm, but the output can turn out to be something unexpected. For example, a 2014 study of Google Ads showed that ads related to high-paying jobs were shown significantly less often when the user's gender was set to 'female'.

So, who is at fault here? It's not the algorithm; it's the kind of data used for training, chosen without considering the outcomes the algorithm would produce. If Google monitored the activity of advertisers when ads are posted, this problem might not occur; it is the absence of such control over the advertisements that creates the prejudice.
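What such monitoring could look like in practice (a sketch with made-up counts, not Google's actual process) is a routine comparison of ad-serving rates across groups, for instance using the widely cited four-fifths rule:

```python
# A minimal audit sketch over a hypothetical ad-serving log: compare how
# often high-paying-job ads were shown per gender and apply the "80% rule".
def disparate_impact(log):
    rates = {g: shown / total for g, (shown, total) in log.items()}
    worst, best = min(rates.values()), max(rates.values())
    return rates, worst / best  # a ratio below 0.8 is a common red flag

log = {"female": (1852, 10000), "male": (4532, 10000)}  # made-up counts
rates, ratio = disparate_impact(log)
print(rates)   # {'female': 0.1852, 'male': 0.4532}
print(ratio)   # ~0.41, far below the 0.8 threshold
```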

Fighting The Prejudice In Artificial Intelligence

Once one is well aware of what kind of bias is being dealt with, the solutions are not far off. In this case, the answer lies not inside the algorithm, but inside human behaviour. The algorithm not only behaves abnormally because of incomplete data; it also produces output aligned with whatever the data scientists or developers themselves assume. In August 2019, for example, Google's hate speech detector, created in 2016, was found to be biased against dark-skinned people.

Last year, at Rising, Analytics India Magazine's Women in Analytics & AI conference, Smitha Ganesh of Thoughtworks gave an extensive talk about the various types of biases and prejudices that AI holds and the ways to counter them.

Smitha Ganesh believes that these biases in AI algorithms are hard to fix, and that the data scientists who come up with these algorithms should build them without any bias in mind.

“Is it hard to fix? Yes, it is hard to fix, because there are a lot of unknowns that need to be fixed and the bias introduction is not obvious,” says Smitha Ganesh.

During the talk, she gave many examples of biases in AI, such as Amazon's hiring tool. Amazon built the tool in 2014 to review resumes and mechanise the search for top talent for the company, but in 2015 it found that the system was not rating candidates in a gender-neutral way.

“Because women were not well represented, it was penalising the women candidates. After they got to know that it was biased, gender was taken out of the system,” says Smitha.

She further detailed that even when gender was taken out of the system, some proxies remained, and the system was still biased. “If the person was coming from a college which had ‘women’ in its name, or was in a club which was very women-specific and it was mentioned in their resume, those cues were caught and still the system was biased.”
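This proxy effect is easy to reproduce. In the toy sketch below (hypothetical features; scikit-learn and NumPy assumed), the gender column is removed entirely, yet a correlated feature such as membership in a women-specific club lets the model rediscover and penalise it:

```python
# Sketch of the proxy problem with made-up resume features: gender is
# dropped from the inputs, but a correlated feature leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, n)                       # never shown to the model
womens_club = (gender == 1) & (rng.random(n) < 0.4)  # proxy: mostly female members
skill = rng.normal(0, 1, n)
hired = ((skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([womens_club.astype(int), skill])  # gender column removed
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy's coefficient is negative: bias survives removal
```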

Collaborating With Other Departments

When it comes to prejudice in algorithms, the first thing to keep in mind is that countering it requires a multi-disciplinary approach. A multi-disciplinary approach allows the training to be reviewed not only by technology experts but also from the human resource perspective, so that whatever training the algorithm is put through, its possible outcomes can be debated among them.

Monitoring The Type Of Data

People who know about AI bias know about COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which judges in some US states use in their legal system. The program was designed to predict the probability of a criminal re-offending based on the criminal's history. But, courtesy of the type of data fed to it, COMPAS incorrectly flagged people from the African-American community as more likely to repeat a criminal offence. This is not just a problem with the algorithm, but a problem of ethics too.
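A basic form of the audit that exposes this kind of skew (a toy sketch with made-up predictions, not the real COMPAS data) is to break the model's error rates down by group; the central criticism of COMPAS was precisely that false positive rates differed sharply between groups:

```python
# Compare false positive rates per group on toy (prediction, label) pairs.
def false_positive_rate(preds, labels):
    false_pos = sum(p and not y for p, y in zip(preds, labels))
    negatives = sum(not y for y in labels)
    return false_pos / negatives

# Hypothetical (predicted_reoffend, did_reoffend) pairs for two groups.
groups = {
    "group_a": ([1, 1, 0, 1, 0, 1], [0, 1, 0, 0, 0, 1]),
    "group_b": ([0, 1, 0, 0, 0, 1], [0, 1, 0, 0, 0, 1]),
}
for name, (preds, labels) in groups.items():
    print(name, false_positive_rate(preds, labels))
# group_a 0.5, group_b 0.0: similar overall accuracy can hide very
# unequal false positive rates across groups.
```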

Outlook

The prejudices in artificial intelligence systems are difficult to erase, just like the prejudices in human society. Companies should take responsibility for AI algorithm bias, because this technology has penetrated our real lives and is being used in important areas like HR, healthcare and justice. These areas have a direct connection with human lives, and mistakes there can have serious implications. Developers need to be neutral in thought and must consider the outcomes and effects these AI systems can have on the daily lives of the public.


Sameer Balaganur

Sameer is an aspiring Content Writer. Occasionally writes poems, loves food and is head over heels with Basketball.
