
Why Mitigating AI Biases Is The Need Of The Hour?

With artificial intelligence having an immense hold on human lives, helping people work, communicate, shop and manage their finances, people have come to rely on this technology to run their daily lives. However, the increasing penetration of AI into critical areas such as healthcare, law and hiring brings with it serious concerns about bias and fairness. Because AI models are created and trained by humans, they are bound to mimic the biases and prejudices that humans carry. The more worrisome case is when these biased models are used to tackle critical societal problems, resulting in skewed decisions.

To address this, Facebook researchers recently published a paper describing a framework for identifying gender bias in text. According to the researchers, the framework examines how humans “socially construct and identify language” and accounts for bias along the dimensions of whom a text is speaking as, speaking to, and speaking about.

Similarly, another published paper argues that NLP language models carry many biases and that researchers often fail to explain to whom these biases would be harmful. The authors therefore suggest that NLP researchers learn from social psychology and the study of language to better understand biases around race and gender. “Without this grounding, researchers and practitioners risk measuring or mitigating only what is convenient to measure or mitigate, rather than what is most normatively concerning,” the paper states.

That said, these biases in AI models are the result of training on human-generated data, which produces models built on partial information. According to IBM research, more than 180 human biases can find their way into today’s AI systems and affect how business leaders make decisions. Biased data not only injects racial, gender and other prejudices into business decisions but also creates distrust in the systems themselves.

While AI has continuously been scrutinised for its prejudices, it is also a tool that has helped businesses, healthcare providers and government leaders fight the novel COVID-19 pandemic. It has therefore become imperative to create transparent algorithms, policies on bias, and explainable decision-making.

Some of the common biases in AI models include group attribution bias, out-group homogeneity bias and selection bias, to name a few. Well-known incidents of AI bias include Google Photos classifying Black people as gorillas, Google’s facial recognition failing to recognise people of colour, and an education software discriminating against Guamanian students in its passing scores.

Also Read: How Businesses Can Adopt Responsible AI Amid The Crisis

Why is it important to mitigate biases in the current situation?

The post-pandemic world will see an influx of technological advancement in which AI will continue to play a critical role, which is why mitigating biases is essential now more than ever. From finance to healthcare to drug discovery, AI has been a revolutionary technology for enhancing quality, improving safety and reducing costs.

Rohini Srivathsa, national technology officer at Microsoft India, believes that the widespread application of AI brings unwanted complications, making it urgent to think about responsible AI practices from the inception of a product. Microsoft, in its announcement, also stressed the importance of creating new tools to build more responsible and fairer AI systems. “At Microsoft, this has been a multi-year journey for us in terms of thinking about responsible AI practices right from fairness to privacy to security and transparency,” Srivathsa told the media.

Bias in AI models has always been a problem, but with more businesses relying on the technology for sensitive work amid the crisis, a skewed AI system can lead to adverse outcomes for companies and customers alike. That is why it has become critical to ensure that AI systems do not discriminate when making decisions.

For instance, an AI model used to diagnose critical diseases, if trained on gender-biased data, could give doctors the wrong information and put patients’ lives at risk. A recent study confirmed this, finding that training AI models on gender-skewed data decreases their performance when diagnosing diseases and other medical conditions. The researchers also noted that the implications of biased AI models could be far worse than experts currently predict.

After the pandemic, AI will also be used to monitor citizens and employees, predict customers’ buying patterns, and perhaps even allocate healthcare resources, and biased models would feed wrong information into all of these decisions. Meanwhile, with reduced staff in the workplace, many businesses are relying on AI to screen applications in their hiring process, and prejudiced models could lead companies to discriminate against certain candidates.

“Businesses are being challenged as they have never been before,” Genpact CEO Tiger Tyagarajan told the media. “In this unprecedented time, AI provides companies with a valuable tool to improve customer experience and mine data to engage with customers in a more personal, empathetic way. Our study suggests there is significant optimism shown by both consumers and employees if companies can demonstrate a responsible approach to AI. Business leaders must implement equitable training and fight AI bias.”

While AI has beneficial uses that are helping businesses sustain themselves in the post-pandemic world, biases in these systems can cause real harm. Discriminatory results can usually be traced back to the prejudices of the humans involved and to the data used to train the models. Accordingly, any AI model deployed in a sensitive area fundamentally requires a check against algorithmic bias.

Here are two ways businesses can mitigate biases in their AI models:

Train the AI model on clean, high-quality datasets

Training is where every model begins, and proper training on clean, high-quality datasets is one of the most critical ways to mitigate bias in AI models. If a model is trained on data from many sources without any checks or controls for bias, its results will inevitably reflect prejudice and discrimination. Business leaders, developers and programmers involved in a project therefore need to make sure the model is trained on a diverse, representative dataset that covers different races, genders, groups and communities.

According to experts, the quality of the data directly affects the output of AI models, so developers should also check for inaccurate or incomplete datasets, which can cripple a model. It is critical to choose properly labelled, diverse data to get the best and most accurate results from AI systems. Assembling a diverse dataset is a challenging task for developers, as it requires segmentation, grouping and ongoing management, but failing to do so can harm businesses and their customers. Google, for instance, has open-sourced 16 of its datasets, including a robotic grasping dataset, a noun and verb dataset, and the Open Images dataset, which can help businesses train their models.
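To make this concrete, here is a minimal sketch of what such a data audit might look like in Python with pandas. The file name and the “gender” and “label” columns are hypothetical placeholders, and the reweighting step is just one possible remedy, not a prescription from the article:

```python
# A minimal sketch of auditing a training set for group representation
# before training. "train.csv", "gender" and "label" are placeholders.
import pandas as pd

df = pd.read_csv("train.csv")

# 1. How is each demographic group represented in the data?
group_share = df["gender"].value_counts(normalize=True)
print("Share of each group:\n", group_share)

# 2. Does the positive-label rate differ sharply across groups?
#    Large gaps often signal historical bias baked into the labels.
label_rates = df.groupby("gender")["label"].mean()
print("Positive-label rate per group:\n", label_rates)

# 3. One simple remedy: weight samples inversely to group frequency
#    so the model does not simply fit the majority group.
weights = df["gender"].map(1.0 / df["gender"].value_counts())
df["sample_weight"] = weights / weights.mean()
```

An audit like this only surfaces obvious representation and label-rate gaps; deciding how to close them, whether by collecting more data, relabelling or reweighting, still requires human judgment.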

Also Read: What Is The Best Way To Create Training Data For Machine Learning

Monitor the models in real time for a better check

Although the majority of biases arise while training AI models, many unintentional biases also creep in over time, so developers need to keep a real-time check on their AI systems. It is also necessary to test models in a regime that imitates the real world, so that they perform well in the environments they are meant for.

For this, developers and programmers can use real-world datasets available online to train their models, which helps them build models that work beyond the controlled environment of testing. This matters because AI is currently being used in critical areas such as healthcare, the judicial system and the financial industry.

If an AI model fails to make informed decisions based on real-world factors, the consequences fall on its creators. Keeping a check on AI models will not only keep bias at bay but also improve their accuracy and efficiency on real-time data. Unreliable insights from biased models can damage a business’s reputation and harm its customers. The creators of AI models should therefore check their systems periodically and retrain them on up-to-date, real-world data.
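To illustrate, here is a minimal sketch of a real-time fairness check that tracks approval rates per demographic group over a sliding window of recent decisions. The group names, window size and the 0.8 ratio (borrowed from the common “four-fifths” rule of thumb) are illustrative assumptions, not details from the article:

```python
# A minimal sketch of monitoring a deployed model's decisions in near
# real time. Group names, window size and threshold are illustrative.
from collections import deque, defaultdict

WINDOW = 1000  # number of most recent decisions to keep per group
recent = defaultdict(lambda: deque(maxlen=WINDOW))

def record_decision(group: str, approved: bool) -> None:
    """Store the latest decision for the given demographic group."""
    recent[group].append(1 if approved else 0)

def disparity_alert(min_ratio: float = 0.8) -> bool:
    """Return True if any group's approval rate falls below min_ratio
    times the best-performing group's rate (a rough disparate-impact check)."""
    rates = {g: sum(d) / len(d) for g, d in recent.items() if d}
    if len(rates) < 2:
        return False
    best = max(rates.values())
    return any(rate < min_ratio * best for rate in rates.values() if best > 0)

# Example usage: feed live predictions in, poll the check periodically.
record_decision("group_a", True)
record_decision("group_b", False)
if disparity_alert():
    print("Warning: approval rates have drifted apart across groups.")
```

In practice, a check like this would feed a dashboard or alerting system rather than a print statement, and the threshold would be set together with domain and legal experts.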

Sejuti Das

Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com