Queen’s University Belfast & IIT Madras Research Team Develops Technology To Make AI Fairer

Even as India continues to grapple with discrimination based on caste, creed, gender and religion, an Indian researcher has developed a new algorithm that helps make artificial intelligence less biased when processing data.

Students of the Indian Institute of Technology (IIT) Madras were part of an international research project, led by a Queen’s University Belfast researcher in the UK, that has developed an innovative new algorithm to make artificial intelligence (AI) fairer and less biased when processing data.

Dr Deepak Padmanabhan, a researcher at Queen’s University Belfast and an adjunct faculty member at IIT Madras, has been leading a project to tackle the problem of discrimination within clustering algorithms.

Companies often use AI technologies to sift through vast amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data linked to a crime.

AI sorts through the data, grouping it into a manageable number of clusters, which are groups of data with common characteristics. It is then much easier for an organisation to analyse each cluster manually and shortlist or reject an entire group at once. However, while AI can save time, the process is often biased in terms of race, gender, age, religion and country of origin.
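To make this concrete, below is a minimal sketch of the kind of clustering the article describes, using scikit-learn’s standard KMeans. The applicant features and data are hypothetical, invented purely for illustration; they are not from the study.

# Minimal sketch: grouping hypothetical applicant records into a manageable
# number of clusters with standard (fairness-unaware) k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical numeric features: years of experience, test score, age.
applicants = rng.normal(loc=[5.0, 70.0, 35.0], scale=[3.0, 10.0, 8.0],
                        size=(200, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(applicants)

# A reviewer can now inspect four groups instead of 200 individual records,
# shortlisting or rejecting a whole cluster at a time.
for c in range(4):
    print(f"cluster {c}: {(labels == c).sum()} applicants")

Note that nothing in this sketch looks at sensitive attributes, which is precisely the gap the researchers highlight: clusters formed this way can still end up skewed on race, gender or age.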

Elaborating on this research, Dr Deepak Padmanabhan said, “AI techniques for exploratory data analysis, known as ‘clustering algorithms’, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin. AI techniques must be fair while aiding shortlisting decisions to ensure that they are not discriminatory on such attributes.”

It has been reported that job applicants with white-sounding names received 50% more call-backs than those with black-sounding names. Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond. Another discriminatory trend is the ‘motherhood penalty’, where working mothers are disadvantaged in the job market while working fathers do better, in what is known as the ‘fatherhood bonus’.

Over the last few years, ‘fair clustering’ techniques have been developed to prevent bias with respect to a single chosen attribute, such as gender. The research team has now developed a method that, for the first time, can achieve fairness across multiple attributes at once.

Speaking about this research, Ms Savitha Abraham, PhD Student, Department of Computer Science and Engineering at IIT Madras, said, “Fairness in AI techniques is of significance in developing countries such as India. These countries experience drastic social and economic disparities, and these are reflected in the data.”

Ms Savitha Abraham added, “Employing AI techniques directly on raw data results in biased insights, which influence public policy, and this could amplify existing disparities. The uptake of fairer AI methods is critical, especially in the public sector, when it comes to such scenarios.”

Highlighting the potential impact of this research, Dr Padmanabhan said, “Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process. In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in terms of human resources. With a fairer process in place, the selection committees can focus on other core job-related criteria.”
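The article does not reproduce FairKM’s objective, but the idea behind fairness over many attributes can be sketched: a clustering is considered fairer when, for every specified sensitive attribute, each cluster’s mix of attribute values mirrors the dataset as a whole. The hypothetical helper below measures deviation from that ideal across any number of attributes; it illustrates the concept only and is not the published FairKM algorithm.

# Hedged sketch: quantify how far a clustering deviates from proportional
# representation on each sensitive attribute (0.0 = perfectly proportional).
# This is an illustrative measure, not FairKM's published objective.
import numpy as np
import pandas as pd

def fairness_deviation(df, labels, sensitive_attrs):
    total, count = 0.0, 0
    for attr in sensitive_attrs:
        # Dataset-wide proportions of each value of this attribute.
        overall = df[attr].value_counts(normalize=True)
        for c in np.unique(labels):
            # Proportions of the same values inside cluster c.
            in_cluster = df.loc[labels == c, attr].value_counts(normalize=True)
            for value, p_overall in overall.items():
                total += (in_cluster.get(value, 0.0) - p_overall) ** 2
                count += 1
    return total / count

# Hypothetical usage with made-up records and cluster labels.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "age_band": ["<40", "<40", "40+", "40+", "<40", "40+", "<40", "40+"],
})
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_deviation(df, labels, ["gender", "age_band"]))  # 0.0 here

A fair clustering method would trade off a score like this against ordinary clustering quality, so that groups remain coherent while staying proportional on every specified sensitive attribute.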

Dr Padmanabhan further added, “FairKM can be applied across several data scenarios where AI is being used to aid decision making, such as pro-active policing for crime prevention and the detection of suspicious activities. This, we believe, marks a significant step forward towards building fair machine learning algorithms that can deal with the demands of our modern democratic society.”

Sejuti Das
Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
