Queen’s University Belfast & IIT Madras Research Team Develops Technology To Make Artificial Intelligence Fairer

At a time when India continues to grapple with discrimination based on caste, creed, gender and religion, an Indian researcher has developed a new algorithm that helps make artificial intelligence less biased when processing data.

Students of the Indian Institute of Technology (IIT) Madras were part of an international research project, led by a Queen’s University Belfast researcher in the UK, that has developed a new algorithm to make artificial intelligence (AI) fairer and less biased when processing data.


Dr Deepak Padmanabhan, a researcher at Queen’s University Belfast and an adjunct faculty member at IIT Madras, has been leading a project to tackle the discrimination problem within clustering algorithms.

Companies often use AI technologies to sift through vast amounts of data, for example when a job vacancy is oversubscribed, or in policing, when a large volume of CCTV data is linked to a crime.

AI sorts through the data, grouping it into a manageable number of clusters, that is, groups of data with common characteristics. An organisation can then analyse each cluster manually and shortlist or reject the entire group, which is far easier than examining every record individually. However, while AI can save time, the process is often biased in terms of race, gender, age, religion and country of origin.
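To make the grouping step concrete, here is a minimal sketch of this kind of clustering using scikit-learn’s KMeans. The "applicant" records and feature names are synthetic and purely illustrative; they are not from the study.

```python
# A minimal sketch of the clustering step described above, using
# scikit-learn's KMeans. The "applicant" records and feature names
# are synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical applicant features: years of experience, assessment score.
X = np.column_stack([
    rng.normal(loc=5, scale=2, size=300),
    rng.normal(loc=70, scale=10, size=300),
])

# Group the records into a manageable number of clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Each cluster can then be reviewed, shortlisted or rejected as a whole.
for label in range(kmeans.n_clusters):
    members = X[kmeans.labels_ == label]
    print(f"cluster {label}: {len(members)} applicants, "
          f"mean assessment score {members[:, 1].mean():.1f}")
```

Because a whole cluster can be accepted or rejected in one decision, any bias in how the clusters form is amplified, which is the problem the research addresses.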

Elaborating on this research, Dr Deepak Padmanabhan said, “AI techniques for exploratory data analysis, known as ‘clustering algorithms’, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin. AI techniques must be fair while aiding shortlisting decisions to ensure that they are not discriminatory on such attributes.”

It has been reported that white-sounding names received 50% more call-backs than those with black-sounding names. Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond. Another discriminatory trend is the ‘motherhood penalty’, where working mothers are disadvantaged in the job market while working fathers do better, in what is known as the ‘fatherhood bonus’.

Over the last few years, ‘fair clustering’ techniques have been developed, but these typically prevent bias only on a single chosen attribute, such as gender. The research team has now developed a method that, for the first time, can achieve fairness across many attributes simultaneously.

Speaking about this research, Ms Savitha Abraham, PhD Student, Department of Computer Science and Engineering at IIT Madras, said, “Fairness in AI techniques is of significance in developing countries such as India. These countries experience drastic social and economic disparities, and these are reflected in the data.”

Ms Savitha Abraham added, “Employing AI techniques directly on raw data results in biased insights, which influence public policy, and this could amplify existing disparities. The uptake of fairer AI methods is critical, especially in the public sector, when it comes to such scenarios.”

Highlighting the potential impact of this research, Dr Padmanabhan said, “Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process. In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in terms of human resources. With a fairer process in place, the selection committees can focus on other core job-related criteria.”

Dr Padmanabhan added, “FairKM can be applied across several data scenarios where AI is being used to aid decision-making, such as proactive policing for crime prevention and the detection of suspicious activities. This, we believe, marks a significant step towards building fair machine learning algorithms that can deal with the demands of our modern democratic society.”
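To illustrate what fairness across multiple sensitive attributes can mean here, the sketch below computes a simple deviation score for an ordinary k-means clustering: for every sensitive attribute, it compares each cluster’s proportions against the dataset-wide proportions. The metric, variable names and synthetic data are assumptions for illustration only; this is not the authors’ FairKM implementation, which, as a fair clustering algorithm, enforces fairness while forming the clusters rather than just measuring it afterwards.

```python
# A simplified, hypothetical illustration of multi-attribute fairness in
# clustering: for every sensitive attribute, compare each cluster's
# proportions against the dataset-wide proportions. A score of 0 means
# every cluster mirrors the overall population on every attribute.
# This is NOT the authors' FairKM implementation, which enforces
# fairness while forming the clusters rather than measuring it afterwards.
import numpy as np
from sklearn.cluster import KMeans

def fairness_deviation(labels, sensitive):
    """Mean absolute deviation between per-cluster and dataset-wide
    proportions, averaged over all sensitive attributes.

    labels: (n,) cluster assignments
    sensitive: (n, m) integer-coded sensitive attributes
    """
    total, n_terms = 0.0, 0
    for j in range(sensitive.shape[1]):
        col = sensitive[:, j]
        overall = np.bincount(col) / len(col)  # dataset-wide mix
        for c in np.unique(labels):
            members = col[labels == c]
            mix = np.bincount(members, minlength=len(overall)) / len(members)
            total += np.abs(mix - overall).mean()
            n_terms += 1
    return total / n_terms

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # non-sensitive features
S = rng.integers(0, 2, size=(500, 2))  # e.g. coded gender and age band

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(f"fairness deviation across both attributes: "
      f"{fairness_deviation(labels, S):.3f}")
```

A fair clustering method would drive a score like this towards zero for all specified sensitive attributes at once, which is the multi-attribute guarantee the team describes.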

Sejuti Das
Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
