Will This New Alliance Of Big Tech Firms To Combat Algorithmic Bias Work?

The Alliance’s Algorithmic Bias Safeguards comprise 55 questions in 13 categories that the HR teams of companies can use to evaluate vendors.

Earlier this year, Twitter users were surprised to find that the photo-cropping algorithm used by the popular app was racially biased. While Twitter identified the discrepancies early on, disabled the feature and launched a bug bounty program to mitigate the bias, the problem has been prominent for years. Unfortunately, it is not just Twitter; AI bias can be seen in algorithms used by companies big and small, with some consequences worse than others. 

Big tech companies have been called out in the past for unfair algorithms. Amazon, for example, scrapped a recruiting system after it was found to discriminate against women; COMPAS, a system used in US courts to predict which defendants are likely to reoffend, was shown to be racially biased; Facebook’s ad-delivery algorithm discriminated by race; and even an algorithm used by US healthcare providers to allocate care to 200 million patients was found to be racially biased. With the increasing use of AI in modern machines, technologies and services, it is critical to ensure that these models are fair and ethical. 

The Algorithmic Bias Safeguards for Workforce


Ethical AI is a work in progress, especially since algorithms are often black boxes trained on biased human data. But in one of the major steps taken to combat the challenge, some of the biggest technology corporations have teamed up with The Data & Trust Alliance in a joint effort to prevent algorithmic discrimination. 

The New York-based non-profit Data & Trust Alliance has signed up major employers, including General Motors, Nike, CVS Health, Deloitte, Humana, IBM, Mastercard, Meta and Walmart. Contrary to speculation, the group has rejected labels such as think tank or lobby, describing itself instead as a group that has collectively developed an evaluation and scoring system for AI software. 

Member companies of The Data & Trust Alliance

Source: Overview of ‘Algorithmic Bias Safeguards for Workforce’

Scoring System For Vendor Evaluation

Most big tech companies source AI systems from third-party suppliers to meet the needs of their HR teams. These algorithms are created by software companies using their own datasets. Unfortunately, the companies buying them have little to no understanding of what went into the algorithms, how they function, or how ethical they are. This can leave AI models biased and discriminatory, posing a security and ethical risk to the company and its users. 

The Data & Trust Alliance addresses the potential dangers of using such algorithms for workforce decisions in hiring, promotion, training and compensation, especially when those decisions are made by big tech companies and can affect hundreds of people. To alleviate the problem and help companies spot the red flags, the group was brought together by former chief executives of American Express and IBM in 2020. 

‘The Algorithmic Bias Safeguards for Workforce’ initiative was designed for HR teams to evaluate third-party vendors. The parameters of the initiative will support HR in detecting, mitigating and monitoring algorithmic bias in workforce decisions. 

The Safeguards comprise 55 questions across 13 categories that HR teams can use to evaluate vendors. The questions touch upon criteria such as training data, model design, deployment and monitoring methods, bias testing methods, bias remediation, compliance with standards, governance, education, transparency and accountability, and AI ethics and diversity commitments. 

These questions cover concerns regarding bias in the training dataset, thorough testing to catch algorithmic bias, and assurance that the model meets governance standards and is used correctly. This helps increase transparency and trust between vendors and companies, and indirectly builds trust between companies and their consumers when the right product is delivered. Attaining this cycle of trust is important to safeguard a healthy future with AI. 

Set of questions to evaluate HR Vendors

Source: Overview of ‘Algorithmic Bias Safeguards for Workforce’

The system was created with input from more than 200 experts from 15 industries across human resources, data analysis, legal and procurement, and also involved software vendors and outside experts. Additionally, it has drawn on the expertise of more than 65 contributors from academia, government, and civil society. 

“The initiative challenged us to proactively consider how to unlock the appropriate and responsible use of AI within our organisation. The evaluation provides a usable framework that helps us feel confident that our use of AI does not unintentionally undermine our broader goals,” said Kat Robison, Associate General Counsel of Global Privacy & Security at Nike. 

The criteria are accompanied by primers that help HR evaluators educate themselves on the metrics, along with guidance on assessing vendor responses and flagging vendors red, yellow or green. 
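The detailed rubric behind the flags has not been published, but the idea of a traffic-light evaluation can be sketched in a few lines of Python. The category names, per-question scores and thresholds below are hypothetical, invented purely to illustrate how scored vendor responses might map to a red, yellow or green flag:

```python
# Hypothetical sketch of traffic-light vendor scoring.
# The Alliance's actual rubric, categories and thresholds are assumptions here.

def flag_vendor(scores):
    """Map per-category scores to a red/yellow/green flag.

    Each score is 0 (no credible answer), 1 (partial) or 2 (satisfactory).
    """
    # Any unanswered safeguard is treated as a hard stop.
    if any(s == 0 for s in scores.values()):
        return "red"
    # Otherwise, flag green above an (assumed) 80% threshold, else yellow.
    ratio = sum(scores.values()) / (2 * len(scores))
    return "green" if ratio >= 0.8 else "yellow"

# Illustrative vendor responses across four (invented) categories.
vendor = {
    "training data": 2,
    "bias testing": 1,
    "governance": 2,
    "transparency": 2,
}
print(flag_vendor(vendor))  # 7/8 = 0.875, so this vendor flags "green"
```

In practice an evaluator would score all 55 questions across the 13 categories, but the mechanics are the same: aggregate the responses and surface any category that falls short.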

Pre-Conditions To Improving The State of The Art

“For communication about responsible data practices, we must first have a shared language. Then, education, especially in such a dynamic field. Next, transparency of vendor practices. And only then can we all collaborate to improve the state of the art,” said Dr Michael Capps, CEO of Diveplane Corporation and co-chair of the Data & Trust Alliance’s Leadership Council. 

The Center for Global Enterprise is the parent NGO of The Data & Trust Alliance. The NGO has taken up several initiatives across countries related to the digital future, supply chains, cyberspace, governance, women’s empowerment and business. Through this effort, it is helping the global tech community take a step towards a fairer and safer digital future. Data, machines and algorithms shape our actions and the world today; it is our responsibility to protect people from a dystopian future led by flawed algorithms.

Avi Gopani
Avi Gopani is a technology journalist who analyses industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories, curated with a focus on the evolving technologies of artificial intelligence and data analytics.
