
AIM long reads: What can India learn from the US & the EU when building a fairness framework for AI/ML systems

Biases in AI/ML systems are a real threat, and ensuring fairness in such applications is very important to build public trust in AI/ML systems.

The National Digital Communications Policy (NDCP), 2018 outlined the role of the Department of Telecommunications (DoT) in driving exponential technologies such as 5G, AI, robotics, IoT, cloud computing and M2M. NDCP 2018 also mandates promoting research & development by creating a framework for testing and certification of new products and services. As per the Telecommunication Engineering Centre (TEC), the Fairness Assessment Framework for AI/ML systems is in alignment with the NDCP.

“We have been studying various aspects of AI/ML where some standardisation or testing and certification framework could be established. Moreover, we have studied the works of various researchers where biases in various AI/ML systems deployed by leading corporates and governments are deliberated. Biases in AI/ML systems are a real threat, and ensuring fairness in such applications is very important to build public trust in AI/ML systems. Accordingly, we have initiated discussions for evolving a framework for fairness certification of such systems,” said Avinash Agarwal, DDG (Convergence & Broadcasting), Telecommunication Engineering Centre, DoT.

What’s cooking?

In March, Avinash, along with his team members Harsh Agarwal & Nihaarika Agarwal, published a paper titled “Fairness Score and process standardisation: framework for fairness certification in artificial intelligence systems.”

“Different users might use different metrics to check the fairness of an AI system. Hence, it is crucial to standardise the bias measurement on a linear scale so that a uniform scale can assess fairness and enable the comparison of different AI systems. Therefore, we introduce Bias Index for each protected attribute and Fairness Score for the overall system as the standard benchmarks for measuring fairness,” the research paper noted.
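The paper's exact formulas are not reproduced here, but the shape of the idea, a per-attribute Bias Index rolled up into a single Fairness Score, can be sketched in a few lines of Python. In the minimal sketch below, bias is measured as the worst-case gap in group-wise positive-prediction rates; that metric choice, and the 0–100 scale, are our illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

def bias_index(y_pred, attribute):
    """Illustrative per-attribute Bias Index: 1 minus the ratio of the
    lowest to the highest group-wise positive-prediction rate.
    0 means parity across groups; values near 1 mean severe disparity."""
    rates = [y_pred[attribute == g].mean() for g in np.unique(attribute)]
    lo, hi = min(rates), max(rates)
    return 1.0 - (lo / hi if hi > 0 else 1.0)

def fairness_score(y_pred, protected_attributes):
    """Illustrative Fairness Score on a 0-100 scale: 100 minus the mean
    Bias Index across all protected attributes."""
    indices = [bias_index(y_pred, attr) for attr in protected_attributes]
    return 100.0 * (1.0 - float(np.mean(indices)))

# Toy example: loan approvals (1 = approved) for eight applicants
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
gender = np.array(["F", "F", "M", "M", "F", "M", "F", "M"])
age    = np.array(["<40", "<40", "<40", ">=40", ">=40", ">=40", "<40", ">=40"])
print(fairness_score(y_pred, [gender, age]))  # ~66.7 on this toy data
```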

Additionally, TEC has started a consultative process for framing standards, specifications and test schedules. The end goal is to prepare a draft document based on the inputs and release it for public consultations. TEC also plans to hold open house sessions with domain experts and stakeholders to discuss the Fairness Assessment Framework for AI/ML systems. “We intend to frame standard operating procedures (SOP) for assessing the fairness of various AI/ML systems and create benchmarks for their comparison. We expect to standardise the assessment process so that various AI/ML systems can be assessed – either self-assessment or third-party audit. A Fairness Certification, which the developers may voluntarily request, will give credibility to their AI/ML products and help build public trust in AI/ML,” Avinash said.

The need of the hour is to establish a system of governance to ensure that we can reap the rewards of AI while containing the risks.

Industry leaders chime in

We sounded out industry leaders to understand the importance of fairness and ethical frameworks in their organisations.

Biswajit Biswas, Chief Data Scientist, Tata Elxsi

“We have designed a set of processes as a part of the Quality Management System, which is practised for developing AI solutions. It essentially stipulates the data governance, data privacy, type of training required, model development and deployment. We have developed multiple frameworks to manage our core AI works. We have frameworks for cognitive video processing, text and language processing, and a self-learning system. We also have built a data lake framework for large scale data aggregation for both streaming and batch processing. These frameworks greatly enhance our ability to scale AI for many customer programs we are working on and have become a key part of their digital transformation journey.”

Vijay Krishnan, Co-Founder & CTO at Turing

While AI today (especially deep learning) does its own feature selection and modelling, these steps work towards goals set by humans, using data collected for human purposes. If the data is biased, the system can amplify injustice even by accident. Discrimination against a sub-population can arise unintentionally, and that is the fairness issue at the core of AI ethics. The responsibility for testing for such bias before any model is deployed lies with the teams building it.

Before answering how one can handle this issue at scale, it is crucial to understand how the problems originate and manifest:

  • Many tools and libraries make creating and applying models very easy. The business pressure to launch features quickly can lead developers to overlook the possibility of bias.
  • Models often get linked into a chain, where the output of one is used by the next. In that case, bias propagates and amplifies, and developers working downstream in the chain may not have visibility into the bias of the input data.
  • Biases can enter at every layer: from presentation to data, from data to model, and from model to user interaction via the algorithm. So this is not just a data cleaning or governance issue.

Handling this issue at scale requires:

  • Creating awareness about bias at the developer level
  • Catching possible biases during exploratory data analysis
  • Validating models on various data sets and comparing their behaviour across groups (see the sketch after this list)
  • Using models that are explainable, auditable, and transparent
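The second and third points lend themselves to a routine slice report run over every validation set. The sketch below is a minimal illustration; the per-group metrics (accuracy and positive-prediction rate) and the synthetic data are our assumptions, not a description of Turing's internal tooling.

```python
import numpy as np

def groupwise_report(y_true, y_pred, group):
    """Print accuracy and positive-prediction rate per subgroup.
    Large gaps between groups flag a potential bias to investigate."""
    for g in np.unique(group):
        mask = group == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        pos_rate = y_pred[mask].mean()
        print(f"group={g}  n={mask.sum()}  accuracy={acc:.2f}  positive_rate={pos_rate:.2f}")

# Synthetic validation set; predictions are deliberately made random for
# group B so the report shows what a problematic gap looks like.
rng = np.random.default_rng(0)
group = rng.choice(np.array(["A", "B"]), size=200)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(group == "A", y_true, rng.integers(0, 2, size=200))
groupwise_report(y_true, y_pred, group)
```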

“The most important element of any AI-based solution is the underlying data used. An AI algorithm (which takes the form of a binary model) produced as output is nothing but a reflection of the data itself. So, naturally, there has to be a cardinal focus on data: how it is collected, pre-processed, analysed and used for AI model generation. As AI leaders, we need to constantly assess the data collection process and look for potential bias, considering whether data is collected manually or automatically, the type of applications used at the input side, the devices used, demography and location, and the impact of the digital divide.”
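One concrete way to assess whether collected data reflects the population a model will serve is to compare the sample's demographic mix against a reference distribution. A minimal sketch follows, where the census-style reference shares and the under-representation threshold are assumptions for illustration:

```python
import numpy as np

# Hypothetical reference shares (e.g. from a census) for one attribute
reference = {"urban": 0.35, "rural": 0.65}

# Demographic labels of the collected training records
sample = np.array(["urban"] * 720 + ["rural"] * 280)

for segment, expected in reference.items():
    observed = (sample == segment).mean()
    # Flag segments captured at less than half their expected share,
    # e.g. due to the digital divide in data collection channels
    status = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{segment}: expected {expected:.0%}, observed {observed:.0%} -> {status}")
```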

Ankit Gupta, Head of Product and Engineering, CultFit

“Privacy is a huge concern in our field, and as a health company, we consider it our duty to ensure that best practices are followed to maintain that aspect of data privacy. Some of the ways we do this are: 

  1. We anonymise all user data before engineers and data scientists can look at it.
  2. Access is typically temporary and on a need-to-know basis. We have a stringent approval process in place, and only those who need certain information, such as doctors for consultations, get access to it – no one else does.
  3. Most internal tools don’t show customers’ personally identifiable information (PII). Any request for such data in exceptional cases is logged and available for audit.”
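Point 1, anonymising data before engineers and data scientists see it, is commonly implemented with keyed pseudonymisation: direct identifiers are replaced with a keyed hash so records remain joinable for analysis but cannot be reversed without the key. The sketch below shows this generic pattern; the field names and key handling are assumptions, not CultFit's actual pipeline.

```python
import hashlib
import hmac

# Hypothetical secret held by the data platform, never by analysts;
# rotating it breaks linkability between old and new pseudonyms.
SECRET_KEY = b"platform-held-secret"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "u-10293", "email": "jane@example.com", "resting_hr": 62}
anonymised = {
    "user_id": pseudonymise(record["user_id"]),  # joinable, not reversible
    # Free-form PII such as email is dropped entirely rather than hashed
    "resting_hr": record["resting_hr"],
}
print(anonymised)
```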

EU leads the race

Europe has had a data protection regulation (GDPR) in effect since May 2018. The EU’s framework for Trustworthy AI is built on four ethical pillars and stipulates that AI should be lawful, ethical, and robust. The guidelines do not explicitly deal with the first component of Trustworthy AI (lawful AI); instead, they aim to offer guidance on fostering and securing the second and third components (ethical and robust AI).

The four ethical pillars, grounded in fundamental rights (dignity, freedoms, equality and solidarity, citizens’ rights, and justice), yield a total of seven key requirements.

The High-Level Expert Group on AI (AI HLEG) noted: “In a context of rapid technological change, we believe it is essential that trust remains the bedrock of societies, communities, economies, and sustainable development. We, therefore, identify Trustworthy AI as our foundational ambition, since human beings and communities will only be able to have confidence in the technology’s development and its applications when a clear and comprehensive framework for achieving its trustworthiness is in place.”

The second act

The AI HLEG was set up to analyse AI ethics from all perspectives (economic and social), and its work served as the foundation for the new regulation announced in April 2021.

The regulation prohibits applications posing unacceptable risk, i.e. those that endanger people’s safety or lives, as well as their rights (for example, a voice assistant that encourages violence). High-risk applications, including those in critical infrastructure, education, employment, essential public and private services, law enforcement, and the administration of justice, are not banned outright but must meet strict obligations before they can be put on the market.

Third, applications deemed to pose limited risk carry transparency obligations, i.e. the user must be made aware that they are interacting with an AI system. Chatbots fall into this category: the user must be informed that they are not conversing with a human. Finally, minimal-risk applications, such as video games or spam filters, face few additional requirements. Europe is setting a precedent for other countries to follow.

EU & US

As part of the National AI Initiative, the National Institute of Standards and Technology is developing an Artificial Intelligence Risk Management Framework (AI RMF) for the US.

During the inaugural Trade and Technology Council (TTC) meeting in Pittsburgh on September 29, 2021, the United States and the European Union agreed on a set of “common principles” concerning AI. The two sides committed to developing and implementing AI systems that are innovative and trustworthy while respecting universal human rights and shared democratic values, to exploring cooperation on AI technologies designed to improve privacy protections, and to conducting an economic study on the impact of AI on the future of our workforces. The TTC has also set up a dedicated working group on combating the misuse of technology that threatens security and human rights and on advancing the development of trustworthy AI.

The United States and the European Union have expressed their willingness and intention to develop and implement trustworthy AI, as well as their commitment to a human-centred approach that reinforces shared democratic values and respects universal human rights, a commitment they have already demonstrated by endorsing the OECD (Organisation for Economic Co-operation and Development) Recommendation on AI.

The OECD recommends that governments facilitate public and private investment in R&D to spur innovation in trustworthy AI; foster accessible AI ecosystems with digital infrastructure, technologies and mechanisms to share data and knowledge; ensure a policy environment that supports the deployment of trustworthy AI systems; empower people with AI skills and support workers through a fair transition; and encourage cross-border collaboration.

Biden’s AI Bill of Rights

The Biden administration has announced an AI Bill of Rights initiative. The Director and Deputy Director of the White House Office of Science and Technology Policy (OSTP) said the office would develop a “bill of rights” to protect against the potentially harmful consequences of AI. The bill would cover “your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.”

Check out our series on Ethical AI.



Sri Krishna

Sri Krishna is a technology enthusiast with a professional background in journalism. He believes in writing on subjects that evoke a thought process towards a better world. When not writing, he indulges his passion for automobiles and poetry.
