Algorithms run the modern world. The massive adoption of AI and ML technologies has made building 'fairer' algorithms a necessity. Over the years, we've seen AI models that discriminate against the elderly in insurance, facial recognition algorithms with racist undertones, AI recruiting tools biased against women, and more. The outputs of such problematic models are often used to train other AI models, setting off a vicious cycle.
As of now, we lack a global standard framework to measure the fairness of AI/ML models. Researchers have proposed mathematical concepts such as equalized odds, positive predictive parity, and counterfactual fairness as building blocks for a universal fairness assessment framework. However, most of these concepts are mutually incompatible: when groups have different base rates, an imperfect classifier generally cannot satisfy all of them at once.
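To make the tension concrete, here is a minimal sketch (the data and function names are illustrative, not part of any proposed framework) that computes two of these group-conditional metrics for a toy classifier: the true positive rate, which equalized odds asks to be equal across groups, and the positive predictive value, which predictive parity asks to be equal across groups.

```python
# Hypothetical sketch: compare two group-conditional fairness metrics
# on toy predictions for two demographic groups.

def rates(y_true, y_pred):
    """Return (TPR, PPV) for one group's labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    ppv = tp / (tp + fp) if (tp + fp) else 0.0  # positive predictive value
    return tpr, ppv

# Toy (label, prediction) outcomes for two groups with different base rates.
group_a = ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
group_b = ([1, 0, 0, 0, 0, 1], [1, 0, 1, 0, 0, 0])

tpr_a, ppv_a = rates(*group_a)
tpr_b, ppv_b = rates(*group_b)

# Equalized odds wants |TPR_a - TPR_b| small; predictive parity wants
# |PPV_a - PPV_b| small. With unequal base rates, closing one gap
# typically widens the other.
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}, PPV gap: {abs(ppv_a - ppv_b):.2f}")
```

Running the sketch shows both gaps are non-zero for this toy classifier; any threshold adjustment that shrinks one gap here tends to grow the other, which is the practical face of the incompatibility results.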
Most ML research is West-oriented: the data (e.g., ImageNet), the structural injustices studied (e.g., race and gender), the legal tenets (e.g., equal opportunity), the measurement scales (e.g., the Fitzpatrick scale), and the underlying Enlightenment values are largely specific to the West. As a result, such models fail to generalise and do not account for the diversity of countries outside Europe and the US.
The National Digital Communications Policy, 2018 mandates promoting AI research and development by creating a framework to test and certify AI products and services. In addition, the National Strategy for Artificial Intelligence (#AIforAll) and NITI Aayog's approach documents for India have laid out broad ethical principles for the design, development, and deployment of AI in India, and provide a road map to encourage responsible AI adoption, with a view to building public trust in the use of these technologies.
To this end, the Telecommunication Engineering Centre (TEC), the technical arm of the Department of Telecommunications, Government of India, has initiated a consultation process to develop a framework that addresses various ethical, social and legal issues around AI and ML systems.
“We have been studying various aspects of AI/ ML where some standardisation or testing and certification framework could be established. Moreover, we have studied the works of various researchers where biases in various AI/ ML systems deployed by leading corporates and governments are deliberated. Biases in AI/ ML Systems are a real threat, and ensuring fairness in such applications is very important to build public trust in AI/ ML Systems. Accordingly, we have initiated discussions for evolving a framework for fairness certification of such systems,” said Avinash Agarwal, DDG (Convergence & Broadcasting), Telecommunication Engineering Centre.
TEC aims to set up standard operating procedures (SOP) to assess the fairness of various AI/ ML systems and create a benchmark. Systems that conform to the specifications (which can be checked via self-assessment or third-party audit) will be given a fairness certification, ensuring product credibility and public trust in AI/ ML.
“To achieve this, we will follow a consultative process for framing standards, specifications and test schedules. Then, we plan to prepare a draft document based on the various inputs received and release it for public consultations. We also plan to hold open house sessions to discuss the proposed Fairness Assessment Framework for AI/ ML Systems with domain experts and stakeholders,” Avinash added.
TEC's initiative has generated a favourable response across verticals. The last date for submission is March 8, 2022.