Commissioned by the UK government during the October 2018 Budget, this document analyses how bias in algorithms can represent a significant and imminent ethical threat. Based on this analysis, it presents several policy recommendations to the government and regulators.
This article analyses some of these policy recommendations, the reasons that led to their formulation, and whether they can be adopted in the Indian context given the country's legal bodies and infrastructure.
Ensuring diversity for protection against bias
The CDEI document recognises the significance of diversity across the range of roles involved in the development and deployment of ADM systems. To ensure diversity, the CDEI recommends that the government continue to support and invest in programmes that facilitate greater diversity.
This point is especially relevant in India, given the country's wide range of demographics in terms of caste, religion, language, sexuality, and state, among others.
While India currently has no mechanism to ensure diversity in the development and deployment of ADM systems, the country has used reservation quotas to ensure the representation of historically and currently disadvantaged groups in government offices and education. Affirmative action similar to the reservation system could be leveraged to promote greater representation in tech firms and thereby advance algorithmic fairness.
Setting up safe guidelines to monitor outcomes and analyse bias
Data is needed to monitor outcomes and identify bias, but access to protected characteristic data can be a tricky affair. The CDEI document calls for working with ‘relevant regulators’ to provide clear guidance on the collection and use of protected characteristic data for monitoring the outcomes of ADM systems. For bias evaluation, it recommends leveraging frameworks such as the Secure Research Service of the Office for National Statistics, which allows access only to accredited researchers.
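To illustrate why access to protected characteristic data matters for this kind of monitoring, the minimal Python sketch below computes per-group selection rates and a disparate impact ratio over a set of automated decisions. The group labels, decisions, and the 0.8 review threshold are illustrative assumptions, not drawn from the CDEI document.

```python
# Illustrative sketch: monitoring an ADM system's outcomes requires knowing
# each decision subject's protected group. All data here is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per protected group.

    decisions: iterable of (group, approved) pairs, `approved` a bool.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A common rule of thumb flags ratios below 0.8 for further review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an automated decision-making system
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.25 / 0.75 ≈ 0.33 → flag for review
```

Without the group labels, the aggregate approval rate (0.5 here) would look unremarkable, which is why the CDEI links bias monitoring to controlled access to protected characteristic data.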
In India, meanwhile, the Data Security Council of India, a research body set up by NASSCOM, has been committed to creating a safe cyberspace by establishing best practices, standards, and initiatives. The body conducts extensive research on the data protection frameworks introduced in India in the form of bills and committee reports. This expertise can be used to define and enforce robust guidelines that keep algorithms in check for bias.
Establishing laws to address the resulting discrimination
To address discrimination resulting from algorithmic bias, the CDEI does not see the need for a new specialised regulator or for new primary legislation at present. It does, however, recommend more guidance clarifying the ‘Equality Act responsibilities’ of organisations that use ADM systems, not only in terms of mitigating technical bias but also in the collection of personal data.
This recommendation against new legislation is based on several instances in which the current legislation proved effective. For instance, a recent court judgement halted a facial-recognition ADM system deployed in the public sector because sufficient steps had not been taken to establish the system's fairness.
On the other hand, the extant law in India does not account for the fairness of ADM systems. While there is some legal framework addressing who can use or process data, these rules were not drafted with ADM systems in mind. One such framework is the Personal Data Protection Bill, 2019, which is currently being examined and reviewed by a Joint Parliamentary Committee.
To address the issue of discrimination in general, Article 14 of the Constitution of India provides for ‘equality before the law’. However, like the UK's Equality Act, this Article lacks the language to address ‘equality’ in the context of ADM systems.
If the Data Protection Bill is passed with robust guidelines on data processing, along with greater accountability for the entities processing that data, it could be combined with Article 14 to form a legal framework addressing discrimination resulting from algorithms.
Establishing mechanisms for transparency and explainability of ADM systems
The CDEI document states that the UK government has shown leadership in setting out guidance on AI usage in the public sector. However, it still calls for a mandatory transparency obligation on all public sector organisations using algorithms that have a ‘notable influence on significant decisions affecting individuals’.
To ensure greater transparency in the public sector, the Government of India passed the Right to Information (RTI) Act in 2005, which has previously been used to seek transparency about algorithms as well. However, experts have noted a major lack of legal processes for actually holding an algorithm accountable. In terms of explainability, NITI Aayog, India's policy think-tank, has introduced the concept of Explainable AI (XAI): a suite of machine learning techniques that produce more explainable models.
Achieving transparency and explainability for public sector algorithms needs more work beyond the current frameworks. As in the UK, laws need to be introduced to make public sector algorithms open. This would help industry experts analyse algorithms for fairness and hold governments accountable.
In India's case, NITI Aayog has proposed an oversight body to play an enabling role in AI research and policy in the country. While this article analyses mechanisms in the Indian system that can be leveraged to address algorithmic bias, a thorough review by such a body will be important to identify issues specific to the Indian context.