This is the 13th article in our weekly Expert’s Opinion series, where we talk to academics who study AI and other emerging technologies and their impact on society and the world.
This week, we spoke to Kazim Rizvi, Founding Director at The Dialogue. The Dialogue is a leading voice in India’s policy ecosystem, with a focus on technology, energy, strategic affairs and development studies. The Think Tanks and Civil Societies Program named The Dialogue one of the world’s Top 10 think tanks to watch in 2020.
The NITI Aayog published a Working Document: Towards Responsible #AIforAll, soliciting feedback from all stakeholders. Close on the heels of the document’s release, The Dialogue came out with a report to assist the NITI Aayog in its efforts to develop and deploy ethical AI.
Analytics India Magazine caught up with Rizvi to get his analysis on the working document.
AIM: To start off, could you summarise what India’s national policy for AI lacks and what needs to be changed?
Rizvi: Recognising the significance of AI in transforming societies and businesses alike, the government of India has shown strong commitment to building a robust AI infrastructure under the tagline ‘#ResponsibleAIForAll’. While the government’s initiative to encourage the growth of emerging technologies is laudable, it is important that these policies recognise the realities of the existing socioeconomic inequities in our society and address them effectively, to ensure the equitable development and use of this technology. AI relies on the collection and analysis of data. However, existing data sets in India, whether for labour markets or health systems, are mostly fragmented and unrepresentative. Further, there are large digital divides between urban and rural areas, between developed and underdeveloped states, and between men and women. Algorithms trained on existing data sets are thus bound to present a distorted picture of social realities. The issue of data privacy, in the absence of robust data protection legislation, adds to the concern. Though the existing drafts of the PDP Bill rest on the idea of informed consent, this seems far from adequate, given the low literacy and education levels of a large section of the Indian population.
To adequately address these issues, and to truly ensure the design and deployment of a robust and comprehensive AI framework that bridges the digital divide, it is paramount to increase the focus on infrastructure for AI research and innovation in the underdeveloped parts of the country. It is equally important to ensure adequate representation of marginalised communities in the data sets collected for modelling and training algorithms, and to increase the frequency of independent audits that assess the sociological impact of this technology. These measures, coupled with the inclusion of policymakers and social scientists in the AI ethics committee envisioned by the NITI Aayog, along with the enactment of robust data protection legislation, will be crucial for the fair and equitable dissemination of this technology for the benefit of all.
AIM: What are the mechanisms and frameworks India should have in place to ensure the AI framework is consistent with our Constitutional values?
Rizvi: The NITI Aayog’s #ResponsibleAIForAll report is certainly a first-of-its-kind policy document in that it makes direct reference to the Constitution and fundamental rights in developing an ethical AI framework. While this is a welcome contribution, the true realisation of these principles in India’s AI ecosystem requires all policies and frameworks to be designed in coherence with the fundamental values of our Constitution. Future research must therefore build on this and think through how the framework will play out with respect to specific use-cases.
Further, several other measures are important for ensuring that our AI ecosystem aligns with the higher principles of Indian constitutionalism: designing appropriate technical frameworks to assess the efficacy of anonymisation techniques during the collection and processing of data; developing algorithmic impact assessment frameworks to ensure algorithms are inclusive and free of bias; and encouraging AI developers to conduct periodic human rights impact assessments, both to mitigate existing social biases and to prevent the monopolisation of AI technologies.
AIM: In the light of the PDP Bill and the NITI Aayog policy document, what are the steps India needs to take to protect its citizens’ privacy rights without hampering AI’s progress?
Rizvi: In India, the right to privacy was recognised as a fundamental right as recently as 2017. The proposed Personal Data Protection (PDP) framework seeks to protect the informational aspect of this right by providing a comprehensive system that allows individuals to exercise more control over their data and assert their right to informational privacy. However, questions have been raised about whether this framework is adequate to cope with the challenges arising from the widespread deployment of artificial intelligence and the use of ‘big data analytics’.
The #ResponsibleAIForAll framework has recognised the ever-important interaction between the right to privacy and artificial intelligence by presenting solutions to the privacy risks arising from indiscriminate and non-consensual data collection, the lack of transparency and accountability in AI systems, and biases in data collection. The solutions included the establishment of a data protection framework with legislative backing. Though the PDP framework is an appropriate step by the Government, the Bill needs to incorporate the nuanced aspects raised in the #ResponsibleAIForAll framework, such as the creation of sectoral regulators in the field of data protection for greater efficiency, or the encouragement of self-regulation through ‘privacy by design’. Provisions on rights against automated decision-making protect individuals to a large extent, but with the proliferation of AI in areas such as welfare and law enforcement, the absence of accountability and audit requirements for such systems places individuals’ rights at risk.
In keeping with the spirit of the #ResponsibleAIForAll framework, the Government may include accountability and transparency measures for AI systems, while encouraging entities to inculcate ‘privacy by design’ in their operations.
AIM: Is it premature to push for making India an ‘AI garage’?
Rizvi: The #ResponsibleAIForAll framework of the NITI Aayog envisions India as an AI garage. However, before moving ahead with this policy, robust cybersecurity infrastructure and an effective legislative framework for AI and data protection need to be put in place. Several countries are aiming to make themselves one-stop service providers for AI systems. To become one, however, not only must the institutional structure be in place; the means to realise these ambitions must also be well planned.
Some essential prerequisites for truly realising the goal of converting India into an AI garage are: providing subsidies to AI startups through the Technology Development Programme without bias; granting proper aid packages to contractual AI trainers by amending labour laws and social security schemes; improving research quality in emerging technologies by promoting peer-reviewed publications and innovation in the sector; and declaring all AI storehouses critical information infrastructure under Section 70A of the IT Act, thereby placing them under the direct regulation of the Computer Emergency Response Team.
AIM: Has the policy document adequately addressed the impact of AI on work?
Rizvi: The Fourth Industrial Revolution has brought with it new challenges and opportunities. The key to realising its potential is upskilling and re-skilling our workforce. Automation of work should not itself be seen as a challenge; automation is good. What is crucial here is to understand what automation entails: it leads to a change in job roles and the ‘intensification’ of jobs. Per The Dialogue’s study on the ‘Impact of Industrial Revolution 4.0 on the IT-ITeS Sector’, the adoption of emerging technologies like AI will mean that part of the existing job roles will be automated. This, in turn, will mean either a change in a worker’s job role or the intensification of the job, i.e., one person will now oversee multiple functions because parts of such tasks will be automated.
Hence, automation will not take away jobs from the market as long as we keep on upskilling and re-skilling our workforce.
AIM: What are the things to consider while setting up an ethics committee for AI in India?
Rizvi: An ethics oversight committee is important for promoting responsible data, machine learning and AI practices and uses. The Indian Parliament should have an ethics committee specifically for AI. The committee should consist of Parliamentarians who have experience in creating AI, who have a strong understanding of the social implications of AI technology, and who have an interest in proactively countering ethical conundrums around AI before they arise. Following the UK and US models of ethics committees on AI, the committee should also have representation from the topmost officials in the research and development of emerging technologies, including technical experts and policymakers. These officials should include adequate representation from all communities, including women and LGBTQ+ groups, and the committee should produce yearly reports on the status of the development and dissemination of AI technology across various social and economic sectors in India.