Government think-tank NITI Aayog has proposed setting up an oversight body that will play an enabling role across different aspects of artificial intelligence.
In its draft titled ‘Enforcement Mechanisms for Responsible #AIforAll’, the think-tank has proposed an oversight mechanism that will play an enabling role in research, education, logistics, and policy, among other areas of AI. The draft is open to comments and suggestions from relevant stakeholders until December 15, 2020.
Complementing the national AI policy that was published by the think-tank in 2018, this draft proposes a framework for the enforcement of responsible AI principles. The document calls for a flexible risk-based approach, as risks across use cases and contexts vary.
The draft proposes that the body keep the principles of Responsible AI up to date and enable access to related tools and techniques, while also helping to drive research into the technical, legal, policy, and societal issues of AI and providing clarity on responsible behaviour through design structures, standards, and guidelines.
Other responsibilities would include education and awareness on responsible AI, harmonising policies by identifying gaps and coordinating with various sectoral AI regulators, and representing India in international dialogue on responsible AI.
The draft proposes a ‘highly participatory’ body serving in an advisory capacity, with a multi-disciplinary composition of experts from relevant fields (such as computer science, law, civil society, humanities, and social sciences), industry representatives, and government support for interfacing with ministries and departments.
For enforcement, the institutional structure will include ethics committees, whose composition will depend on the use case. These committees will be constituted for the procurement, development, and operation of AI systems and will be made accountable for adherence to the Responsible AI principles.
Independent or ‘Independent’ Body
Described as “the first body of its kind to be established anywhere in the world” by Jeremy Wright QC, the Centre for Data Ethics and Innovation (CDEI) was established by the UK government in 2018 with a similar purpose.
Established as an office of the Department for Digital, Culture, Media and Sport, the CDEI is tasked by the government to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies.
One of the CDEI’s first projects was to assess potential bias in algorithmic decision-making systems, for which it produced an interim report examining access to quality data, tools and techniques, and governance frameworks.
In June this year, the CDEI published a detailed report, the AI Barometer, which analysed the use of AI and data in five key sectors — criminal justice, financial services, health & social care, digital & social media, and energy & utilities.
While it is still early to comment on the overall effectiveness of the body, the report has been called a “well-thought-out, comprehensive, and accessible piece of work” for its recommendations on developing technologies for sustainable green growth and public health, and on tackling misinformation online instead of focusing on “dystopian singularities and killer robots.”
At the same time, however, the CDEI has been criticised for focusing on the need for automated decision-making in education. This was just two months before Ofqual decided to use a controversial algorithm to decide the exam results of UK teenagers, causing an outcry that saw students take to the streets amid the pandemic.
Adding insult to injury, an ‘independent’ review of the algorithm was then handed to the CDEI, chaired by Roger Taylor, who also happens to be the chairman of Ofqual, resulting in a direct conflict of interest.
“Until the CDEI resolves its conflicts of interest and is able to steer a clear, independent course it will not be capable of leading the AI regulation debate,” said David Davis MP, who is leading a cross-party group of MPs championing the idea of a legal framework, the ‘Accountability for Algorithms Act’, proposed by the Institute for the Future of Work.
Several other experts have also criticised the report for its “corporate focus”, even as it marked the risk of algorithmic bias leading to discrimination as ‘high’ in every sector except energy and utilities.
“The report also doesn’t say anything new or radical,” said Sam Smith, policy lead at Medconfidential, an advocacy group for health data privacy.
Oversight bodies like the one proposed by NITI Aayog to promote the responsible use of AI can be beneficial in many ways. Research carried out by such bodies can help organisations develop AI with state-of-the-art tools and technologies while also ensuring fairness and social benefit.
At the same time, however, it is vital that such a body be truly ‘independent’. Influence or lobbying by ministries or big corporations can be counter-productive to the entire purpose of the mechanism.