This is the third article in our weekly series, Expert’s Opinion, where we talk to academics who study AI and other emerging technologies and their impact on society and the world.
This week, we spoke to Arindrajit Basu, a graduate of the University of Cambridge in Public International Law, who is currently working as a Research Manager at the Centre for Internet and Society in Bangalore.
Basu’s research revolves around the geopolitics and constitutionality of emerging technologies. Part of his recent work looks at India’s planning and implementation of AI and probes areas for improvement.
Analytics India Magazine caught up with Basu to understand where India stands on its AI policies and strategies, and the kind of fundamentals, frameworks, or laws the country needs to ensure inclusive progress.
What does a ‘better vision for AI’ entail?
So far, AI policy has centred on devising and implementing technological solutions to social, economic, or constitutional problems. That is only one part of the picture. Any vision transcends the boundaries of technology deployment or policy. As an aspiring leader in world discourse, India can lay down the rules for other emerging economies by incubating, innovating, and implementing AI-powered technologies, and by grounding them in a structure of rich constitutional jurisprudence that empowers the individual, particularly the vulnerable in society. While the multiple policy instruments and the National Strategy are essential parts of the puzzle, the long-term goal can only be framed by how all the actors, interest groups and stakeholders engage with the notion of an AI-powered Indian society.
What does India lack in terms of having a holistic AI vision?
First, India needs to consider, more firmly, its rich constitutional ethos and regulate AI on a case-by-case basis that accounts for power asymmetries and places vulnerable populations at the forefront. Second, we must ‘constitutionalise AI’. The Indian Constitution can help define and concretise AI governance or ethical AI principles, and could be used as a medium to foster genuine social inclusion and the mitigation of structural injustice through AI. For example, a key feature of AI-driven applications is the “black box” that processes inputs and generates actionable outputs behind a wall of opacity to the human user. The black box essentially denotes that a human decision-making function has been delegated to the machine. As rightly pointed out in the National Strategy for AI published by NITI Aayog two years ago, merely opening up code may not deconstruct the black box, as not all people impacted by the AI application may understand the technicalities.
The constant aim should be explicability. This means the developer should be able to spell out how certain factors may be used to arrive at a particular set of outcomes under a given set of situations. The need for accountability is derived from the Right to Life provision under Article 21. As stated in Maneka Gandhi vs Union of India, any procedure established by a legal process must be seen to be “fair, just and reasonable” and not “oppressive, fanciful, or arbitrary.”
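The idea of explicability can be made concrete with a toy model. The sketch below is purely illustrative and not drawn from the interview: every feature name and weight is hypothetical. It shows the kind of per-factor breakdown a developer could “spell out” to explain how certain inputs led to a particular score.

```python
# Hypothetical sketch of an explicable scoring model: the model can
# report how each input factor contributed to its final output.
# All feature names and weights are invented for illustration.

WEIGHTS = {
    "years_of_driving": -0.4,   # more experience lowers the risk score
    "prior_claims": 1.2,        # past claims raise it
    "annual_mileage_k": 0.3,    # thousands of km driven per year
}
BIAS = 0.5

def score(features: dict) -> float:
    """Overall risk score for one applicant."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features: dict) -> dict:
    """Per-factor contribution to the score: the 'explicable' part."""
    return {k: WEIGHTS[k] * v for k, v in features.items()}

applicant = {"years_of_driving": 10, "prior_claims": 2, "annual_mileage_k": 15}
print(score(applicant))
print(explain(applicant))
```

A breakdown like `explain()` is far more legible to an affected person than the raw code, which is the distinction the NITI Aayog strategy draws between opening up code and genuine explicability.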
This brings me to the third point. Encouraging public participation is undoubtedly a vital element of the national vision of our democratic polity. While NITI Aayog has opened up some of its recent documents to robust public consultation where stakeholders submit inputs, more general discussion, debate and awareness-building should be encouraged. Civil society advocacy groups, think tanks, and other grassroots organisations have a crucial role to play here and should be encouraged to audit, scrutinise, and spread awareness of algorithmic systems.
In your writings, you mention AI+X, where AI is employed only if the gap X needs filling. How do you define a gap?
The first principle here has to be avoiding technological solutionism and relying instead on empirical research, widespread consultation, and pilot studies. Often, human problems have human fixes. Let us take the case of agriculture. The governments of Karnataka and Andhra Pradesh came up with well-thought-out AI applications for their farmers. Instead of knee-jerk reactions to agrarian woes, the states did useful research to identify the difficulty of predicting weather patterns, an essential factor for productive crop yields. Realising that aggregated data could enable better weather predictions, the governments delivered forecasts to rural areas via text messages, since internet penetration there was relatively low.
This is in stark contrast to the approach taken to regulating extremist content in the draft Intermediary Liability Guidelines published by MeitY, where “automated tools or other mechanisms” were blatantly advocated without considering the challenges of implementing them.
The second lesson from the two examples is the importance of specificity. Don’t throw AI at all the world’s problems. Narrow a problem down to a point where it is clear that delegating a human decision-making process enables a more efficient and equitable outcome, and then devise the solution.
While we have laws against discrimination, are they enough to address the resulting discrimination from algorithmic decision-making systems? What kind of legal frameworks need to be introduced to ensure human rights are not violated?
First, the constitutional law around disparate impact needs to be built upon by the judiciary and more closely understood in the context of algorithmic discrimination. Consider the development of ‘risk profiles’ of individuals for the determination of insurance premiums. Data processed by an algorithm might show that an accident is more likely to take place in inner-city areas due to narrower roads and a densely packed population. However, it is also a fact that minority communities tend to be concentrated in these areas, which means that algorithms could learn that minorities are more likely to get into accidents, thereby generating an outcome (‘risk profile’) that indirectly discriminates on the grounds of identity. Barring a few cases such as NM Thomas, this evolved understanding of discrimination has not yet come from the Supreme Court.
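The proxy mechanism described above can be simulated in a few lines. The sketch below uses entirely synthetic, hypothetical numbers: group identity is never an input to the ‘risk profile’, yet premiums keyed to location still fall harder on the minority group, because place of residence correlates with identity.

```python
# Synthetic simulation of proxy discrimination. All probabilities
# are invented for illustration; nothing here is real data.
import random

random.seed(0)

def make_person():
    minority = random.random() < 0.3
    # Hypothetical correlation: minorities more often live in inner-city areas.
    inner_city = random.random() < (0.8 if minority else 0.2)
    # Accidents depend only on location (narrow roads), never on identity.
    accident = random.random() < (0.15 if inner_city else 0.05)
    return minority, inner_city, accident

people = [make_person() for _ in range(10_000)]

def risk(inner_city):
    """Observed accident rate for an area type: the 'risk profile'."""
    rows = [a for _, ic, a in people if ic == inner_city]
    return sum(rows) / len(rows)

risk_by_area = {ic: risk(ic) for ic in (True, False)}

def avg_premium(group):
    """Average location-based risk score borne by each group."""
    rows = [risk_by_area[ic] for m, ic, _ in people if m == group]
    return sum(rows) / len(rows)

# The minority group ends up with a higher average score even though
# identity was never a feature: disparate impact via a proxy variable.
print(avg_premium(True), avg_premium(False))
```

This is exactly the shape of indirect discrimination that, as the interview notes, Indian disparate-impact jurisprudence has yet to grapple with.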
India possibly needs a law to regulate AI’s impact and enable its development without restricting fundamental rights. However, the regulation prescribed by the law should not adopt a ‘one-size-fits-all’ approach that views all use cases with the same level of rigour. Regulatory intervention should be based on the power asymmetries at play and the possibility of the use case adversely affecting the human dignity captured by India’s constitutional ethos. Rather than leaving behind the decades of jurisprudence that have guided Indian society thus far, AI law and policy should look to adapt existing legal frameworks to the context of present-day challenges.
India’s public sector is already implementing AI across domains. What broad goals should India pursue through AI to ensure inclusive growth?
I think the specific sector of focus should be determined based on existing information about the sector and the potential harms a misfiring of the solution could cause. As I have argued in another paper with Amber Sinha and Elonnai Hickok, the following broad questions need to be asked:
- Is there either a high likelihood or high severity of potential adverse human impact of the AI solution?
- Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?
Further, it is important to note that several projects are being rolled out as public-private partnerships (PPPs). It is essential to remember that whenever the state is involved in a public function, as per the Constitution, the entire gamut of fundamental rights and consequent redressal mechanisms applies. Therefore, the state should ensure that any private sector partner working with data, source code or deployment undertakes strict obligations to ensure full-fledged compliance.