The need for regulation in AI is widely recognised and actively discussed. During his testimony before a US Senate committee, Sam Altman, CEO of OpenAI, advocated for international cooperation and leadership in regulating AI. He urged the US to play a leading role in establishing a global organisation similar to the International Atomic Energy Agency (IAEA), focused specifically on AI regulation.
Last month, legislators in Europe urged US President Joe Biden to convene a global summit aimed at addressing the need for control and regulation of AI development.
Popular AI critic Gary Marcus made a similar case in an Economist article, advocating for the establishment of an international, non-profit organisation of governments, technology companies, non-profits, academia, and society at large to foster collaboration and address governance and technical challenges in AI.
Even though governments in Europe, the US and many other countries are exploring AI regulation, the Indian government has said it won’t regulate AI. The Modi-led administration perceives AI as a catalyst for progress and innovation, and therefore joining an international agency for AI regulation may not align with India’s vision and goals.
Technology and socio-economic conditions
Technology’s impact and implications are shaped by human choices, values, and societal contexts. Due to India’s rich diversity, the impact of AI can vary within the country, influenced by its unique cultural, social, and political contexts.
“We have seen that ChatGPT performs poorly when prompted in non-English languages, as its data set for non-English languages is less pronounced, and there are other technical complications in natural language processing. Since users of AI services may come from different strata of society, the usability of such technologies may also vary depending on the user,” Kamesh Shekhar, programme manager at The Dialogue, a research and public-policy think tank, told AIM.
India also has unique socio-economic challenges, such as income inequality, poverty, and access to basic services. In India, AI is already playing an important role in addressing these challenges by enabling innovative solutions in areas like healthcare, agriculture, education, and governance. This was one of the primary reasons why India decided against regulation, for now.
Besides, AI would also impact India’s workforce differently compared to the Western countries. “For India, which has a large pool of low-skilled workers, the displacement of workers in certain sectors could have a more significant impact than in the West, where the workforce is more skilled and better equipped to adapt,” Ibrahim Khatri, founder and CEO of Privezi Solutions, told AIM.
The perspective here is that Western countries may be more open to lenient regulations, as the impact on their workforce may not be as significant as in India. In contrast, India may require stricter regulations to mitigate the potential impact on jobs.
A form of neo-colonialism
If such a body is formed, and India becomes a part of it, will India have a significant influence or will it be an organisation created by the West to fulfil Western interests? India, despite being a founding member of the United Nations (UN), and despite consistently supporting the aims and objectives of the UN, still does not have a permanent seat in the UN Security Council.
“Whether India chooses to participate in an international body to regulate AI would depend on a range of factors, including the mandate and scope of the body, our priorities, and the potential benefits and drawbacks of joining,” Khatri said.
Giada Pistilli, the principal ethicist at HuggingFace, is also not in favour of any international agency regulating AI. “I’m not at all in favour of an international ethical committee for AI. I mean, that’s kind of a form of neo-colonialism, also, from my perspective, because we’re not going to impose our own Western views on India or China, or the African continent as well,” she told AIM.
For example, UNESCO is developing an ethical charter for AI, seen as a significant step towards a universal ethical framework. However, she found the approach cumbersome.
“Striving for generality in such a framework inherently limits the ability to delve into specific cultural backgrounds and diverse value systems. Imposing a singular viewpoint without considering these nuances can lead to the imposition of one’s own perspective rather than fostering inclusivity and respect for diverse perspectives,” she added.
India should self-regulate
It is worth noting that India is already a member of the Global Partnership on Artificial Intelligence (GPAI), an international initiative focused on promoting responsible and human-centric development and utilisation of AI.
However, when it comes to regulation, the Indian government, if it ever decides to regulate AI, should do so on its own terms. Even though the government has decided against regulating AI for now, it has recognised the ethical concerns related to AI, as highlighted in the National Strategy for AI (NSAI) released in June 2018.
In fact, some government bodies have enacted a patchwork of AI regulations: the Ministry of Consumer Affairs has issued guidelines for companies promoting ‘AI-enabled’ products, and the Bureau of Indian Standards (BIS) is aligning Indian AI standards with international ones. Additionally, the Indian Council of Medical Research has released guidelines to address ethical considerations in the use of AI in biomedical research and healthcare.
Furthermore, “the Open Network for Digital Commerce (ONDC) has mandated sellers to reveal their algorithms, and the ONDC conducts audits to ensure that the platforms continue to uphold their commitment to transparency,” Khatri said. He believes that the Digital India Act, even though not explicitly focused on regulating AI, can affect AI regulation in India. “The Digital India Act will address data protection, privacy, and cybersecurity issues, among other things,” he said.
“As AI developers will be collecting and using massive amounts of data to train their algorithms to enhance their AI solutions, they might be classified as data fiduciaries,” Shekhar added.