The pressing need for AI regulation has sparked a significant debate in the AI community, resulting in nothing short of a civil war among researchers. Amid such a charged atmosphere, the Indian government has stoked controversy by going against the tide. In a written reply in the Lok Sabha, the Ministry of Electronics and IT (MeitY) said, “The government is not considering bringing a law or regulating the growth of artificial intelligence in the country.”
Elsewhere in the world, Italy became the first Western country to ban ChatGPT over privacy concerns. Meanwhile, the European Union (EU) is bringing in the much-anticipated AI Act this year. In the US, too, the government has released a blueprint for an AI Bill of Rights.
Why won’t India regulate AI?
The Indian government has taken a proactive stance on technology, particularly AI, intending to position India as a global leader in the field. It sees AI as a ‘kinetic enabler’ and wants to harness its potential for better governance.
In the written response, MeitY stated, “The government is harnessing the potential of AI to provide personalised and interactive citizen-centric services through Digital Public Platforms.” The government feels that putting stringent regulations in place could stifle innovation. The AI Act drafted by the EU, for example, is seen by many as too stringent. It could potentially “put a lot of unnecessary bureaucracy over companies that are innovating quickly”, Robin Röhm, founder at Genie AI, said.
With no AI-specific regulations in force anywhere at present, Europe will be the first region in the world to draft laws specific to AI. It makes sense for India to observe and assess the situation rather than hastily regulate the technology. Besides, despite its rapid advancement, the government might feel the technology is still at a nascent stage and that regulation is not the need of the hour.
Yann LeCun, chief AI scientist at Meta, who refused to sign the open letter seeking AI regulation, shares a similar opinion. He said, “Current AI systems have limited capabilities, so asking for safety measures is premature.”
It does not mean no regulation at all
While the Indian government has said no to AI regulation for now, that certainly does not mean there are no checks and balances in place. MeitY did state that various central and state government departments and agencies have begun efforts to standardise responsible AI development.
The government has also recognised the ethical concerns related to AI, which it highlighted in the National Strategy for AI (NSAI) released in June 2018. Further, the technology and its creators will be subject to existing and upcoming laws.
“For instance, the upcoming Digital Personal Data Protection Bill 2022 (DPDPB 2022) will apply to AI developers who develop and facilitate AI technologies. As AI developers will be collecting and using massive amounts of data to train their algorithm to enhance the AI solution, they might classify as data fiduciaries,” Kamesh Shekar, programme manager at the Dialogue, a public policy think tank, told AIM.
This implies that AI developers may have to comply with key privacy and data-protection principles, such as purpose limitation, data minimisation, consensual processing and contextual integrity, as enshrined in DPDPB 2022. “Besides, as contoured during Digital India Act (DIA) consultation, the government is also considering having provisions within the act which would define and regulate high-risk AI systems,” Shekar added.
A rather strategic approach
While India has opted out of regulating AI for now, it could still consider alternatives, such as market mechanisms, to tackle the situation.
Shekar suggests that the government could put in place mechanisms and incentives that let the market take its course, with AI developers moving away from fail-fast fundamentals and treating consumer protection and safety as a value proposition that confers a competitive advantage. “For instance, creating a market for principles-based accreditation, and enabling a competitive edge for AI developers. The accreditation process must have a well-laid out process and procedure that balances transparency and safeguards to protect intellectual and proprietary information.
“Besides, the accreditation process must be aspirational in a way that it pushes the AI developers toward performing better on the user outcome, i.e., securing informational privacy through better data protection standards, having child safety options etc,” he said.
India is AI positive
According to the recently released annual Artificial Intelligence Index report by Stanford University, around 71% of Indians felt positive about AI products. One of the key takeaways from the report was that a large proportion of GitHub AI projects were contributed by software developers in India—24.2%, to be precise.
Besides, India has one of the highest numbers of ChatGPT users in the world. The chatbot has already landed OpenAI in trouble in multiple jurisdictions for misinformation and for collecting, using and disclosing personal information without consent. With the government leveraging the technology extensively for governance as well as for delivering services to citizens, it becomes even more critical to have frameworks in place to mitigate the risks posed by the technology.
Today, AI is pervasive, and the rise of generative AI has only expedited its widespread adoption. “Now, when these implementations are happening on such a large scale, the possibilities of the technology going wrong is also limitless,” Utpal Chakraborty, chief digital officer at Allied Digital, told AIM.
Regulations might still be necessary
The dangers posed by AI are aplenty. “AI and automation could put people out of work in many different fields, such as manufacturing, transportation, and customer service. This could have a significant impact on the Indian economy and employment levels,” Vikas Kakkar, founder of amara.ai, told AIM.
Hence, an outright no to regulating the technology might not be the right approach. Instead, the government should take cognisance of the dangers the technology poses at its current stage as well as those it could pose as it matures. It is imperative that the government take a proactive approach rather than wait for the technology to cause harm before regulating it.
Chakraborty too believes it is the right time to regulate AI. “But it is also essential to ensure that the regulation is not so stringent that it hampers innovation or slows down the implementation of the technology,” he said.
In an earlier interaction with AIM, Chakraborty had said that regulators do not really understand the nitty-gritty of the technology, and hence must sit at the table with its developers. Currently, many renowned names in the world of AI are calling for regulation. Turing awardee Yoshua Bengio, recognised worldwide as one of the leading experts in AI, has also warned about the dangers AI could pose.
“There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviours that deviate from human goals and values,” he said in a blog post. Bengio believes it is essential to invest public funds in the development of AI systems dedicated to social priorities often neglected by the private sector.