Should India Regulate Foundational Models Akin to the EU?

The EU AI Act mandates that developers of foundational models disclose details about their models, including training datasets

Earlier this month, the European Union (EU) became the first jurisdiction to reach a landmark provisional agreement on AI regulations — one critical aspect was regulating foundational models.

The EU AI Act mandates transparency from developers of foundational models such as OpenAI, Cohere, and Google. They are required to disclose details about their models, including the training datasets used, to regulators. Stricter rules apply to more advanced models, and non-compliance may result in fines of up to 7% of global revenue.

In contrast to Europe, India sees AI as a kinetic enabler of, and significant contributor to, its digital economy. While the Indian government, through its various representatives, has acknowledged the risks, it is not currently working on any standalone law to govern AI. However, the upcoming Digital India Act is expected to have provisions designed specifically to tackle this issue.

No doubt, the EU has set a precedent by becoming the first jurisdiction in the world to agree on comprehensive AI rules, and many others will study the Act closely to assess its implications and consider similar regulatory measures. India, too, will closely examine the Act and how it develops, but the question that arises is whether it should take a similar approach.

Is regulating foundational models a good idea?

The EU’s approach has faced criticism for being perceived as too stringent and potentially stifling innovation. Experts believe it could hamper the competitiveness of European startups against those in the US, UK, or China. 

France, whose AI ecosystem includes Mistral AI and the French-founded Hugging Face, along with Germany and Italy, had previously pushed the idea of self-regulation for makers of generative AI models in an apparent effort to support local startups.

“Regulating foundation models is regulating research and development. That is bad. There is absolutely no reason for it, except for highly speculative and improbable scenarios,” Yann LeCun, Chief AI Scientist at Meta, posted on X. 

The same sentiment was notably echoed by French President Emmanuel Macron. “We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea,” he said, criticising the Act.

Interestingly, the United Kingdom (UK), having exited the EU in 2020, has declared its intention not to hastily implement any AI regulations. “How can we write laws that make sense for something that we don’t yet fully understand?” UK Prime Minister Rishi Sunak said.

What approach should India take?

India, in contrast, aims to leverage technology to enhance the lives of its more than a billion citizens. Nonetheless, the government has also acknowledged the risks associated with the technology. In fact, Prime Minister Narendra Modi proposed creating a responsible, human-centric governance framework for AI during India’s G20 presidency.

“Higher the risk, stricter the rules would be the most basic yet most powerful approach of the EU AI Act, named the risk-based approach,” Aditya Malik, founder and CEO of ValueMatrix, told AIM. 

He believes that not just India but other nations, too, can draw many viable points from the EU AI Act to strengthen AI regulations tailored to their own contexts. “We must ensure risk identification and mitigation is sound and there is strong surveillance that keeps a watchful eye on the AI systems.”

Developers such as OpenAI refrain from disclosing critical details about their models, including parameter count, size, architecture, hardware, training compute, dataset construction, and training methods. This absence of transparency raises legitimate concerns.

India, through the Digital India Act, may seek a comparable degree of transparency, which would be welcome, but it should not come at the cost of innovation. Moreover, the Digital Personal Data Protection Act applies to developers who build and deploy AI technologies.

“As AI developers will be collecting and using massive amounts of data to train their algorithm to enhance the AI solution, they might classify as data fiduciaries,” Kamesh Shekar, programme manager at The Dialogue, a public policy think tank, told AIM.

This could be the driving factor behind OpenAI’s decision to enlist Rishi Jaitly, a former Twitter executive, to help the company navigate the intricacies of India’s AI policy and regulatory landscape.

OpenAI has previously lobbied the EU to make the rules more favourable to it. Moreover, the new rules will compel technology companies to inform individuals when they are interacting with a chatbot or being subjected to biometric categorisation or emotion recognition systems.

The Act also mandates the labelling of deepfakes and AI-generated content, and the design of systems that facilitate the detection of AI-generated media. These are welcome measures, and India should consider similar ones.

Irrespective of the steps India takes, it would be critical to ensure the regulations are not so stringent that they hamper the country’s startup ecosystem.

India’s generative AI ecosystem is getting started 

In India, the generative AI ecosystem is just getting started. For example, Sarvam AI is developing a platform to deploy Indic LLM-powered applications that can have an impact at population scale. Bhavish Aggarwal, founder of Ola, also recently announced his AI venture Krutrim, which aims to build vernacular LLMs.

To help Indian startups, the Narendra Modi-led administration is planning to build AI computing capacity to further drive India’s AI ecosystem. Moreover, the government has emphasised the importance of developing a sovereign AI programme, which it believes can be achieved through public-private partnership.

At the recently held GPAI Summit in New Delhi, Indian Prime Minister Narendra Modi recognised the pivotal role of AI in accomplishing Sustainable Development Goals (SDGs), particularly within the realm of agriculture.

Hence, opting for stringent regulations at this stage would harm India. The country should adopt a balanced approach to AI regulation, avoiding strict measures that stifle innovation. At the same time, some form of regulation is crucial to safeguard citizens from the potential negative impacts of AI models.


Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.