On August 23, top executives from the Indian arms of Google, Amazon, Apple, Netflix, and Microsoft will appear before the parliamentary standing committee on finance over issues pertaining to anti-competitive practices in the digital space. The agenda for the meeting is ‘Oral evidence of the representatives of big tech companies on anti-competitive practices’. Earlier, representatives from Swiggy, Zomato, Ola, Flipkart, Oyo and the Indian Gaming Association were summoned over complaints of alleged anti-competitive practices.
But how are these tech firms engaging in anti-competitive practices when India already has a dedicated law prohibiting them? The answer lies in the specific provisions of the law. Although it prohibits anti-competitive practices such as collusive bidding, coordinating prices and production to mimic a monopoly, or restricting market output to increase prices and profits, the law does not specifically cover the use of AI as a means of collusion among competitors.
Most tech firms today leverage AI to extract the maximum market value possible. For example, e-commerce giants Amazon and Flipkart hold large repositories of data, which they analyse with the help of AI and ML to target advertisements based on consumer preferences. This helps them grab a large share of the market while marginalising competitors who lack comparable access to data. In many cases, apparent competitors may be using joint pricing algorithms that coordinate prices on their behalf as a means of indirect collusion.
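To see how such indirect coordination can arise, consider a minimal, purely illustrative sketch: two sellers independently delegate pricing to the same simple rule ("undercut the rival slightly, but never price below a fixed margin over cost"). The names, cost, and margin figures below are hypothetical; the point is that identical algorithms can settle at a stable, elevated price without any explicit agreement between the firms.

```python
# Illustrative sketch of algorithmic price coordination (all figures hypothetical).
# Two sellers use the same rule: undercut the rival by 1%, but never drop
# below COST * FLOOR_MARGIN. Neither firm communicates, yet both prices
# converge to the shared floor -- a form of indirect collusion.

COST = 50.0         # assumed unit cost (illustrative)
FLOOR_MARGIN = 1.4  # assumed minimum markup the algorithm enforces

def algorithmic_price(rival_price: float) -> float:
    """Undercut the rival by 1%, subject to the built-in price floor."""
    return max(rival_price * 0.99, COST * FLOOR_MARGIN)

def simulate(rounds: int = 50) -> tuple[float, float]:
    """Let both sellers react to each other for a number of rounds."""
    a, b = 100.0, 95.0  # arbitrary starting prices
    for _ in range(rounds):
        a = algorithmic_price(b)
        b = algorithmic_price(a)
    return a, b

if __name__ == "__main__":
    a, b = simulate()
    print(a, b)  # both settle at the shared floor of 70.0
```

Because the floor is baked into the shared algorithm rather than agreed between the firms, conduct like this can slip past laws drafted around explicit collusion, which is precisely the gap the article identifies.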
This legislative vacuum suggests that a specific legal provision restricting the use of AI in anti-competitive practices would be helpful. However, anti-competition is not the sole factor driving the need for dedicated AI legislation.
Myriad factors driving the need for legislation
The pandemic accelerated the adoption of AI across all sectors. Businesses resorted to AI-powered tools to minimise workforce deployment against the backdrop of social distancing.
According to the PwC report ‘AI: An opportunity amidst a crisis’, India recorded the highest uptake of AI during the pandemic compared with other major economies like the US, UK and Japan, with over 70% of Indian enterprises implementing AI in some form. The rapid advancement and extensive applications of AI-ML have also triggered discussions around the need for a dedicated law.
AI systems require large volumes of training data, which raises serious privacy concerns, especially when personal information is involved. Without sufficient privacy protections, such technology could capture and analyse a person’s private life without their awareness or consent, harming their interests. The harm could be economic, as when an individual’s credit card information is stolen, or emotional, as when an individual’s personal information becomes a subject of public discussion.
Last month, the government scrapped the Data Protection Bill. While personal data is protected under the fundamental right to life and certain provisions of the Information Technology Act, 2000, data other than personal data is not governed by specific legislation. Moreover, once anonymised, data ceases to be ‘personal’ and can be used for various analyses.
Of late, the use of AI with malicious intent has increased manifold. For example, deepfakes are now used to propagate misinformation that can have serious consequences for society. Vulnerabilities in machine learning models may also be exploited to launch adversarial attacks with severe real-world repercussions. Dedicated legislation is thus required to deal effectively with such cases.
Legislative efforts around the world
Several countries are coming up with dedicated laws governing AI systems. Last March, China introduced regulations for Internet Recommender Systems that provide ‘Internet information services’ within the mainland territory of the PRC. Chinese authorities feel that recommendation technologies can have a harmful influence on users and can stir up controversies leading to social divisiveness. Recommender systems and content decision systems can undermine individual privacy since they rely on the “collection and processing of private personal information of users”. They can also potentially undermine national security.
In 2021, the EU proposed the AI Act, which seeks to ensure that AI systems are safe and respect the fundamental rights of people under its jurisdiction. The act further seeks to prevent market fragmentation within the EU. The proposal has already become a centre of discussion across the world. Following it, Brazil’s Congress passed a Bill to create a legal framework for AI. In June 2022, Canada introduced the Digital Charter Implementation Act in the House of Commons. It comprises three pieces of legislation that seek to strengthen Canada’s data privacy framework and ensure the responsible development of AI.
India’s regulatory approach towards AI so far
Currently, there are no specific laws in India regulating AI, ML and big data. A few minimal obligations are mentioned in the IT Act, 2000 and the rules thereunder. MEITY, the executive agency for AI-related strategies, recently constituted four committees to bring in a policy framework for AI.
The Niti Aayog has come up with a list of seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and reinforcement of positive human values. These principles are expected to safeguard the public interest and promote innovation through increased trust and adoption.
The think-tank has also collaborated with several AI technology players to implement AI projects in critical areas like education, agriculture and health.
Additionally, the department of telecommunication has established an AI standardisation committee to develop various interface standards and India’s AI stack.
The judiciary plays a vital role in enforcing specific provisions. The Supreme Court and high courts have the constitutional mandate to enforce fundamental rights, including the right to privacy.
Domestic regulations apart, India is a part of the Global Partnership on Artificial Intelligence that guides the responsible development and use of AI, keeping human rights, inclusion, diversity, innovation and economic growth in mind.
Towards the end of last year, MEITY, in response to a question in the Lok Sabha, said that the government had no plans to legislate on AI. However, at the recently concluded ‘AI in Defence’ symposium, defence minister Rajnath Singh urged India to be ready to face the upheaval that AI will bring in the near future.
One way to deal with the impending upheaval is dedicated legislation. However, legislators and policymakers must ensure that any such framework leaves enough room for stakeholders to factor in newer requirements. It should be enabling in character and allow for innovation, with the internal programmes and protocols of various stakeholders designed around it.
Whether or not a dedicated AI legislation comes up in the near future, there’s no denying the dire need for it.