As the world of artificial intelligence (AI) evolves, there is a growing need for rules that enable these technologies to operate safely. And while it is critical to place AI within a legal and regulatory environment, it is just as important to ensure that doing so does not stymie innovation and growth.
Unlike promising technologies of a few decades ago, the AI industry has expanded so rapidly that it has largely outpaced the regulatory framework. This has given companies free rein to use the technology as they see fit, breeding suspicion among the public — a trend that industry leaders like SpaceX CEO Elon Musk and Google CEO Sundar Pichai have taken cognisance of.
Thus, AI regulation framed in a way that preserves the competitive advantages of these companies while protecting consumers from new risks is the need of the hour.
Data Protection & Privacy
Since AI systems are fueled by data, companies are increasingly using customer information — with or without their knowledge. This means that without proper regulation, the responsible use of AI by companies is largely predicated on trust. This is not enough.
What is more, not just organisations, but countries across the world are looking for a competitive advantage in AI — one that will allow them to strengthen their industrial ecosystem and military prowess, among others.
Given the difficulty in even establishing a clear definition of AI — one that is commonly accepted by those who operate in this ecosystem — a universal law that stretches from company to company, and nation to nation, would be impractical. Instead, effort needs to be directed towards growing local AI regulatory ecosystems.
Although the EU has made some headway with the General Data Protection Regulation (GDPR), which lays down data protection principles relevant to AI, wider participation is required to help safeguard the use of data.
Creating An AI Strategy
One way governments can accomplish this is by inviting industry experts to brief them on how AI is being deployed and the challenges it is likely to face. However, even if the onus of laying out an AI policy rests on governments, the private sector should initiate these conversations rather than leave less-informed legislators to set the ball rolling.
A good place to start would be to reach a consensus on the definition of AI. Following this, industry players should provide informed insights and articulate their needs and demands. Concurrently, they should demonstrate how AI applications can improve the productivity of government activities — for instance, by protecting sensitive information and mitigating emerging cyberattacks, augmenting effective decision-making, and automating certain processes.
India’s AI Story
India’s strategic positioning with respect to AI initiatives has been inadequate. This is reinforced by the fact that it lags behind many countries in this space, including France, China, Israel, the UK, the US, Canada, Germany, Japan and South Korea.
According to a report released by Niti Aayog, the lack of regulation around the anonymisation of data has stood in the way of India truly embracing the benefits of AI. To counter this, India needs to identify key governance issues related to AI and propose relevant policy remedies. Here, it can take some inspiration from the guidelines released by the US for the regulation of AI applications.
That charter establishes a framework upon which any prospective legislation can be built. What is more, by taking a sectoral approach, the US has demonstrated that it makes little sense to chart a one-size-fits-all path. This, in turn, encourages sectoral regulators to formulate rules within their own jurisdictions, making the overall regulatory process more effective.
However, the real challenge will be to create rules that protect consumers while promoting industry innovation. Rather than adopting regulation reluctantly and perpetuating the belief that it is a hindrance to development, India’s AI regulatory framework should capture both these imperatives. Moreover, this regulatory structure needs to be flexible and continually updated to reflect the understanding of new risks.
Outlook
AI is increasingly impacting all aspects of our daily lives. At the same time, we are becoming ever more cognisant of the risks it can pose. Since the potential it holds is vast, outright bans or prohibitions on its applications are not the answer.

Instead, policymakers need to chart a regulatory roadmap that encapsulates both consumer protection and the growth of innovation.