The Government of India has recognised that an AI-driven economy can transform the lives of millions. Leveraging AI for inclusive growth is one of the core principles identified in NITI Aayog’s National Strategy paper. It is a path to much-needed job creation across sectors, apart from creating new business opportunities and helping increase household incomes. But nationwide AI adoption is possible only by creating datasets that link the information systems powering the e-governance schemes already established in India.
India’s AI revolution will require new architecture designs and upgraded technologies to make real-time decisions efficiently. Integrating AI across Indian sectors will need a nationwide strategy centred on uniform AI standards and practices. Beyond that, an AI-centred smart economy will need extensive investment at both the technical and skills levels. Hence, the government, along with private sector players such as manufacturers, service integrators and cloud service providers, needs to come together and coordinate the development of an AI framework.
What steps are being taken to establish a technical framework for the adoption of AI in India?
There are technical challenges in building scalable and robust platforms that can ingest data sets running into zettabytes. In a recent discussion paper, India’s AI Standardisation Committee outlined the issues involved in developing a framework for an Indian AI stack.
This paper proposes a stack that seeks to remove the impediments to AI deployment by putting in place a comprehensive framework, one that will create an enabling environment to exploit AI productively in various walks of life. This will enable the development of a suitable AI stack with a mix of layers and interfaces that complement each other and achieve integration.
One of the major advantages of the proposed Indian AI stack is that it will facilitate open API integration and build the AI architecture from the ground up. It also ensures the creation of a common data controller, covering multi-cloud scenarios, both private and public, as part of the infrastructure layer.
To address some of these bottlenecks from a standardisation point of view, the Department of Telecommunications (DoT) formed an AI standardisation committee to develop various interface standards and India’s AI stack. The stack will help AI developers and coders implement the standards, with compliance verified through proper algorithmic auditing. This necessitates openness in AI algorithms and clearly defined data structures.
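The algorithmic auditing mentioned above can be sketched in code. One common approach, assumed here for illustration and not a mechanism prescribed by the committee, is a tamper-evident, hash-chained log of model decisions that an auditor can later verify. A minimal Python sketch:

```python
import hashlib
import json

# Illustrative sketch of tamper-evident logging for algorithmic
# auditing: each model decision is appended to a hash chain, so an
# auditor can detect whether past entries were altered. All names
# here are hypothetical, not part of any published AI stack spec.

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, decision: dict) -> None:
        # Each entry's hash covers the previous hash plus the decision,
        # chaining the log together.
        payload = json.dumps(decision, sort_keys=True)
        digest = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"decision": decision, "hash": digest})
        self._last_hash = digest

    def verify(self) -> bool:
        # Recompute the chain from the start; any tampering breaks it.
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["decision"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record({"model": "loan-scorer-v1", "input_id": "A123", "output": "approve"})
assert log.verify()
```

An auditor holding only the final hash can confirm that the recorded history of decisions has not been rewritten, which is one way the openness requirement could be made checkable in practice.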
Infrastructure is an important part of the Indian AI stack. It will harness hardware and software innovation to deliver unprecedented products and services in the economy, and can span multi-cloud scenarios, both private and public clouds. The broad specifications for this layer include the ML/DL software stack, training and inferencing development kits, frameworks, libraries, cloud management software and more. The layer will also ensure the creation of a common data controller, which will determine the purposes for which, and the means by which, personal data is processed for use by the various layers.
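As a rough illustration of what such a common data controller could look like, the sketch below registers approved processing purposes per dataset and lets upper layers check a request before processing. The class, its method names and the purpose-limitation rule are assumptions for illustration, not part of any published specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a common data controller that records, for each
# dataset, the purposes for which personal data may be processed, and
# lets upper layers of the stack check a request before processing.

@dataclass
class DataController:
    # dataset_id -> set of approved processing purposes
    registry: dict = field(default_factory=dict)

    def register(self, dataset_id: str, purposes: set) -> None:
        self.registry[dataset_id] = set(purposes)

    def is_allowed(self, dataset_id: str, purpose: str) -> bool:
        # Purpose limitation: processing is permitted only for purposes
        # explicitly registered for that dataset.
        return purpose in self.registry.get(dataset_id, set())

controller = DataController()
controller.register("health-records", {"diagnosis", "anonymised-research"})
assert controller.is_allowed("health-records", "diagnosis")
assert not controller.is_allowed("health-records", "advertising")
```

In a multi-cloud deployment, each cloud's processing services would consult this single controller rather than keeping their own ad hoc rules, which is the point of making the controller common to the infrastructure layer.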
The foundational component of any AI/ML approach is big data, as ML algorithms work best when fed large data sets. Thus, there is a need to ensure proper storage frameworks for AI, including multi-layer storage systems that can ingest and analyse multiple petabytes of big data. Data storage is the most important layer, regardless of the size and type of data.
To derive value from data, it needs to be processed, and to process it efficiently, it needs to be stored effectively. Even with the best data engineering, it is practically infeasible to augment and reuse data, and gain repeated value from it, without the right data storage. A clearly defined data structure is key to making data seamlessly accessible across domains and use cases.
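Multi-layer storage systems of the kind described above are often built by tiering data on access recency: frequently used data sits on fast media, rarely used data moves to cheap archival storage. The thresholds and tier names below are illustrative assumptions, not figures from the discussion paper:

```python
# Illustrative sketch of multi-layer (tiered) storage routing. The
# cut-off values and tier names are assumptions for illustration only.

def choose_tier(days_since_last_access: int) -> str:
    if days_since_last_access <= 7:
        return "hot"    # e.g. NVMe SSD or in-memory cache for active data
    if days_since_last_access <= 90:
        return "warm"   # e.g. standard object storage
    return "cold"       # e.g. archival storage for compliance retention

assert choose_tier(1) == "hot"
```

A policy like this is what lets a petabyte-scale platform keep ingest cheap while still serving the working set to training and inference jobs quickly.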
AI Integrity & Security
While AI offers huge potential to transform and realign the economy and society, there is an increasing realisation that, without proper safeguards, AI could also exacerbate problems for people. For AI to deliver a sustainable revolution, there is a need for an open environment with safeguards and oversight.
Through defined data structures, this layer will ensure security and governance. Such a broad plan would address issues across the industry as a whole and allow AI software to make fair decisions using unbiased data and transparent practices. Regulatory standards for data collection, interfaces, storage, analysis, application and customer use are also required. The stack design can control existing risks and preempt future ones through suitable monitoring and auditing of the AI’s design and analytics. Given the overwhelming flow of information, there is also a need to ensure encryption at different levels.
With the help of defined data structures and proper interfaces and protocols, the end-customer interface should be defined. This layer of the AI stack will have to support an appropriate consent framework for access to data by, or on behalf of, the customer. Typically, there could be different tiers of consent to accommodate different tiers of permissions. The information exchange also needs to ensure that proper ethical standards are followed while upholding the requisite digital rights. The data/information exchange also defines API access for interfaces to different types of applications. There will also be web-based user interface design tools to create, modify, test and deploy different UI scenarios.
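The tiered-consent idea can be sketched as an ordered scale where higher tiers subsume lower ones, and a data request is served only if the customer's granted tier covers what the request needs. The tier names and ordering below are hypothetical, since the paper does not fix them:

```python
from enum import IntEnum

# Illustrative sketch of tiered consent. Higher tiers subsume lower
# ones; the specific tiers listed here are assumptions, not a
# published consent taxonomy.

class ConsentTier(IntEnum):
    NONE = 0
    AGGREGATE_ONLY = 1   # anonymised, aggregate statistics only
    PSEUDONYMISED = 2    # record-level data with identity masked
    FULL_PERSONAL = 3    # identified personal data

def access_allowed(granted: ConsentTier, required: ConsentTier) -> bool:
    # A request succeeds only if the customer's granted tier is at
    # least the tier the request requires.
    return granted >= required

assert access_allowed(ConsentTier.PSEUDONYMISED, ConsentTier.AGGREGATE_ONLY)
assert not access_allowed(ConsentTier.AGGREGATE_ONLY, ConsentTier.FULL_PERSONAL)
```

An API gateway in the data/information exchange could run this check on every call, so consent is enforced at the interface rather than left to each application.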
For AI or ML embedded in national systems to succeed, the push has to come from the government, not merely from private tech companies. Government employees therefore need the skill sets to make AI successful, and hence data literacy is important.
Senior government leaders are becoming acutely aware of the value of data and are realising value outside traditional IT and cybersecurity use cases. State governments are taking an interest in how they can leverage data, for instance by overlaying road accident data, crime data and other data sets on geospatial maps to take a predictive look at scenarios.
If government officials, bureaucrats and policymakers are data literate, it will help them understand what it means to practise good cyber hygiene and to use the various tools and interfaces for rapid data-driven decision-making.
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM’s video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.