NITI Aayog published the National Strategy for Artificial Intelligence (NSAI) discussion paper in June 2018 to outline the government's role in advancing AI. Since then, it has used approach papers to turn the strategy's key recommendations into implementation plans, working from the top down. One such approach paper, 'AIRAWAT: An AI Specific Cloud Compute Infrastructure', authored by Anna Roy, Senior Adviser, NITI Aayog, was released this January and proposes the design, governance structure, and mechanism for selecting the various partners involved in implementing AIRAWAT.
Features Of AI-Based Infrastructure Sought Under AIRAWAT
So far, AI computing in India has largely relied on cloud-based AI platforms from vendors such as AWS and Microsoft Azure, but the approach paper highlights their limitations, including data-sharing concerns and unpredictable, high data bandwidth costs. According to the paper, this model suits small subscription-based requirements typical of enterprise use, not large-scale AI research.
Accordingly, to meet the need for nationwide AI infrastructure and address the challenges arising from the lack of access to computing resources, the approach paper recommends that an AI-specific compute infrastructure be established. Such infrastructure would adequately serve the computing needs of Centres of Research Excellence (COREs), International Centres for Transformational AI (ICTAIs), innovation hubs, startups, AI researchers, and students.
According to the AIRAWAT approach paper, the AI-specific infrastructure should have the following features and capabilities:
(a) Multi-tenant multi-user computing support
(b) Resource partitioning and provisioning, a dynamic computing environment
(c) ML / DL software stack – training and inferencing development kit, frameworks, libraries, cloud management software
(d) Support for a variety of AI workloads and ML / DL frameworks, giving users a choice
(e) Energy-saving, high teraflops per watt per server rack space
(f) Low latency high bandwidth network
(g) Multi-layer storage system to ingest and process multi-petabytes of big data
(h) Compatibility with National Knowledge Network (NKN) for a multi-tenant environment.
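As a rough illustration of feature (e), the sketch below computes a rack's aggregate compute and its teraflops-per-watt efficiency. All hardware figures are hypothetical placeholders for illustration only; the approach paper does not specify rack configurations.

```python
# Rough illustration of feature (e): aggregate compute efficiency of one
# server rack, expressed as teraflops per watt. All figures below are
# hypothetical placeholders, not specifications from the approach paper.

def rack_teraflops_per_watt(servers_per_rack, accelerators_per_server,
                            tflops_per_accelerator, watts_per_server):
    """Return (total TFLOPS, TFLOPS per watt) for a single rack."""
    total_tflops = servers_per_rack * accelerators_per_server * tflops_per_accelerator
    total_watts = servers_per_rack * watts_per_server
    return total_tflops, total_tflops / total_watts

# Hypothetical rack: 8 servers, each with 4 accelerators of 100 TFLOPS,
# each server drawing 3,000 W (CPUs, memory, and cooling overhead included).
tflops, efficiency = rack_teraflops_per_watt(8, 4, 100.0, 3000.0)
print(f"Rack compute: {tflops:.0f} TFLOPS, efficiency: {efficiency:.3f} TFLOPS/W")
```

Comparing this ratio across candidate hardware is one simple way a procurement team could rank bids on the energy-efficiency criterion.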
Task Force Proposed For Executing AIRAWAT
The approach paper by NITI Aayog recommends setting up an inter-ministerial Task Force with cross-sectoral representation to lead the implementation of AIRAWAT. The Task Force may include representatives of both the developer community and domain specialists in artificial intelligence, to ensure that the design of the AI infrastructure facility is robust and grounded in the real needs of all stakeholders in India. The Task Force should also secure funding and define a timeline for setting up AIRAWAT, and invite proposals from system integrators through an open bidding process, using the model request-for-proposal document prepared by NITI Aayog.
What About Funding For AIRAWAT?
As per the approach paper, AIRAWAT should be viewed as an essential public asset financed by the Government of India. Initial funding for AIRAWAT may be provided under the National Supercomputing Mission (NSM).
The approach paper recommends that AIRAWAT be government-funded and hosted at an academic institution (the Host Institute). The Host Institute may be selected through a limited call for proposals from top-tier educational institutions, via a competitive process, based on demonstrated capacity to host such an advanced facility and a commitment to provide ongoing support as required.
The estimated financial cost of building AIRAWAT will include the following components:
(a) Equipment (GPU/TPU supercomputers, storage, and switches for network connectivity)
(b) Facility arrangement and upgrade
(c) Recurring expenses, viz. maintenance, staff, training workshops, contingency reserves, and so on.
The equipment costs will have the following sub-components: AI-specific processing units (GPUs, TPUs, or similar, as applicable); other servers (for data ingestion, cluster management, inferencing, and acceleration); data centres; software, for both hardware management and ML/DL; storage capacity; network capacity; and so on.
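To show how these capital and recurring components combine into an overall estimate, the sketch below totals them over a multi-year horizon. Every rupee figure and the five-year horizon are hypothetical placeholders; the approach paper's actual cost estimates are not reproduced here.

```python
# Illustrative cost roll-up for the components listed above. All figures
# (in crore INR) and the operating horizon are hypothetical placeholders,
# not estimates from the AIRAWAT approach paper.

capital_costs = {
    "AI-specific processing units (GPU/TPU)": 400.0,
    "Other servers (ingestion, cluster mgmt, inferencing)": 60.0,
    "Data centre and facility upgrade": 80.0,
    "Software (hardware + ML/DL stack)": 30.0,
    "Storage": 50.0,
    "Network": 20.0,
}

annual_recurring = {
    "Maintenance": 25.0,
    "Staff": 10.0,
    "Training workshops": 3.0,
    "Contingency reserve": 5.0,
}

years = 5  # hypothetical operating horizon
capital = sum(capital_costs.values())
recurring = years * sum(annual_recurring.values())
total = capital + recurring
print(f"Capital: {capital:.0f} crore, {years}-year recurring: "
      f"{recurring:.0f} crore, total: {total:.0f} crore")
```

Separating one-time capital outlay from recurring operating expenses, as the paper does, lets planners see how the total commitment grows with the facility's lifetime.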