The US-based National Institute of Standards and Technology (NIST) has released the final draft of its plan to prioritise federal participation in the development of standards for artificial intelligence (AI). Titled U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, the draft contains guidelines on how the US should formulate AI standards. The plan outlines activities that would enable the administration to advance the use of AI, and lists principles that ought to inform any future standards for the technology. The draft represents one of the US government's most concrete steps toward setting guardrails on a technology that could have significant negative repercussions if left unchecked.
Here are the fundamental objectives outlined in the draft:
- Bolster AI standards-related knowledge, leadership and coordination among federal agencies to maximise effectiveness and efficiency
- Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools
- Support and expand public-private partnerships to develop and use AI standards and related tools to advance reliable, robust and trustworthy AI
- Strategically engage with international parties to advance AI standards for United States economic and national security needs
The draft says that AI standards developed in the years ahead should be flexible enough to adapt to new technologies while also minimising bias and protecting individual privacy. While some standards will apply across the broader AI marketplace, NIST advised the government to also examine whether specific applications require more targeted standards and regulations.
NIST Draft Is An Answer To China’s Comprehensive AI Policy
NIST’s draft for the US government stems from Executive Order 13859, signed by the President in February, which directed federal agencies to increase their investment in AI as global competitors like China work to bolster their own AI capabilities. China has the most comprehensive policy draft on AI, having released both guidelines and standards as early as July 2017 under the title The New Generation Artificial Intelligence Development Plan. The document included numerous policies and billions of dollars of investment in research and development from ministries, provincial governments and private corporations. The Chinese government wants to create a domestic AI industry worth 1 trillion RMB ($140 billion).
The China Electronics Standardisation Institute (CESI), the main research organisation in charge of creating AI standards under China’s Ministry of Industry and Information Technology (MIIT), released the Artificial Intelligence Standardisation Whitepaper in 2018, which summarises China’s AI standardisation policy framework and its plans for building AI capabilities going forward.
CESI has set up three working groups under the framework: one to draft rules for setting AI standards, a second focused purely on AI and open-source innovation, and a third on AI ethics.
Some of the Chinese AI standards driven by CESI have already been finalised, such as the Specification of Programming Interfaces for Chinese Speech Recognition Internet Services. More standards are in progress and due to be released in the near future, spanning various categories of testing and assessment for AI platforms within the country.
Where Does India Stand When It Comes To AI Standards?
India is nowhere near China in this area, partly because it lacks the kind of centralised control that China exerts over its population and organisations. Regulators here, like those in the US, are looking at tighter cooperation between private companies, government, not-for-profits and educational bodies, which will take time.
In a positive move in this direction, the government-sponsored think tank NITI Aayog has identified a number of challenges to creating the right environment for AI innovation in its report titled National Strategy For AI. These include:

- Lack of broad-based expertise in research and application of AI
- Absence of enabling data ecosystems, including access to intelligent data
- High resource cost and low awareness for adoption of AI
- Privacy and security concerns, including a lack of formal regulations around anonymisation of data
- Absence of a collaborative approach to adoption and application of AI
NITI Aayog has adopted a three-pronged approach: undertaking exploratory proof-of-concept AI projects in various sectors, crafting a national policy framework for a vibrant AI ecosystem in India, and collaborating with experts and stakeholders. It has partnered with several leading AI technology players to implement AI projects in core areas such as agriculture and healthcare.
The think tank made more than 30 policy recommendations to invest in scientific research, encourage reskilling and training, accelerate the adoption of AI across the value chain, and promote ethics, privacy and security in AI. Its flagship initiative is a two-tiered, integrated approach to boost AI research.
The first tier consists of ‘Centres of Research Excellence’ in AI, or COREs, which focus on fundamental research. In the second tier, the COREs act as technology feeders for the ‘International Centres for Transformational Artificial Intelligence’, or ICTAIs, which will concentrate on developing AI-based applications in areas of national importance. The report also suggests setting up a ‘Consortium of Ethics Councils’ at each CORE and ICTAI to develop sector-specific guidelines on privacy, security and ethics, and a ‘National AI Marketplace’ to increase market discovery and reduce the time and cost of data collection.
Why Do We Need AI Standards?
As the technology advances rapidly, there seems to be a race among major nations to formulate AI standards. This obviously has to do with advancing innovation. But beyond innovation, there is also the question of the ethics of AI, which governments expect to tackle with some amount of regulation. Artificial intelligence depends on huge data sets concerning individual users, and there is an ongoing debate about who owns that data, and how it is used to further drive AI-based applications and related services.
So, there needs to be a coordinated effort among all the stakeholders, private and public. Governments around the world aim to set up systems and strategies governing how AI algorithms are created and how data is gathered in the process. Developing standards that enhance the quality of AI products and services may also reduce the risk of societal backlash against the technology. If India wishes to catch up with China, the US and other countries in AI, it will have to invest heavily in establishing the needed technology ecosystem and formulate a framework for ethics and standards in AI.
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast called Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.