India’s Answer to Moore’s Law Death

Moore’s law will continue to hold strong, spearheaded by a combination of microchip advances and macro-architectural innovations

Semiconductor chip manufacturing and design work is in full swing, with large players building process nodes as small as 3nm. But there is a limit to how many transistors can be packed onto a single chip. Even with the introduction of multi-core processors, in which multiple cores are combined to increase power, concerns loom over whether this is enough to sustain progress in the long run.

Is Moore’s Law dead?

At this point, it seems as though we have reached saturation, and talk of Moore’s law—the observation that the number of transistors on a chip, and with it processing power, doubles roughly every two years—slowing down or nearing an end has grown louder.
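As a back-of-the-envelope illustration of what that doubling cadence implies, the short sketch below projects transistor counts forward from the Intel 4004’s roughly 2,300 transistors in 1971, assuming an idealised two-year doubling period (real chips deviate from this curve):

```python
# Back-of-the-envelope illustration of Moore's law-style doubling.
# Assumptions: ~2,300 transistors in 1971 (Intel 4004) and an idealised
# two-year doubling period; real chips deviate from this curve.
START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

for year in range(1971, 2031, 10):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    transistors = START_TRANSISTORS * 2 ** doublings
    print(f"{year}: ~{transistors:,.0f} transistors per chip")
```

Run as-is, this puts the early 2020s in the tens of billions of transistors per chip—the right order of magnitude for today’s largest processors—which is why the question of what happens when the doubling stalls matters so much.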

However, a new ray of light—the cloud—has been powering Moore’s law and will continue to do so for at least the next decade or two, propelling the most cutting-edge innovation. To give a fair picture, Mark Millis’ work charts how computations per second have grown over the years.

According to Millis, Moore’s law will continue to hold strong, spearheaded by a combination of microchip advances and macro-architectural innovations. These macro-architectural innovations include the cloud—millions of microchips linked together to perform common tasks—as well as techniques such as three-dimensional chip stacking and wafer-scale chips.

Thus, we are increasingly moving towards distributed computing. Instead of having a single system perform a particular task, the cloud will enable the use of multiple systems and tools—thereby distributing the job. 
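A minimal, single-machine sketch of that idea, using Python’s concurrent.futures to split one job across several worker processes (the processes here stand in for the many machines a cloud platform would coordinate; the workload and chunk sizes are illustrative assumptions):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Work handled by one worker: here, summing the squares of a chunk."""
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(n, workers=4):
    # Split the job into roughly equal chunks, one per worker.
    data = range(n)
    size = n // workers + 1
    chunks = [list(data[i:i + size]) for i in range(0, n, size)]
    # Each chunk is processed independently; partial results are merged at the end.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(distributed_sum_of_squares(1_000_000))
```

The same shape—partition the work, process the pieces independently, merge the results—is what cloud platforms apply across thousands of machines instead of a handful of local processes.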

At NVIDIA’s online GTC22 conference, CEO Jensen Huang said that Moore’s law is dead and the future is about developing new architectures and good chip design while stressing that “computing is not [only] a chip problem [but] a software and a chip problem.” 

All applications—social media, video conferencing and OTT, to name a few—generate staggering volumes of data. Adding to that, the AI-driven solutions so widely integrated into our everyday lives are all data-driven. Thus, we are moving from numeric applications to data-intensive applications, and we need advanced processors to compute over all of that data. “Generating data is becoming easier now, whereas processing is becoming difficult. And data crunching will require that kind of processing power—that is why Hadoop, big data applications, and cloud computing are becoming more popular these days,” an expert told AIM.
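The “split the data, crunch it in parallel, combine the results” pattern that Hadoop-style frameworks formalise is MapReduce. Below is a minimal, in-memory sketch of that pattern in plain Python—the word-count task and single-process shuffle are illustrative simplifications of what real frameworks do across many machines and on disk:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Shuffle: group together all the counts emitted for the same word.
    groups = defaultdict(list)
    for word, count in mapped_pairs:
        groups[word].append(count)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each word's list of counts into a single total.
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["data drives AI", "AI needs data and compute"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(shuffle(mapped)))  # e.g. {'data': 2, 'ai': 2, ...}
```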

The rise of AI chips

We are witnessing the rise of AI chips built to meet the needs of a data-driven society. McKinsey research projects that by 2025, AI-related semiconductors could account for about 20% of all demand, generating $65 billion in revenue. Large AI models, such as OpenAI’s GPT-3, DeepMind’s AlphaFold and Meta AI’s MultiRay, have recently risen to prominence, and it is graphics processors and AI hardware accelerators that run these models at scale.

A case in point: NVIDIA researchers were able to train an AI model on genome-scale data using GPU-based supercomputers built around the NVIDIA A100 Tensor Core GPU. The team reported a performance of over 1.54 zettaflops in training runs for what was the largest biological language model yet.

Adding to the list is Tesla’s D1 chip, which delivers 362 teraflops of processing power. The D1 powers Tesla’s supercomputer Dojo, whose primary function is computer vision for the company’s self-driving technology. Tesla has been collecting data from over a million cars to train the neural network using its AI chip.

India’s home-grown supercomputer Param Siddhi-AI was set up under the National Supercomputing Mission (NSM) at the Centre for Development of Advanced Computing (C-DAC). The supercomputer is built on the NVIDIA DGX SuperPOD architecture and delivers about 210 petaflops of processing power.

Supercomputing use cases

Analytics India Magazine spoke to an expert from C-DAC to discuss the role supercomputers will play in the future. The following are a few of the many applications we will see in the coming years:

Weather forecasting: Citing the example of weather forecasting, he said that with supercomputers we can predict weather cycles, which can hugely impact agriculture, an important driver of economic growth. Supercomputers can also predict natural disasters such as cyclones, aiding the timely evacuation of people. The mathematical model currently used for weather forecasting—the Weather Research and Forecasting (WRF) Model—cannot be run on normal computers; it requires a supercomputer.

Medical research: He said that during Covid, much of the research focused on drugs that could fight the pandemic-causing virus. For example, C-DAC published a paper in which researchers found that certain Ayurvedic drugs had the same effect as Remdesivir. His point was that we can obtain scientific evidence by simulating the drugs and studying the protein binding of the molecules on a supercomputer.

Research & development: Engineering design is crucial to automobile companies; the parameters of the engine and the car are simulated before the vehicle is built. Tata-CRL (Computational Research Laboratories), for instance, founded a dedicated supercomputing wing that houses Eka, once ranked among the ten fastest supercomputers in the world.

Optimisation: An example of optimisation would be in airlines—choosing the routes that give maximum fuel efficiency (a toy formulation of such a problem is sketched after this list).

Similarly, supercomputers can help in solving several other linear programming problems encountered in everyday life. 
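The airline fuel-efficiency example above is, at its core, a linear program: minimise fuel burned subject to operational constraints. Below is a toy formulation using SciPy’s linprog; the routes, fuel figures and passenger demand are made-up numbers purely for illustration:

```python
from scipy.optimize import linprog

# Toy problem: choose how many flights to run on each of three candidate
# routes so that at least 500 passengers are carried while burning the
# least fuel. All numbers here are made up purely for illustration.
fuel_per_flight = [30.0, 45.0, 25.0]     # tonnes of fuel burned per flight
seats_per_flight = [180, 300, 150]       # passengers carried per flight

# linprog minimises c @ x subject to A_ub @ x <= b_ub and bounds on x.
c = fuel_per_flight
A_ub = [[-s for s in seats_per_flight]]  # -seats @ x <= -500  <=>  seats @ x >= 500
b_ub = [-500]
bounds = [(0, 10)] * 3                   # at most 10 flights per route

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, result.fun)              # flights per route, total fuel burned
```

In this toy instance the solver simply loads the most fuel-efficient route (and, being a plain LP, allows fractional flights); real airline optimisation adds integer constraints, crew schedules and network effects, which is where supercomputing-scale solvers come in.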

Supercomputer-as-a-service

While describing the applications, the expert (who preferred to remain anonymous) also gave an approximate cost breakdown of setting up supercomputer infrastructure in India.

Taking the NVIDIA DGX machine as a reference, he said that one node, built with eight GPU cards, can cost over INR 2 crore and up to roughly INR 2.5 crore; a total estimate can then be made based on the number of nodes making up the supercomputer. In addition, there is the cost of communication—the InfiniBand switches and fibre-based connections—and, not to forget, cooling, electricity and data-centre costs. The investment is high, and only a few companies can afford their own supercomputer to run applications. As a result, there is a growing market for supercomputer-as-a-service. In India, C-DAC has opened its services to private enterprises and research organisations to run their applications.
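Taking those ballpark figures at face value, the capital cost scales roughly linearly with the node count plus fixed overheads. A minimal sketch of that arithmetic, where the per-node price reflects the roughly INR 2–2.5 crore quoted above and the interconnect and facility figures are placeholder assumptions:

```python
def estimate_cluster_cost_inr_crore(nodes,
                                    cost_per_node=2.5,          # INR crore per 8-GPU node, per the figure quoted above
                                    interconnect_per_node=0.2,  # placeholder for InfiniBand switches and fibre
                                    facility_overhead=10.0):    # placeholder for cooling, power and data-centre fit-out
    """Back-of-the-envelope capital cost of a GPU cluster, in INR crore."""
    return nodes * (cost_per_node + interconnect_per_node) + facility_overhead

# For example, a 20-node system under these assumptions:
print(estimate_cluster_cost_inr_crore(20))  # -> 64.0 INR crore
```

Under these assumptions even a modest 20-node system runs into tens of crores before electricity and staffing, which is why only a few organisations build their own and the as-a-service model is gaining ground.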

The expert AIM spoke to added that the Indian government has given a clear mandate that C-DAC should help organisations and startups, and there is a set procedure for it. Private companies can work with C-DAC in two ways: Expression of Interest (EOI) and Intent of Association (IOA). Additionally, there is a direct revenue model where anyone who wants to use the infrastructure can do so on a pay-per-use basis.

However, research organisations working on drug discovery, weather forecasting, or other such applications can use C-DAC’s services free of cost.

Make in India leads the way

There is also a need to strengthen the supercomputing infrastructure in the country. The bulk of the cost goes towards processors and RAM, which are not made indigenously, so India has to depend on Intel, AMD and other US-based companies to source these components. As a result, C-DAC has to accept whatever rates these companies quote.

We are already seeing advancements on the chip-manufacturing front, with several players sending fabrication-plant proposals to the government. But building a semiconductor ecosystem is challenging. To be a global leader, in addition to building chips, India should also be able to host multiple data centres capable of offering services on the cloud.

The digital prowess of a nation lies in its supercomputing power, and India will rise to it only if more components are built indigenously, so that the cost of building a supercomputer comes down and foreign reliance is cut. As of now, we already have Rudra, our own microcontroller board, along with home-grown processors like C-DAC’s Vega and IIT Madras’ Shakti. C-DAC is also working on developing an HPC processor. But more such organisations need to step up to lead the supercomputer revolution in India.

Ayush Jain

Ayush is interested in knowing how technology shapes and defines our culture, and our understanding of the world. He believes in exploring reality at the intersections of technology and art, science, and politics.