Are We Seeing A Deluge Of Supercomputers

Srishti Deoras

Earlier this year, Microsoft announced a supercomputer hosted in Azure, developed in collaboration with OpenAI. The company said the supercomputer could train a variety of artificial intelligence models and comes with more than 285,000 CPU cores, 10,000 GPUs, and 400Gbps of network connectivity for each GPU server.

Not just this, Hewlett Packard Enterprise recently acquired supercomputing leader Cray, following which it introduced the HPE Cray supercomputing line, capable of performing data-centric AI workloads at exceptionally high speed. HPE hardware also powers the new TX-GAIA (Green AI Accelerator) computing system at MIT's Lincoln Laboratory Supercomputing Center, which has been ranked as the most powerful AI supercomputer at any university in the world. With a performance of 100 AI petaflops, it can carry out complex deep neural network operations with ease.

Fugaku, built by Fujitsu, is another recently constructed supercomputer tailored to run machine learning algorithms. Still in testing mode, it is expected to be operational from 2020 and should prove crucial for scaling up machine learning workloads at a quick pace.



Furthermore, the US Department of Energy (DoE) has announced plans to build the world's fastest supercomputer, Frontier, which will be jointly developed by AMD and Cray. With a computing speed of 1.5 quintillion calculations per second, it is expected to join Intel's Aurora as the second of the two exascale systems the US DoE has planned for 2021. It is expected to take on advanced tasks in AI, cancer treatment research, nuclear physics and more. Aurora, which is speculated to be worth $500 million, is five times faster than IBM's Summit and was also designed from the ground up with AI in mind.

In fact, one of the latest developments in the field is NEC's SX-Aurora TSUBASA, one of the smallest supercomputers designed for AI and ML workloads. NEC has long been renowned for its vector processors, which give it an advantage in AI tasks. We recently covered an article comparing it with Nvidia's A100, one of the most preferred systems today for carrying AI workloads.

Not just the tech giants, but startups are also venturing into the supercomputer domain. For instance, Canada-based Q Blocks uses peer-to-peer computing technology to build affordable supercomputers for small organisations and individual researchers who cannot afford expensive machines, offering powerful GPUs at a reasonable cost for tasks in AI and data science.

India, too, has begun to take a lead in the supercomputer market, with machines such as PARAM, Anupam, Pratyush and Mihir developed for tasks such as data analysis, weather forecasting, and more. In fact, the Council of Scientific and Industrial Research joined hands with Nvidia to set up a Centre of Excellence at CEERI, which was speculated to house India's first-ever AI supercomputer.

These and many more instances suggest a flood in the number of supercomputers across the globe. These computing beasts, once owned by a select few companies such as Nvidia and AMD, are now being explored by many other companies, research organisations and even startups.

Why Is The Supercomputer Market Growing At A Rapid Pace

While supercomputers have long been a key requirement in domains such as physics and space science, the increased adoption of artificial intelligence and machine learning has driven demand for supercomputers that can perform about a quadrillion computations per second. In fact, the next wave, in the form of exascale supercomputers, is pushing capability in these domains even further.
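For context, these scales follow the standard SI prefixes: a petaflop machine performs a quadrillion (10^15) floating-point operations per second, while an exascale machine performs a quintillion (10^18). A quick sketch relating the figures quoted in this article (the prefix definitions are standard; the machine numbers are as reported above):

```python
# Standard SI prefixes for floating-point operations per second.
PETAFLOPS = 10**15  # a quadrillion operations per second
EXAFLOPS = 10**18   # a quintillion per second, the "exascale" threshold

# Frontier's quoted 1.5 quintillion calculations per second, in petaflops:
frontier = 1.5 * EXAFLOPS
print(frontier / PETAFLOPS)  # 1500.0, i.e. 1,500 petaflops

# TX-GAIA's quoted 100 AI petaflops, as a fraction of one exaflop:
print(100 * PETAFLOPS / EXAFLOPS)  # 0.1
```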

Studies suggest that the supercomputer market will grow exponentially over the next few years, at a CAGR of 28% between 2020 and 2024, with companies such as HPE (which acquired Cray), Dell, Atos, Fujitsu and IBM leading the way.


As organisations face a deluge of big data to extract insights from and AI models to train, there is a need for better computing infrastructure that can handle data-intensive workloads. With the large storage and high processing capability needed for huge volumes of data, supercomputers are becoming the norm in the AI industry, in areas such as computer vision, natural language processing, autonomous driving and more.

As researchers move from building smaller AI models to large-scale models for tasks such as recognising objects, driving a car, or reading text, they require a level of efficiency and accuracy that can only be achieved by training on large volumes of relevant data. Supercomputers are a necessity for building large-scale AI models with the training optimisations these tasks demand.

The efficiency of cloud computing has further helped supercomputing thrive, giving researchers and companies easy access to large volumes of data from across the globe and thereby boosting AI research on supercomputers.

Training a deep learning model might require a cluster of 1,000 machines with at least 16,000 cores, the kind of scale that supercomputers provide. The GPUs that power these supercomputers were originally developed to accelerate 3D games; the scientific community later adopted them to accelerate numerical applications, and from there they gradually made their way to the AI community.
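To see why such clusters matter, consider data-parallel training, a common way to use them: each global batch is sharded evenly across every GPU, and gradients are averaged between workers. A minimal sketch, where the function name and all figures are hypothetical illustrations rather than values from the article:

```python
def samples_per_gpu(global_batch, machines, gpus_per_machine):
    """Evenly shard a global batch across all GPUs in a cluster."""
    workers = machines * gpus_per_machine
    # Each GPU processes an equal slice of the batch on every step;
    # in practice, gradients are then averaged across all workers.
    return global_batch // workers

# A hypothetical 1,000-machine cluster with 16 GPUs each -> 16,000 workers.
print(samples_per_gpu(64000, 1000, 16))  # 4 samples per GPU per step
```

The larger the cluster, the smaller each worker's slice of the batch, which is how supercomputer-scale hardware shortens the wall-clock time of a single training run.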

Wrapping Up 

The supercomputer industry will continue to see a deluge in the coming years, as research and development in AI and machine learning is only going to increase. It is further speculated that an increased focus on energy-efficient supercomputing and the development of smart cities will grow the market's value. Other factors, such as the rise of IoT devices generating zettabytes of data and adoption in weather forecasting, defence research and medical applications, will also increase demand for supercomputers. To accelerate research and development in these fields, it can be said that the more supercomputers there are, the better.


Copyright Analytics India Magazine Pvt Ltd
