How Supercomputers Help To Create The Next Generation of Fully Integrated Data Centres

“A data centre is an asset that needs to be protected.” – Michael Kagan, CTO of NVIDIA

On the first day of the NVIDIA GPU Technology Conference, Jensen Huang, founder and CEO of NVIDIA, revealed the company’s three-year DPU roadmap, which featured the new NVIDIA BlueField-2 family of DPUs and the NVIDIA DOCA software development kit for building applications on DPU-accelerated data centre infrastructure services.

Michael Kagan, CTO of NVIDIA, recently explained in a talk the next generation of fully integrated data centres and how supercomputers and edge AI help in augmenting such initiatives.

Kagan stated that the state-of-the-art technologies from both NVIDIA and Mellanox created a great opportunity to build a new class of computers: fully integrated cloud data centres designed to handle the workloads of the 21st century.



The AI Cloud & Edge Revolution

Historically, the server was the unit of computing. But Moore’s law has slowed, and CPU performance could no longer keep up with workload demands. According to Kagan, with the revolution of cloud AI and edge computing, the entire data centre, rather than a single server, has become the new unit of computing, designed to handle parallel workloads.

The modern supercomputer consists of three basic elements: the CPU, the GPU and the DPU. The combination of CPUs, GPUs and DPUs creates the next generation of supercomputing from the edge to the data centre. The CPU runs the application. The GPU accelerates the compute-heavy workloads, harnessing the power of AI and machine learning.

In simple words, the GPU does the heavy lifting of data processing, while the DPU accelerates the data-intensive infrastructure tasks. This means it feeds the GPU with data at a rate that matches the GPU’s processing power. The DPU is also essential for disaggregating resources and making a data centre composable.
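The feeding pattern Kagan describes, with one engine staging data while another computes, is essentially a producer–consumer pipeline. The toy Python sketch below is a host-side analogy only, not NVIDIA’s implementation: the feeder thread stands in for the DPU’s data path, the small staging queue for its buffers, and the squared-sum loop for GPU compute.

```python
import queue
import threading
import time

def data_feeder(q, batches):
    """Plays the DPU's role: stages data so the compute engine never starves."""
    for batch in batches:
        time.sleep(0.01)          # stand-in for network/storage I/O
        q.put(batch)              # hand the staged batch to the "GPU"
    q.put(None)                   # sentinel: no more data

def compute_engine(q, results):
    """Plays the GPU's role: consumes staged batches and does the heavy lifting."""
    while (batch := q.get()) is not None:
        results.append(sum(x * x for x in batch))  # stand-in for real compute

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
staging = queue.Queue(maxsize=2)  # small buffer: the feeder stays just ahead
results = []
feeder = threading.Thread(target=data_feeder, args=(staging, batches))
worker = threading.Thread(target=compute_engine, args=(staging, results))
feeder.start(); worker.start()
feeder.join(); worker.join()
print(results)  # [14, 77, 194]
```

The bounded queue is the point of the sketch: if the feeder falls behind, the worker idles, and if the worker falls behind, the feeder blocks, which is why the DPU’s throughput has to match the GPU’s.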

High-Performance Computing (HPC) and AI are the essential tools fuelling the advancement of science. GPU-accelerated data centres deliver breakthrough performance for compute and graphics workloads at any scale with fewer servers, resulting in faster insights and dramatically lower costs.

According to Kagan, NVIDIA GPU and networking technologies are the engines of the modern high-performance computing data centre, built to deliver breakthrough performance and scalability.

To handle the ever-growing demand for higher computational performance and the increasing complexity of scientific problems, the new data processing unit (DPU) was created.

Reinventing The Data Centre

Kagan further discussed the steps crucial for reinventing the data centre. Reinventing the data centre involves two main steps:

1| Reinvent the Compute Node – Each compute node can host multiple containers over the network. This includes the network interface (ConnectX NIC), the data processing unit (BlueField-2 DPU) and the AI-powered DPU (BlueField-2X DPU).

2| Reinvent the Network – This includes the bits mover and the fan-out data processor.

Security Challenges In Cloud Data Centre

Traditional enterprise data centres run certified software and have reasonably sufficient security measures against malicious software. Cloud data centres, however, present new challenges: unlike traditional data centres, operators have no such control over the software being run.

Therefore, traditional protection has become almost meaningless, changing the whole security paradigm. Cloud data centres need more security than traditional ones, as any piece of software may be malicious. Kagan further mentioned that host-based security has faced repeated failures over a span of 30 years.

Reinventing the Network

Talking about the network, Kagan explained that a new protocol had been developed, known as the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP). SHARP is claimed to provide flat latency and seven times higher network performance, as well as dramatically increased scalability.

In deep neural network training, the SHARP protocol performs gradient consolidation inside the network, replacing the physical parameter servers. The protocol not only accelerates the performance of the AI model but also shortens training time.
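SHARP itself runs in InfiniBand switch hardware, but the reduction it performs can be sketched in plain Python. The `tree_reduce_gradients` helper below is a hypothetical illustration of hierarchical in-network aggregation: at each level, a “switch” sums the gradients of two children, so no single parameter server ever has to receive every worker’s full update.

```python
def tree_reduce_gradients(gradients):
    """Pairwise tree reduction: partial sums are combined level by level,
    mimicking switches aggregating gradients on the way up the tree."""
    level = [list(g) for g in gradients]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            # One "switch" sums the gradients of two children element-wise.
            nxt.append([a + b for a, b in zip(level[i], level[i + 1])])
        if len(level) % 2:
            nxt.append(level[-1])   # an odd node passes through unchanged
        level = nxt
    # The averaged gradient would then be broadcast back to all workers.
    return [s / len(gradients) for s in level[0]]

# Four workers, each holding a local gradient for the same two parameters.
worker_grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
avg = tree_reduce_gradients(worker_grads)
print(avg)  # [4.0, 5.0]
```

With a parameter server, all four workers would send their gradients to one host; in the tree, each hop carries only one aggregated message, which is where the latency and scalability gains come from.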

Copyright Analytics India Magazine Pvt Ltd
