Key announcements from NVIDIA at ISC 2022

Venado will be the first system in the US to feature a mix of Grace CPU Superchip nodes and Grace Hopper Superchip nodes.

ISC High Performance is a conference focused on the most critical developments and trends in HPC, machine learning and high-performance data analytics, and on how to apply these technologies successfully in science, engineering and commerce.

ISC 2022 is the conference's first in-person edition since the pandemic began. It is being held from May 29 to June 2 in Hamburg, Germany. This year's event is sponsored by big tech companies such as NVIDIA, AWS, Oracle, HPE and Dell, among others.

At the conference, NVIDIA made several announcements around high-performance computing. Here's a round-up.

NVIDIA Grace Superchips

The world's leading computer makers, including Atos, Dell Technologies, GIGABYTE, HPE, Inspur, Lenovo and Supermicro, are adopting the new NVIDIA Grace Superchips to create the next generation of servers turbocharging AI and HPC workloads for the exascale era.

The NVIDIA Grace CPU Superchip and the NVIDIA Grace Hopper Superchip provide manufacturers with the blueprints needed to build systems that offer the highest performance and twice the memory bandwidth and energy efficiency of today's leading data center CPUs.

Venado, a heterogeneous system at Los Alamos National Laboratory, will be the first system in the US to feature a mix of Grace CPU Superchip nodes and Grace Hopper Superchip nodes for a wide set of applications.

Alps, the Swiss National Supercomputing Centre's new system, will use the Grace CPU Superchip to enable breakthrough research in a wide range of fields.

“Across climate science, energy research, space exploration, digital biology, quantum computing and more, the NVIDIA Grace CPU Superchip and Grace Hopper Superchip form the foundation of the world’s most advanced platform for HPC and AI,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA.

The NVIDIA Grace CPU Superchip features two Arm-based CPUs connected coherently through the high-bandwidth, low-latency, low-power NVIDIA NVLink-C2C interconnect. The Grace Hopper Superchip, meanwhile, pairs an NVIDIA Hopper GPU with an NVIDIA Grace CPU in an integrated module, also linked with NVLink-C2C, to address HPC and giant-scale AI applications.

The NVIDIA Grace-powered systems will run the portfolio of NVIDIA AI and NVIDIA HPC software for full-stack, integrated computing.

NVIDIA cuQuantum software development kit

Following AWS's announcement that cuQuantum is available in its Braket service, Menten AI will use cuQuantum's tensor network library to simulate protein interactions and optimise new drug molecules. Menten AI is developing a suite of quantum computing algorithms, including quantum machine learning, to tackle computationally demanding problems in therapeutic design.

“While quantum computing hardware capable of running these algorithms is still being developed, classical computing tools like NVIDIA cuQuantum are crucial for advancing quantum algorithm development,” said Alexey Galda, a principal scientist at Menten AI.
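For a sense of what cuQuantum's tensor network library looks like in practice, here is a minimal sketch using the cuQuantum Python API (the cuquantum-python package, backed by the cuTensorNet library). The toy two-qubit circuit below is purely illustrative and assumes a CUDA-capable GPU; it is not Menten AI's actual protein-interaction workload.

    # Minimal sketch: contracting a toy two-qubit tensor network with
    # cuQuantum's Python API. Illustrative only; assumes cuquantum-python
    # is installed and a CUDA-capable GPU is available.
    import numpy as np
    from cuquantum import contract  # high-level interface to cuTensorNet

    ket0 = np.array([1.0, 0.0], dtype=np.complex128)                 # |0> state
    hadamard = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
    # CNOT as a rank-4 tensor with indices (out0, out1, in0, in1)
    cnot = np.eye(4, dtype=np.complex128)[:, [0, 1, 3, 2]].reshape(2, 2, 2, 2)

    # Apply H to qubit 0, then CNOT across both qubits, starting from |00>.
    # cuTensorNet chooses the contraction path and executes it on the GPU.
    state = contract("ia,a,b,jkib->jk", hadamard, ket0, ket0, cnot)
    print(state)  # amplitudes of the Bell state (|00> + |11>)/sqrt(2)

The same contract call scales to far larger networks, which is where the GPU-accelerated path-finding and contraction become the point of the exercise.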

Hybrid systems

As quantum systems evolve, the next big leap is a move to hybrid systems: quantum and classical computers that work together. For this, the need of the hour is a fast, low-latency connection between GPUs and QPUs that lets hybrid systems use GPUs for the classical jobs where they excel, such as circuit optimization, calibration and error correction, along with a unified programming model whose tools are efficient and easy to use.

NVIDIA BlueField DPUs

Across Europe and the US, HPC developers are supercharging supercomputers with the power of the Arm cores and accelerators inside NVIDIA BlueField-2 DPUs.

LANL researchers foresee significant performance gains using data processing units (DPUs) running on NVIDIA Quantum InfiniBand networks. They will pioneer techniques in computational storage, pattern matching and more using BlueField and its NVIDIA DOCA software framework.

Multiple research teams in Europe are accelerating MPI and other HPC workloads with BlueField DPUs. Durham University, in northern England, is developing software for load balancing MPI jobs using BlueField DPUs. 
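To illustrate the kind of MPI pattern that benefits from DPU offload, here is a minimal mpi4py sketch; it is a hypothetical example for illustration, not Durham University's actual load-balancing software. With a DPU-aware MPI library, progress of a non-blocking collective like this can be driven by the BlueField's Arm cores while the host CPUs keep computing.

    # Minimal sketch (mpi4py): overlap computation with a non-blocking
    # all-reduce. With a DPU-aware MPI library, communication progress can
    # be handled on BlueField Arm cores while the host does useful work.
    # Hypothetical example for illustration only.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    local = np.random.rand(1_000_000)      # this rank's partial data
    result = np.empty_like(local)

    req = comm.Iallreduce(local, result, op=MPI.SUM)  # start the collective
    partial = np.square(local).sum()                  # useful work meanwhile
    req.Wait()                                        # collective completes
    if comm.rank == 0:
        print("overlapped compute:", partial, "reduced sum:", result[:3])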

BlueField DPUs inside Dell PowerEdge servers in the Cambridge Service for Data Driven Discovery offload security policies, storage frameworks and other jobs from host CPUs, maximising the system’s performance.

The Texas Advanced Computing Center (TACC) is the latest to adopt BlueField-2 in Dell PowerEdge servers. It will use the DPUs on an InfiniBand network to make its Lonestar6 system a development platform for cloud-native supercomputing.
