In an attempt to further unlock the immense potential of artificial intelligence for supercomputing applications, NVIDIA has launched an 80GB version of its A100 GPU, promising unprecedented speed and performance. A key element of the NVIDIA HGX AI supercomputing platform, the new chip is built on NVIDIA’s Ampere architecture and carries twice the memory of its predecessor, launched in May this year.
The chip was unveiled at this year’s annual SC20 supercomputing conference, with the aim of helping businesses make quick decisions based on real-time data analysis. Bryan Catanzaro, VP of applied deep learning research at NVIDIA, explained in the company’s official release that high-performance computing faces memory and bandwidth constraints on the way to accurate results; the A100 80GB therefore offers 2TB/s of memory bandwidth, built on Samsung’s HBM2e, which will enable researchers to advance AI applications faster.
In fact, the new release has created a massive buzz across the industry, with system providers such as Atos, Dell, Fujitsu, HPE, and Lenovo, among many others, planning to offer systems with the 80GB version in the first half of next year, according to the official release. However, up against the likes of AMD’s new Instinct MI100 accelerator and Graphcore’s second-generation AI chips, can this release extend NVIDIA’s lead on the MLPerf benchmark for AI performance? Let’s delve deeper.
NVIDIA Doubling Down On Capacity & Capability
While NVIDIA’s A100 is already a preferred GPU for high-performance computing, the A100 80GB, by doubling the memory of the 40GB version, aims to reach new heights of supercomputing performance. Not only is the latest version equipped with third-generation Tensor Cores, but it also comes with high-bandwidth memory that increases the memory available to each isolated instance: the chip can be partitioned into seven multi-instance GPUs (MIG) with 10GB each. Alongside, third-generation NVLink and NVSwitch capabilities provide greater GPU-to-GPU bandwidth than earlier NVIDIA chips.
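The MIG arithmetic above can be sketched in a few lines of Python. This is a back-of-envelope illustration, not NVIDIA’s API; the seven-instance hardware cap and the 10GB profile size come from the article, and the 5GB figure for the 40GB card is a known MIG profile on that part.

```python
# Back-of-envelope sketch of A100 MIG partitioning (not NVIDIA's API).
# MIG on the A100 supports at most 7 isolated GPU instances per card.
MIG_MAX_INSTANCES = 7

def mig_instances(total_mem_gb: float, profile_mem_gb: float) -> int:
    """How many MIG instances of a given memory profile fit on one card."""
    return min(MIG_MAX_INSTANCES, int(total_mem_gb // profile_mem_gb))

# The 80GB A100 hosts seven 10GB instances (the hardware cap binds
# before memory does); the 40GB card hosts seven 5GB instances.
print(mig_instances(80, 10))  # 7
print(mig_instances(40, 5))   # 7
```

Note that the per-instance cap, not raw capacity, is the binding limit here: 80GB divided by 10GB would allow eight slices, but the hardware exposes at most seven.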
Further, NVIDIA claims the new version can host massive pre-trained models within a single HGX-powered server, delivering 1.25x higher AI inference performance in production. The chip is also considered well suited to larger datasets, with 2x big-data analytics performance. The 80GB version likewise delivers a 2x acceleration for scientific applications such as quantum chemistry and weather forecasting, and a 3x improvement in AI training on the DLRM recommender model. The chip combines the power of NVIDIA A100 Tensor Core GPUs, NVLink, and NVSwitch with NVIDIA’s entire software stack to maximise application performance.
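As a rough illustration of why 80GB of on-card memory matters for large pre-trained models, the sketch below estimates a model’s weight footprint. This is an assumption-laden back-of-envelope calculation, not a measurement: the 30-billion-parameter model is hypothetical, and the estimate ignores activations, optimiser state, and framework overhead.

```python
def param_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights, in GB (1 GB = 1e9 bytes).
    Default of 2 bytes/parameter assumes FP16 storage."""
    return n_params * bytes_per_param / 1e9

def fits_on_gpu(n_params: float, gpu_mem_gb: float,
                bytes_per_param: int = 2) -> bool:
    """True if the weights alone fit in the given GPU memory."""
    return param_memory_gb(n_params, bytes_per_param) <= gpu_mem_gb

# A hypothetical 30B-parameter model in FP16 needs ~60 GB for weights
# alone: it fits on the 80GB A100 but not on the 40GB version.
print(fits_on_gpu(30e9, 80))  # True
print(fits_on_gpu(30e9, 40))  # False
```

In practice the real headroom is smaller once activations and optimiser state are counted, which is precisely why doubled on-card memory widens the range of models a single server can hold.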
With such capabilities, NVIDIA has outpaced not only its previous versions but also AMD’s accelerator chip, the Instinct MI100, on MLPerf benchmarking. Although AMD’s newly launched chip achieves roughly 18% better peak floating-point performance than the A100 40GB, the 80GB version posts a much better overall score, research analyst Karl Freund told the media. In fact, NVIDIA offers a complete AI supercomputing package, with massive improvements in data analytics, deep learning, as well as modelling and simulation.
While AMD’s MI100 is a big step forward for high-performance computing, NVIDIA’s 80GB A100, with its 3x speedup, has undoubtedly upped the ante in the HPC market. However, a few experts believe that from a pricing perspective, AMD could have the upper hand, as the new version of the A100 will probably carry a premium price.
NVIDIA’s Additional Releases
The newly released GPU, the 80GB A100, has already been deployed in NVIDIA’s new DGX Station A100, a one-of-its-kind workgroup server that brings AI computing to the desktop. Also known as a “datacentre in a box”, the new workstation claims to deliver 2.5 petaflops of AI performance and packs up to 320GB of GPU memory. Data science and AI research teams across domains like education, BFSI, government, healthcare, and retail can get DGX Station A100 from “this quarter.”
Along with the AI workstation built around its new chip, NVIDIA has also launched the latest generation of InfiniBand: Mellanox 400G InfiniBand, aimed at exascale AI computing applications. According to the official release, Mellanox InfiniBand has come a long way since its first generation, now offering data throughput of 400 gigabits per second and in-network computing engines for added acceleration. This will enable faster networking for supercomputers as well as self-driving cars.
NVIDIA’s advanced GPU capabilities for high-performance computing have indeed set a benchmark for its competitors, and with this newly launched AI chip, the company has reaffirmed its commitment to artificial intelligence and innovation. The announcement came just ahead of this quarter’s earnings report, and the launch may in turn bolster the company’s revenue growth.
Sejuti currently works as Senior Technology Journalist at Analytics India Magazine (AIM). Reach out at firstname.lastname@example.org