NVIDIA has once again set record-breaking performance in MLPerf. This is the third consecutive time the US computer graphics giant has set performance and energy-efficiency records on inference tests from MLCommons, the industry benchmarking consortium whose MLPerf effort launched in May 2018.
According to benchmarks released earlier this month, NVIDIA delivered the best artificial intelligence (AI) inference results on both x86- and Arm-based CPUs. This is also the first time the data-centre category tests have run on an Arm-based system, giving users more choice in how they deploy AI and machine learning models.
Previously, NVIDIA had set 16 training performance records among commercially available solutions, eight on a per-chip basis and eight at scale. It had also submitted results for all eight training benchmarks, improving by up to 2.1x on a chip-to-chip basis and up to 3.5x at scale.
In the latest round (MLPerf v1.1), NVIDIA topped all seven inference performance tests with systems from NVIDIA and nine of its ecosystem partners: Alibaba, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro.
Here are some of NVIDIA's highlights from the latest MLPerf round:
- Its end-to-end AI platform gained up to 50 per cent more performance within a year purely from software improvements
- The A100 topped the MLPerf data-centre benchmarks, running up to 104x faster than CPUs
- NVIDIA led the MLPerf edge benchmarks, delivering the best results on edge servers
- NVIDIA simplified inference serving with its Triton Inference Server
- Running tests simultaneously on separate Multi-Instance GPU (MIG) partitions, the A100 delivered 95 per cent of the performance of a single MIG instance running alone (see the sketch after this list)
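For context, MIG partitions a single A100 into up to seven isolated GPU instances that can each run a different workload. Below is a minimal sketch, assuming the NVML Python bindings (the nvidia-ml-py package) and a MIG-capable GPU, that checks whether MIG is enabled and lists the active instances; it is purely illustrative and not part of NVIDIA's submission code.

```python
# Minimal sketch: inspect MIG state on GPU 0 via the NVML Python bindings.
# Assumes `pip install nvidia-ml-py` and a MIG-capable GPU (e.g. A100).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current, pending = pynvml.nvmlDeviceGetMigMode(handle)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    max_count = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
    for i in range(max_count):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
        except pynvml.NVMLError:
            continue  # this MIG slot is not populated
        print(f"MIG instance {i}: {pynvml.nvmlDeviceGetName(mig)}")

pynvml.nvmlShutdown()
```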
For those unaware, MLPerf’s inference benchmarks are based on today’s most popular AI/ML workloads and scenarios, covering computer vision, medical imaging, speech recognition, natural language processing, recommendation systems and more.
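Every submission drives the system under test through MLCommons’ LoadGen harness, which issues queries according to the chosen scenario (Offline, Server, SingleStream and so on). The sketch below, assuming the mlperf_loadgen Python bindings built from the MLCommons inference repository, shows the rough shape of such a harness with a toy identity “model”; real submissions wire in an actual inference backend.

```python
# Sketch of an MLPerf LoadGen harness. Assumes the mlperf_loadgen Python
# bindings built from https://github.com/mlcommons/inference (loadgen/).
import array
import numpy as np
import mlperf_loadgen as lg

SAMPLES = [np.float32(i) for i in range(1024)]  # toy "dataset"

def issue_queries(query_samples):
    # Called by LoadGen; answer each query and report completion.
    responses, buffers = [], []
    for qs in query_samples:
        result = np.array([SAMPLES[qs.index]])  # toy "inference": identity
        buf = array.array("f", result.tolist())
        buffers.append(buf)  # keep alive until QuerySamplesComplete
        addr, _ = buf.buffer_info()
        responses.append(
            lg.QuerySampleResponse(qs.id, addr, buf.itemsize * len(buf)))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly
settings.min_duration_ms = 1000  # keep the toy run short

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(len(SAMPLES), len(SAMPLES),
                      lambda idxs: None, lambda idxs: None)  # already in RAM
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```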
The table below shows the benchmarks NVIDIA submitted in the latest round of MLPerf:

| Use case | Benchmark model |
| --- | --- |
| Image classification | ResNet-50 v1.5 |
| Object detection (large) | SSD-ResNet-34 |
| Object detection (small) | SSD-MobileNet-v1 |
| Medical imaging | 3D U-Net |
| Speech recognition | RNN-T |
| Natural language processing | BERT |
| Recommendation | DLRM |
Why performance matters
As AI/ML use cases expand from the data centre to the edge and beyond, machine learning models and datasets continue to evolve. That is where performance that is both dependable and flexible to deploy becomes crucial.
MLPerf gives users the ability to make informed buying decisions based on these performance results. It is backed by dozens of industry leaders, including Arm, Alibaba, Baidu, Google, Intel and NVIDIA.
Arm debuts in latest MLPerf
Arm architecture is making headway into data centres worldwide, thanks to its energy efficiency, performance increases, and expanding software ecosystem. The latest benchmarks show that Arm-based servers using Ampere Altra CPUs deliver near-equal performance to similarly configured x86-based servers for AI inference tasks.
In one of the tests, the Arm-based server even outperformed a similar x86 system. “We have a long tradition of supporting every CPU architecture, so we are proud to see Arm prove its AI prowess in a peer-reviewed industry benchmark,” said the NVIDIA team.
Arm’s senior director of HPC and tools, David Lecomber, said: “The latest inference results demonstrate the readiness of systems powered by Arm-based CPUs and NVIDIA GPUs for tackling a broad array of AI workloads in the data centre.”
NVIDIA ecosystem
Today, NVIDIA is backed by a large and growing ecosystem of technology partners. In the latest benchmark, for example, seven OEMs submitted a total of 22 GPU-accelerated platforms. Most of these server models are NVIDIA-certified, validated for running a diverse range of accelerated workloads. In addition, the NVIDIA team said many of them support NVIDIA AI Enterprise, a software suite officially released last month.
Partners that participated in this round of MLPerf include Dell Technologies, Fujitsu, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro, along with cloud-service provider Alibaba.
NVIDIA goes all-in
NVIDIA takes great pride in its software stack. For inference, the stack includes pre-trained AI models for a wide variety of use cases. Using transfer learning, the NVIDIA TAO Toolkit then customises those models for specific applications.
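TAO itself is driven from the command line with pre-trained models from NGC, but the underlying idea, freezing a pre-trained backbone and retraining a small task-specific head, is easy to illustrate. Below is a minimal PyTorch sketch of that transfer-learning pattern (not TAO’s actual API); the 10-class head and random data are made-up examples.

```python
# Transfer-learning sketch in PyTorch (illustrative only; the TAO Toolkit
# wraps this pattern behind its own CLI and pre-trained models).
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pre-trained backbone.
model = models.resnet50(pretrained=True)

# Freeze the backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a hypothetical 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimise only the (unfrozen) head parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data, just to show the loop shape.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```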
NVIDIA said its TensorRT software optimises AI models so that they make the best use of memory and run faster. “We routinely use it for MLPerf tests, and it is available for both x86 and Arm-based systems,” said the team.
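As a rough sketch of what that optimisation step looks like, the snippet below uses TensorRT’s Python API to parse an ONNX model and build an FP16-optimised engine. The file name model.onnx is a placeholder, and exact API details vary between TensorRT releases.

```python
# Sketch: building a TensorRT engine from an ONNX model with FP16 enabled.
# Assumes a TensorRT 8-era Python API; "model.onnx" is a placeholder file.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow half-precision kernels

# Serialise the optimised engine to disk for later inference.
engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise RuntimeError("engine build failed")
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```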
The team also employed its NVIDIA Triton inference server software and Multi-instance GPU (MIG) capability in the latest benchmarks.
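Triton serves models over standard HTTP and gRPC endpoints, so sending it an inference request is straightforward. A minimal sketch with Triton’s Python HTTP client follows; the model name resnet50 and the input/output tensor names are placeholders that depend on the deployed model’s configuration.

```python
# Sketch: querying a running Triton Inference Server over HTTP.
# Assumes `pip install tritonclient[http]` and a server on localhost:8000;
# the model name and tensor names below are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single-image FP32 input batch (1 x 3 x 224 x 224).
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)

outputs = [httpclient.InferRequestedOutput("output")]

# Run inference and fetch the result tensor as a NumPy array.
result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
print(result.as_numpy("output").shape)
```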
Thanks to continuous improvements in this software stack, NVIDIA achieved gains of up to 20 per cent in performance and 15 per cent in energy efficiency in the latest MLPerf, compared with the previous round of inference benchmarks just four months earlier.
The software used in the latest tests is available from the MLPerf repository. In addition, NVIDIA said it continually folds this code into the deep learning frameworks and containers available on NGC, its software hub for GPU applications.