Intel reveals Gaudi 2 AI training engine to challenge NVIDIA

The new Gaudi2 and Greco processors are purpose-built for deep learning applications, implemented in 7-nanometre process technology and built on Habana's high-efficiency architecture.

Intel announced at Intel Vision 2022 that Habana Labs has launched its second-generation deep learning processors – Gaudi2 for training and Greco for inference. Intel had earlier estimated that the total addressable market (TAM) for AI silicon will exceed USD 25 billion by 2024, of which AI silicon in the data centre is expected to account for more than USD 10 billion. Intel is clearly taking its AI strategy seriously and wants to capture the opportunities in this space.

Intel said that the new processors give customers high-performance deep learning compute choices for both training workloads and inference deployments in the data centre, while lowering the AI barrier to entry for companies of all sizes. Intel also revealed details of the Arctic Sound-M server GPU, which will debut in systems in the third quarter of this year.

Habana Labs, the company behind these products, is an Israel-based developer of programmable deep learning accelerators for the data centre. Intel acquired it in 2019 for approximately USD 2 billion.

Large and complex datasets

“We have a broad range of solutions that address the broad capabilities that customers require, but with Gaudi processors, we will be able to address the biggest of deep learning training use cases. We see demand growing for these kinds of applications to deploy implementations of object detection or NLP. We have to train increasingly large and complex datasets, and this can be very time and cost-intensive. With Gaudi 2, we are able to train those models much more effectively,” said Sandra Rivera, Intel executive vice president and general manager of the Data Centre and AI Group, during the event.

Intel also announced plans to add several new IPUs (infrastructure processing units) to its range through 2026. Introduced last year, IPUs are specialised chips that offload infrastructure tasks from a server's CPU, helping free up processing capacity.

Intel vs NVIDIA war is on

The Gaudi2 and Greco processors are built on 7-nanometre process technology. Habana Labs claims that Gaudi2 delivers twice the training throughput of the NVIDIA A100-80GB GPU on both the ResNet-50 computer vision model and the BERT natural language processing model.

As per a Reuters report, Habana Labs chief business officer Eitan Medina said that CUDA is not a moat NVIDIA can stand on for long, pointing out that Intel's software platform for the chips is open standard and free to download and use from GitHub.
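To give a sense of what using that software stack looks like in practice, here is a minimal, hypothetical sketch of a single training step on a Gaudi device through Habana's SynapseAI PyTorch bridge. The habana_frameworks.torch.core module, the "hpu" device string and the mark_step() call follow Habana's public documentation and are assumptions here, not details from Intel's announcement.

```python
# Minimal sketch (not from the article): one training step on a Gaudi card
# using Habana's SynapseAI PyTorch integration. Assumes the habana_frameworks
# package is installed and a Gaudi device is present.
import torch
import torch.nn.functional as F
import habana_frameworks.torch.core as htcore  # Habana PyTorch bridge (assumed installed)

device = torch.device("hpu")  # Gaudi devices are exposed to PyTorch as "hpu"

model = torch.nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch standing in for real training data.
x = torch.randn(32, 512, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = F.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
htcore.mark_step()  # flush the accumulated lazy-mode graph to the device
```

Apart from targeting the "hpu" device and flushing the lazy-execution graph, the loop mirrors standard PyTorch, which is the portability argument Medina is making against CUDA lock-in.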

Medina also noted that, comparing the A100 GPU and Gaudi2 (both implemented in the same process node and with roughly the same die size), the latter shows clear leadership in training performance. “This deep-learning acceleration architecture is fundamentally more efficient and backed with a strong roadmap,” he added.

Gaudi2 also debuts an integrated media processing engine that handles compressed media, offloading that work from the host subsystem. Intel said that Gaudi2 triples the in-package memory capacity over the first-generation Gaudi, from 32 GB to 96 GB of HBM2E at 2.45 TB/s bandwidth, and integrates 24 on-chip 100GbE RoCE RDMA NICs for scaling up and scaling out over standard Ethernet.
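As a rough illustration of how that Ethernet-based scale-out is exercised from software, the sketch below runs PyTorch DistributedDataParallel across Gaudi cards using Habana's HCCL collective-communication backend. The habana_frameworks module paths and the "hccl" backend name are based on Habana's documentation and are assumptions here, not details from Intel's announcement.

```python
# Hedged sketch (not from the article): data-parallel training across Gaudi
# cards, where gradient all-reduce travels over the integrated RoCE ports.
# Assumes SynapseAI's PyTorch integration and a launcher (e.g. mpirun or
# torchrun) that sets the usual rank/world-size environment variables.
import torch
import torch.distributed as dist
import torch.nn as nn
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.distributed.hccl  # noqa: F401 -- registers the "hccl" backend (assumed path)


def main():
    dist.init_process_group(backend="hccl")  # collectives run over Gaudi2's on-chip NICs
    device = torch.device("hpu")

    model = nn.Linear(1024, 1024).to(device)
    ddp_model = nn.parallel.DistributedDataParallel(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # Dummy batch standing in for a sharded training dataset.
    x = torch.randn(64, 1024, device=device)
    target = torch.randn(64, 1024, device=device)

    loss = nn.functional.mse_loss(ddp_model(x), target)
    loss.backward()     # gradients are all-reduced across cards here
    optimizer.step()
    htcore.mark_step()  # flush the lazy-mode graph to the device

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```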

Late to the game? H100 is already coming from NVIDIA

NVIDIA, too, is packing its GPUs with more advancements. At NVIDIA GTC 2022, held in March, CEO Jensen Huang announced the Hopper GPU architecture and the H100 GPU as the successor to the two-year-old Ampere architecture. He called the H100 the engine of the world's AI infrastructure, which enterprises will use to accelerate their AI-driven businesses.

The H100 builds on the A100 with improvements in architectural efficiency. NVIDIA claims the H100 can be deployed across data centre types – on-premises, cloud, hybrid cloud and edge – and it will be available globally later this year from cloud service providers as well as from NVIDIA.
