
NVIDIA Unveils ‘Rubin’ Months Ahead of Blackwell Release, AMD Announces MI400X

Both companies now have AI accelerator roadmaps extending to 2026.

At the Computex conference in Taipei, NVIDIA CEO Jensen Huang announced the Rubin AI chip platform, slated for 2026, and the Blackwell Ultra chip, expected in 2025, marking a shift to an annual update cycle for NVIDIA’s AI accelerators.

The Rubin architecture follows the March announcement of Blackwell, which is set to ship later in 2024. “We are seeing computation inflation,” Huang said, highlighting the need for accelerated computing to keep pace with growing data processing demands. He emphasised that NVIDIA’s accelerated computing technology promises 98% cost savings and 97% lower energy consumption.

Previously, NVIDIA had a two-year update timeline for its AI chips. The shift to an annual release schedule underscores the competitive intensity in the AI chip market and NVIDIA’s efforts to maintain its leadership. The Rubin platform will feature new GPUs and a central processor named Vera, although details were scarce.

Huang announced that the forthcoming Rubin AI platform will incorporate HBM4, the next generation of high-bandwidth memory. This memory type has become a bottleneck in AI accelerator production due to high demand, with leading supplier SK Hynix Inc. largely sold out through 2025. Huang did not provide detailed specifications for the Rubin platform, which is set to succeed Blackwell.

AMD Focusing on AI Workloads

NVIDIA was not alone. During the opening keynote at Computex 2024, AMD Chair and CEO Lisa Su showcased the growing momentum of the AMD Instinct accelerator family and unveiled a multiyear, expanded AMD Instinct accelerator roadmap, introducing an annual cadence of leadership AI performance and memory capabilities.

In 2026, AMD plans to release the AMD Instinct MI400 series, based on the AMD CDNA “Next” architecture, which will provide the latest features and capabilities to enhance performance and efficiency for AI training and inference.

Previewed at Computex, the 5th Gen AMD EPYC processors, codenamed “Turin”, will utilise the “Zen 5” core, continuing the high performance and efficiency of the AMD EPYC processor family. These processors are expected to be available in the second half of 2024.

The roadmap begins with the AMD Instinct MI325X accelerator, set to be available in Q4 2024. This accelerator will feature 288GB of HBM3E memory and 6 terabytes per second of memory bandwidth, using the same Universal Baseboard design as the MI300 series. AMD claims industry-leading memory capacity and bandwidth, at 2x and 1.3x the competition respectively, along with 1.3x better compute performance.
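For context, here is a rough back-of-the-envelope check of those capacity and bandwidth ratios. It is a minimal sketch assuming the unnamed competitor is NVIDIA’s H200 with 141 GB of HBM3E and 4.8 TB/s of memory bandwidth; those competitor figures are an assumption and are not stated in the article.

```python
# Sanity check of the MI325X memory claims against an assumed competitor.
# The competitor figures (NVIDIA H200) are an assumption, not from the article.

MI325X_MEMORY_GB = 288         # HBM3E capacity, per AMD's announcement
MI325X_BANDWIDTH_TBPS = 6.0    # memory bandwidth, per AMD's announcement

H200_MEMORY_GB = 141           # assumed competitor capacity
H200_BANDWIDTH_TBPS = 4.8      # assumed competitor bandwidth

capacity_ratio = MI325X_MEMORY_GB / H200_MEMORY_GB             # ~2.0x
bandwidth_ratio = MI325X_BANDWIDTH_TBPS / H200_BANDWIDTH_TBPS  # 1.25x

print(f"Memory capacity advantage:  {capacity_ratio:.2f}x")
print(f"Memory bandwidth advantage: {bandwidth_ratio:.2f}x")
```

Under those assumptions, the figures work out to roughly 2x the capacity and 1.25x the bandwidth, broadly in line with AMD’s stated 2x and 1.3x advantages.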

Following this, the AMD Instinct MI350 series, powered by the new AMD CDNA 4 architecture, is expected in 2025. It promises up to a 35x increase in AI inference performance compared to the MI300 series with CDNA 3 architecture. 

The AMD Instinct MI350X accelerator will be the first product in this series, utilising advanced 3 nm process technology, supporting FP4 and FP6 AI data types, and including up to 288 GB of HBM3E memory.


Mohit Pandey
Mohit writes about AI in simple, explainable, and often funny words. He's especially passionate about chatting with those building AI for Bharat, with the occasional detour into AGI.