
After Flex, Intel’s on Max Mode to Boost Cloud Infrastructure


Intel made a late entry into the GPU scene with the Intel Arc, but has since accelerated innovation. In August, the company released its first data centre GPU, the Data Center GPU Flex Series, and last week it followed up with the Data Center GPU Max Series.

The Data Center GPU Max Series is the industry’s highest-density processor, packing over 100 billion transistors across 47 active tiles and up to 128 Xe-HPC cores. The product maximises memory bandwidth (up to 128GB of HBM2e), cache capacity (up to 408MB of L2 “Rambo” cache), and on-die memory (up to 64MB of L1 cache).

Intel claims that the 408MB L2 cache will be able to deliver up to 2x the performance of previous versions.

The GPU addresses a series of obstacles: the effort of porting and refactoring code, the economic and technical burdens of proprietary GPU environments that prevent portability between GPU vendors, and the mismatches between CPU and GPU implementations, such as CPUs having too little memory bandwidth and GPUs too little memory capacity.

Intel’s GPU Max is a product that maximises bandwidth, compute power, developer productivity, and impact. Beyond that, the entire Max Series, both its CPU and GPU, is powered by oneAPI, an open programming model that allows developers to target a range of accelerator architectures.

The Data Center Max Series products are slated to launch in 2023.

Data Center GPU Flex Series 

Released earlier this year, the Data Center GPU Flex Series was built to handle media streaming and cloud gaming, along with AI visual inference and virtual desktop infrastructure workloads. The device can be configured at different power levels for requirements ranging from basic AI needs to complex AI workloads.

Two weeks ago, Intel announced that its Data Center GPU Flex Series had been added to TensorFlow’s family of PluggableDevices via the Intel Extension for TensorFlow. The PluggableDevice architecture offers a plugin mechanism for registering new device types with TensorFlow without making changes to the core TensorFlow code.

The new implementation supports Intel Data Center GPU Flex Series hardware as well as the company’s Intel Arc graphics. Built on oneAPI, it is said to be compatible with Linux and the Windows Subsystem for Linux.
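To illustrate, here is a minimal sketch of how the plugin surfaces an Intel GPU inside ordinary TensorFlow code, assuming the intel-extension-for-tensorflow package is installed on a machine with a supported Intel GPU and driver stack. With the plugin present, the hardware is registered under the “XPU” device type, so standard TensorFlow ops can be placed on it without modifying TensorFlow itself.

```python
# Minimal sketch (assumes the intel-extension-for-tensorflow XPU package is
# installed and a supported Intel GPU with its driver stack is available).
import tensorflow as tf

# With the PluggableDevice plugin installed, Intel GPUs are registered
# under the "XPU" device type; no changes to TensorFlow itself are needed.
xpus = tf.config.list_physical_devices("XPU")
print("Intel XPU devices visible to TensorFlow:", xpus)

if xpus:
    # Ordinary TensorFlow ops can be pinned to the plugged-in device.
    with tf.device("/XPU:0"):
        a = tf.random.uniform((1024, 1024))
        b = tf.random.uniform((1024, 1024))
        c = tf.matmul(a, b)
    print("Matmul ran on:", c.device)
else:
    print("No XPU found; the same code falls back to CPU execution.")
```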



Ayush Jain

Ayush is interested in knowing how technology shapes and defines our culture, and our understanding of the world. He believes in exploring reality at the intersections of technology and art, science, and politics.
