AWS launches all-new GPU-based instances for ML training and HPC

The all-new P4de instances offer 2x the GPU memory of the current-generation P4d instances.

AWS today announced the preview of new GPU-based instances, Amazon EC2 P4de. The instances provide high performance for machine learning (ML) training and high-performance computing (HPC) applications, including object detection, NLP, semantic segmentation, recommender systems, and seismic analysis. 

Inside P4de instances 

Powered by eight NVIDIA A100 GPUs with 80 GB of high-performance HBM2e memory each, the P4de instances offer 2x the GPU memory of the current P4d instances. The new P4de instances provide 640 GB of total GPU memory, delivering up to 60 per cent better ML training performance and 20 per cent lower cost to train compared with the current P4d instances. 
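
For context, the 2x figure follows directly from the per-GPU memory. A quick sketch of the arithmetic in Python, assuming the P4d baseline uses the 40 GB A100 variant and eight GPUs per instance in both cases:

gpus_per_instance = 8
p4de_mem_per_gpu_gb = 80   # A100 80 GB (P4de)
p4d_mem_per_gpu_gb = 40    # A100 40 GB (P4d baseline, assumed)

p4de_total_gb = gpus_per_instance * p4de_mem_per_gpu_gb   # 640 GB
p4d_total_gb = gpus_per_instance * p4d_mem_per_gpu_gb     # 320 GB
print(p4de_total_gb, p4d_total_gb, p4de_total_gb / p4d_total_gb)  # 640 320 2.0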

The benefits 

The company said the improved performance would allow customers to reduce model training times and accelerate time to market. The increased GPU memory will also benefit workloads that train on large datasets of high-resolution data. 

Where is it available? 

Currently, P4de instances are available in the AWS US East and US West regions. They come in a single size, p4de.24xlarge, providing 96 vCPUs, eight NVIDIA A100 80 GB GPUs, 1.1 TB of system memory, 8 TB of local NVMe-based SSD storage, 19 Gbps of EBS bandwidth, and 400 Gbps of networking bandwidth with EFA and GPUDirect RDMA. 
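
For readers who want to try the new size once they have preview access, here is a minimal sketch of requesting a p4de.24xlarge instance with the boto3 SDK; the AMI ID, key pair, and subnet below are placeholders, not real values:

import boto3

# P4de is currently offered in the US East and US West regions.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: substitute a Deep Learning AMI ID
    InstanceType="p4de.24xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
print(response["Instances"][0]["InstanceId"])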

P4de instances are deployed in EC2 UltraClusters, which provide petabit-scale non-blocking networking infrastructure and high-throughput, low-latency storage through FSx for scale-out HPC and ML training applications. AWS says each EC2 UltraCluster is one of the most powerful supercomputers in the world, combining high-performance computing, networking, and storage. 


What to expect? 

Last year, at AWS re:Invent, AWS announced three new Amazon EC2 instances powered by AWS-designed chips. These chips were said to help customers significantly improve the performance, cost, and energy efficiency of their workloads running on Amazon EC2 instances. 

With the new P4de instances, AWS says it will continue to expand what it calls the industry's widest portfolio of accelerated compute instances, featuring platforms powered by its own silicon and by accelerators from its partners, offering the highest-performing NVIDIA GPUs for customers to build, train, and deploy machine learning models at scale.

Amit Raja Naik
Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.
