Google Launches A Tool That Can Scale and Parallelize Neural Networks

GSPMD separates programming an ML model from parallelization and is capable of scaling most deep learning network architectures

Google AI has launched GSPMD (General and Scalable Parallelization for ML Computation Graphs) to address the challenge of scaling model training. GSPMD can scale most deep learning network architectures and has already been applied to many deep learning models, including GShard-M4, BigSSL, LaMDA, ViT, and MetNet-2. It has also been integrated into multiple ML frameworks, including TensorFlow and JAX, both of which use XLA as a shared compiler.

The solution separates the task of programming an ML model from the challenge of parallelization. It lets model developers write programs as if they ran on a single device with very high memory and computation capacity. The user only needs to add a few lines of annotation code to a subset of critical tensors in the model code to indicate how those tensors should be partitioned. With GSPMD, developers can employ different parallelism algorithms for different use cases without reimplementing the model.
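In JAX, such annotations lower to the XLA GSPMD partitioner. The sketch below (a minimal illustration, assuming JAX ≥ 0.4 with its `jax.sharding` API; the function name `layer`, the mesh, and the axis label `"data"` are illustrative choices, not from the article) shows how a single annotation on a critical tensor tells the compiler how to partition it, while the model code itself reads like single-device code:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a logical 1-D mesh over whatever devices are available.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

@jax.jit
def layer(x, w):
    # One annotation on a critical tensor: shard rows of `x` over "data".
    # GSPMD propagates this sharding through the rest of the computation.
    x = jax.lax.with_sharding_constraint(
        x, NamedSharding(mesh, PartitionSpec("data", None)))
    return jnp.dot(x, w)

# The call site is unchanged single-device-style code.
y = layer(jnp.ones((8, 4)), jnp.ones((4, 2)))
```

On a single-device machine the annotation is a no-op, but the same code scales across a multi-device mesh without any rewrite, which is the separation the article describes.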

The separation of model programming and parallelism allows developers to minimize code duplication. GSPMD is designed to support a large variety of parallelism algorithms with a uniform abstraction and implementation, and it also supports nested patterns of parallelism. The solution facilitates innovation on parallelism algorithms by letting performance experts focus on the algorithms that best utilize the hardware, rather than on an implementation that involves a lot of cross-device communication.
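Nested or combined parallelism patterns can be expressed the same way, with a multi-dimensional device mesh. The following hedged sketch (again using JAX's `jax.sharding` API; the `"data"`/`"model"` axis names, the mesh shape, and the function `ffn` are illustrative assumptions) shards the batch dimension over one mesh axis and a weight dimension over the other, combining data and model parallelism in one function:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# A 2-D (data, model) device grid; here one row, all devices in the row.
devices = np.array(jax.devices()).reshape(1, -1)
mesh = Mesh(devices, axis_names=("data", "model"))

@jax.jit
def ffn(x, w):
    # Data parallelism: shard the batch dimension of `x` over "data".
    x = jax.lax.with_sharding_constraint(
        x, NamedSharding(mesh, PartitionSpec("data", None)))
    # Model parallelism: shard the output dimension of `w` over "model".
    w = jax.lax.with_sharding_constraint(
        w, NamedSharding(mesh, PartitionSpec(None, "model")))
    return jnp.dot(x, w)

y = ffn(jnp.ones((8, 16)), jnp.ones((16, 32)))
```

The model code stays a plain matrix multiply; only the annotations change when switching or mixing parallelism strategies.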

In the recent MLPerf set of performance benchmarks, a BERT-like encoder-only model with ~500 billion parameters, parallelized with GSPMD over 2048 TPU-v4 chips, yielded highly competitive results, utilizing up to 63% of the peak FLOPS that the TPU-v4 chips offer. As a shared, robust mechanism for different parallelism modes, GSPMD allows users to conveniently switch between modes in different parts of a model. This is especially valuable for models whose components have distinct performance characteristics, such as multimodal models that handle both images and audio.

“As this often requires building larger and even more complex models, we are pleased to share the GSPMD paper and the corresponding open-source library to the broader research community, and we hope it is useful for efficient training of large-scale deep neural networks,” wrote Yuanzhong Xu and Yanping Huang, software engineers at Google Research, Brain Team, in the blog post.

Meeta Ramnani
Meeta’s interest lies in finding real, practical applications of technology. At AIM, she writes stories that question new inventions and the need to develop them. She believes that technology has changed, and will continue to change, the world very fast, and that it is no longer ‘cool’ to be ‘old-school’. People who don’t keep up with technology will surely be left behind.
