Automating model parallelism with just one line of code

Researchers from Google, Amazon Web Services, UC Berkeley, Shanghai Jiao Tong University, Duke University and Carnegie Mellon University have published a paper titled “Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning” at OSDI 2022. The paper introduces a new method for automating the complex process of parallelising a model with only one line of code. So how does Alpa work?

Model parallelism

Data parallelism is a technique in which the model weights are replicated on every accelerator while only the training data is partitioned and distributed. The dataset is split into N parts, where N is the number of GPUs, and each part is assigned to one device. Gradients are then computed for each copy of the model, exchanged across all devices, and averaged before the weights are updated.
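
For readers less familiar with the setup, a minimal JAX sketch of data parallelism looks roughly like this. The toy linear model, learning rate, and shapes are illustrative; jax.pmap replicates the training step across devices and jax.lax.pmean averages the gradients.

```python
from functools import partial

import jax
import jax.numpy as jnp

# Toy linear model: the parameters are replicated on every device,
# and only the data batch is sharded across them.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average the gradients computed on each device's shard of the batch.
    grads = jax.lax.pmean(grads, axis_name="batch")
    # Apply the same averaged update to every replica of the weights.
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n_dev = jax.local_device_count()
params = jax.device_put_replicated(
    {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}, jax.local_devices()
)
x = jnp.ones((n_dev, 8, 4))  # N shards of the batch, one per GPU
y = jnp.ones((n_dev, 8, 1))
params = train_step(params, x, y)
```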

In model parallelism, a single model is partitioned across devices: the model is split into N parts, where N is again the number of GPUs, and each part is placed on its own GPU. A batch then flows through the parts sequentially, with each device computing its portion before passing activations to the next.
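
A rough hand-written sketch of this idea is shown below. The two-layer network, its shapes, and the two-device split are illustrative, and the snippet assumes at least two local accelerators are visible.

```python
import jax
import jax.numpy as jnp

devices = jax.local_devices()  # assumes at least two accelerators

# Hypothetical two-layer network split across two devices: each device
# holds only its own layer's weights instead of a full copy of the model.
w1 = jax.device_put(jnp.ones((1024, 4096)), devices[0])
w2 = jax.device_put(jnp.ones((4096, 1024)), devices[1])

def forward(x):
    h = jnp.tanh(x @ w1)               # stage 1 runs on device 0
    h = jax.device_put(h, devices[1])  # activations move to device 1
    return h @ w2                      # stage 2 runs on device 1

out = forward(jnp.ones((8, 1024)))
```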

While model parallelism allows training of large models, it is more complex. Parallelism plans must be designed specifically for the target neural network and compute cluster, and the technique often requires significant effort from system experts to identify an optimal plan for a given model. This is ‘too onerous’ for researchers whose primary focus is running a model, with performance a secondary priority. This presented the researchers with an opportunity to automate model parallelism so that it can easily be applied to large models.

Alpa 

The researchers have proposed a method that automates model parallelization with a single line of code. Alpa can “transform any JAX neural network into a distributed version with an optimal parallelization strategy that can be executed on a user-provided device cluster.”
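
In practice, the advertised one-line change looks roughly like the sketch below. The training step itself is a made-up toy, not code from the paper; only the alpa.parallelize call reflects the library's documented interface.

```python
import alpa
import jax
import jax.numpy as jnp

# An ordinary single-device JAX training step (illustrative toy model).
def train_step(params, batch):
    def loss_fn(p):
        pred = batch["x"] @ p["w"] + p["b"]
        return jnp.mean((pred - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

# The "one line": Alpa compiles the step into a distributed version
# with an automatically chosen parallelization strategy.
parallel_train_step = alpa.parallelize(train_step)
```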

It all starts with grouping existing ML parallelization strategies into two categories: inter-operator parallelism and intra-operator parallelism. Inter-operator parallelism assigns distinct operators to different devices and accelerates execution with a pipeline schedule. Intra-operator parallelism, which covers data parallelism, operator parallelism, and expert parallelism, splits individual operators and executes them on multiple devices, using collective communication to synchronize the results across devices.
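
As a rough illustration of the intra-operator idea using plain JAX collectives (the matmul, shapes, and sharding here are illustrative; Alpa chooses such partitionings automatically), a single matmul can be split by sharding its weight columns across devices and gathering the partial outputs:

```python
from functools import partial

import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()

# Intra-operator parallelism on a single matmul y = x @ W: the columns
# of W are split across devices, each device computes a slice of the
# output, and an all-gather stitches the slices back together.
@partial(jax.pmap, axis_name="model")
def sharded_matmul(x, w_shard):
    y_shard = x @ w_shard  # each device's partial result
    return jax.lax.all_gather(y_shard, "model", axis=1, tiled=True)

x = jnp.broadcast_to(jnp.ones((8, 16)), (n_dev, 8, 16))  # input replicated per device
w_shards = jnp.ones((n_dev, 16, 4))                      # one column shard of W per device
y = sharded_matmul(x, w_shards)  # every device ends up with the full (8, 4 * n_dev) output
```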

“By this categorization, the two parallelisms take place at different granularities of the DL computation and have distinct communication requirements, which happen to match the structure of today’s typical compute clusters,” the paper explained. The team used these properties to design hierarchical algorithms and compilation passes to auto-generate execution plans.

“The difference between these two approaches maps naturally to the heterogeneity of a typical compute cluster,” the team said. Inter-operator parallelism only transmits activations between operators on different accelerators, which keeps its communication bandwidth requirements low, but the pipeline data dependency leaves devices underutilized. Intra-operator parallelism avoids that dependency but requires much heavier communication across devices. In a GPU cluster, for instance, GPUs within a node share high communication bandwidth that can accommodate intra-operator parallelism, while GPUs on different nodes are connected with much lower bandwidth, making inter-operator parallelism the preferred choice across nodes.

The team leveraged this heterogeneous mapping to design Alpa as a compiler that runs a series of passes over a user-provided computational graph and device cluster. First, the inter-operator pass slices the computational graph into subgraphs and the device cluster into submeshes, and identifies the most efficient way to assign each subgraph to a submesh. The intra-operator pass then finds the best intra-operator parallelism plan for each pipeline stage produced by the inter-operator pass. Finally, the runtime orchestration pass generates a static plan that orders computation and communication and executes the distributed computational graph on the actual device cluster.
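
The hierarchy of these passes can be pictured with a highly simplified, purely illustrative sketch; none of the function names or stub logic below come from Alpa's codebase, they only show how the three passes hand results to one another.

```python
# Purely illustrative stubs; these names are not Alpa's internal API.
def inter_op_pass(graph, cluster):
    # Slice the graph into pipeline stages and the cluster into submeshes
    # (stub: cut both in half), pairing each stage with a submesh.
    g_mid, c_mid = len(graph) // 2, len(cluster) // 2
    return [graph[:g_mid], graph[g_mid:]], [cluster[:c_mid], cluster[c_mid:]]

def intra_op_pass(stage, submesh):
    # Choose how each operator in the stage is sharded over its submesh
    # (stub: mark everything as replicated).
    return [(op, "replicated", tuple(submesh)) for op in stage]

def runtime_orchestration_pass(sharded_stages, submeshes):
    # Emit a static schedule ordering computation and cross-mesh communication.
    return [("run", stage, mesh) for stage, mesh in zip(sharded_stages, submeshes)]

def compile_distributed_plan(graph, cluster):
    stages, submeshes = inter_op_pass(graph, cluster)
    sharded = [intra_op_pass(s, m) for s, m in zip(stages, submeshes)]
    return runtime_orchestration_pass(sharded, submeshes)

plan = compile_distributed_plan(
    ["matmul", "relu", "matmul", "softmax"], ["gpu0", "gpu1", "gpu2", "gpu3"]
)
```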

Alpa was tested on AWS p3.16xlarge instances, each with eight 16 GB V100 GPUs, for 64 GPUs in total. It was evaluated with weak scaling, growing the model size along with the number of GPUs, across three models. With GPT, for instance, Alpa produced a parallelization strategy similar to the one hand-tuned in Megatron-LM, the best existing framework, and matched its performance.

Avi Gopani

Avi Gopani is a technology journalist who seeks to analyse industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories curated with a focus on the evolving technologies of artificial intelligence and data analytics.
