Google Brain has open-sourced its Switch Transformer models, including the 1.6-trillion-parameter Switch-C and the 395-billion-parameter Switch-XXL, in T5X/JAX.
Check out the GitHub repository here.
JAX is a machine learning library developed by Google. JAX operations are built on XLA (Accelerated Linear Algebra), a domain-specific compiler for linear algebra, also developed by Google, that uses whole-program optimisations to accelerate computation. For example, XLA makes BERT's training almost 7.3 times faster.
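A minimal sketch of how JAX hands work to XLA (assuming the `jax` package is installed): wrapping a function in `jax.jit` traces it once and compiles the whole computation as a single XLA program, so its operations can be fused rather than executed one by one.

```python
# Sketch: jax.jit compiles the entire function body with XLA.
import jax
import jax.numpy as jnp

@jax.jit
def scaled_sum(x):
    # The multiply and the reduction are compiled together by XLA,
    # letting the compiler fuse them into one optimised kernel.
    return jnp.sum(x * 2.0)

x = jnp.arange(4.0)       # [0., 1., 2., 3.]
print(scaled_sum(x))      # 12.0
```

The first call pays a one-time compilation cost; subsequent calls reuse the compiled XLA executable.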
T5X
T5X is a modular, composable, research-friendly framework for high-performance, configurable, self-service training and inference of sequence models (starting with language) at many scales.
T5X allows many different classes of language tasks to be expressed uniformly, so a single encoder-decoder architecture can handle all of them without any task-specific parameters.
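A small illustration of that uniform text-to-text framing (the task prefixes below are assumptions in the style of T5, not T5X's actual strings): every task becomes a plain "prefix: input" string mapped to a target string, so one encoder-decoder model serves them all without task-specific heads.

```python
# Sketch: casting heterogeneous language tasks into one text-to-text form.
# Prefixes are illustrative, in the style of the T5 paper.
def to_text_to_text(task, text):
    prefixes = {
        "translate": "translate English to German: ",
        "summarize": "summarize: ",
        "sentiment": "sst2 sentence: ",
    }
    # Every task is just a string-in, string-out problem for the model.
    return prefixes[task] + text

print(to_text_to_text("summarize", "T5X is a framework for sequence models."))
# summarize: T5X is a framework for sequence models.
```

Because inputs and targets are both plain text, adding a new task only requires a new prefix and dataset, not a new model head.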
Switch Transformer
In deep learning, models usually reuse the same parameters for all inputs. However, the Switch Transformer uses a Mixture of Experts (MoE) and selects different parameters for each incoming example.
The model is primarily used for NLP research. The Switch Transformer uses an algorithm called switch routing: instead of activating several experts and combining their outputs, it routes each input to a single expert. This simplifies the routing computation and reduces communication costs, since the individual expert models are hosted on different accelerator devices.
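The routing step above can be sketched in a few lines. This is a toy, pure-Python illustration of top-1 ("switch") routing; the router logits and expert functions are made-up stand-ins, not the actual Switch Transformer weights.

```python
# Sketch of top-1 switch routing: each token goes to exactly one expert.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def switch_route(token, router_logits, experts):
    """Pick the single highest-probability expert and scale its output
    by the router's gate value (as in the Switch Transformer paper)."""
    probs = softmax(router_logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return probs[best] * experts[best](token)

# Two toy "experts"; the router logits favour expert 1.
experts = [lambda x: x + 1.0, lambda x: x * 10.0]
out = switch_route(2.0, router_logits=[0.1, 2.0], experts=experts)
# Expert 1 wins, so out = gate * (2.0 * 10.0), with gate = softmax prob.
```

Because only one expert runs per token, compute stays roughly constant as more experts (and parameters) are added, and only the chosen expert's device needs to receive the token.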