JAX (Just After eXecution) is a machine/deep learning library developed by Google Research. All JAX operations are implemented in terms of XLA, or Accelerated Linear Algebra. XLA, also developed by Google, is a domain-specific compiler for linear algebra that uses whole-program optimisations to accelerate computing; Google reports that XLA speeds up BERT training by almost 7.3 times.
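As a rough illustration of how JAX hands work to XLA, the snippet below wraps a small function in `jax.jit`, which traces it and compiles the whole computation into one fused XLA program. The function, shapes, and values are illustrative assumptions, not taken from the article.

```python
import jax
import jax.numpy as jnp

# A minimal sketch of XLA compilation in JAX: jax.jit traces dense_layer
# once per input shape and compiles it to a single optimised XLA program
# for the available backend (CPU, GPU, or TPU).
@jax.jit
def dense_layer(w, b, x):
    # Matrix multiply, bias add, and tanh are fused by XLA's
    # whole-program optimisation rather than run as separate ops.
    return jnp.tanh(x @ w + b)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (3, 2))  # illustrative weights
b = jnp.zeros(2)
x = jnp.ones((4, 3))                # a batch of 4 inputs

out = dense_layer(w, b, x)
print(out.shape)  # (4, 2)
```

Subsequent calls with the same input shapes reuse the compiled program, which is where most of the speedup comes from.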
T5X is a modular, composable, research-friendly framework for high-performance, configurable training and inference of sequence models (starting with language) at many scales.
T5X allows many different classes of language tasks to be expressed uniformly, so a single encoder-decoder architecture can handle all of them without any task-specific parameters.
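To make the "uniform expression" idea concrete, the sketch below shows the T5-style text-to-text framing that T5X builds on: every task becomes a string-to-string mapping, so one model serves them all. The task prefixes follow the original T5 paper's conventions; none of this is T5X's actual API.

```python
# Illustrative only: T5-style text-to-text framing. Each task, however
# different, is cast as (source string -> target string), so a single
# encoder-decoder model needs no task-specific heads or parameters.
examples = [
    # translation
    ("translate English to German: That is good.", "Das ist gut."),
    # grammatical-acceptability classification (labels become words)
    ("cola sentence: The course is jumping well.", "not acceptable"),
    # summarisation
    ("summarize: authorities dispatched emergency crews on Tuesday ...",
     "authorities sent crews ..."),
]

for source, target in examples:
    # One hypothetical model.predict(source) -> target would handle
    # every row here; only the text prefix tells it which task to do.
    print(f"{source!r} -> {target!r}")
```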
In deep learning, models usually reuse the same parameters for all inputs. The Switch Transformer instead uses a Mixture of Experts (MoE), selecting different parameters for each incoming example.
The model is primarily used for NLP research. The Switch Transformer uses an algorithm called Switch Routing: instead of activating several experts and combining their outputs, it routes each input to a single expert. This simplifies the routing computation and reduces communication costs, since the individual expert models are hosted on different accelerator devices.
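The routing step described above can be sketched in a few lines of NumPy. This is a toy, single-device illustration under assumed sizes (4 tokens, model dimension 8, 3 experts), not the Switch Transformer implementation: a router scores each token, and each token is dispatched to only its top-probability expert.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 tokens, model dimension 8, 3 experts.
tokens = rng.normal(size=(4, 8))
router_w = rng.normal(size=(8, 3))      # router weights (learned in practice)
experts_w = rng.normal(size=(3, 8, 8))  # one weight matrix per expert

# Router logits -> softmax probabilities over experts.
logits = tokens @ router_w
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Switch Routing: each token goes to its single highest-probability
# expert (top-1), instead of a weighted blend of several experts.
chosen = probs.argmax(axis=-1)

out = np.empty_like(tokens)
for i, e in enumerate(chosen):
    # Scaling the expert output by its gate probability keeps the
    # router differentiable, so it still learns during training.
    out[i] = probs[i, e] * (tokens[i] @ experts_w[e])

print(out.shape)  # (4, 8)
```

Because each token touches exactly one expert, only that expert's device needs to receive it, which is the source of the communication savings the article mentions.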