Google has introduced the Generalist Language Model (GLaM), a trillion-weight model that uses sparsity. Sparsity not only makes the model more efficient to train and serve, but also lets it achieve competitive performance on multiple few-shot learning tasks. In terms of performance, GLaM demonstrates improved learning efficiency across 29 public NLP benchmarks in seven categories, including language completion, open-domain question answering, and inference tasks.
Over the past few years, leading AI institutes and tech companies have released a string of language models, each bigger and more advanced than the last. GPT-3’s launch was no less than a watershed moment in this space – never before had the world seen a model with 175B parameters. GPT-3 and similar models can perform few-shot learning across a wide array of tasks, including reading comprehension and question answering, with very few or no training examples.
That said, this innovation and superior performance come at a cost. Such models are computationally intensive and have adverse effects on the environment. Researchers are now working to develop models that can be trained and used more efficiently.
To build GLaM, Google’s team assembled a high-quality dataset of 1.6 trillion tokens containing language usage representative of a wide range of use cases.
GLaM is a mixture-of-experts (MoE) model, which means it has different submodels, or experts, that are specialised for different inputs. The experts in each layer are controlled by a gating network that activates experts based on the input data: for each token, the gating network selects the two most appropriate experts to process it. The full version of GLaM has 1.2 trillion total parameters across 64 experts per MoE layer, with 32 MoE layers in total, but it only activates a subnetwork of 97 billion parameters (8% of 1.2 trillion) per token prediction during inference. Compared with the Megatron-Turing model, GLaM is on par on the seven respective tasks within a 5 percent margin, while using 5x less computation during inference.
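To make the routing idea concrete, here is a minimal sketch of top-2 expert selection in a single MoE layer, written in plain NumPy. The layer sizes, weight shapes, and the `moe_layer` function are illustrative assumptions for this example, not GLaM’s actual implementation.

```python
import numpy as np

# Minimal sketch of top-2 expert routing in one MoE layer.
# All sizes and names here are illustrative, not GLaM's real code.

rng = np.random.default_rng(0)

num_experts = 64   # experts per MoE layer in the full GLaM
d_model = 8        # tiny hidden size, for illustration only

# Gating network: a learned projection from the token representation
# to one score per expert.
gate_weights = rng.normal(size=(d_model, num_experts))

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

def moe_layer(token):
    """Route one token through its top-2 experts and combine their outputs."""
    scores = token @ gate_weights                  # one score per expert
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over experts
    top2 = np.argsort(probs)[-2:]                  # indices of the 2 best experts
    # Only the two selected experts run; the other 62 stay inactive
    # for this token, which is where the inference savings come from.
    output = sum(probs[i] * (token @ experts[i]) for i in top2)
    return output, top2

token = rng.normal(size=(d_model,))
out, chosen = moe_layer(token)
print("experts activated for this token:", chosen)
```

Because only two of the 64 experts run per token, the compute per prediction scales with the activated subnetwork rather than the full parameter count, which is why GLaM can hold 1.2 trillion parameters yet spend far less computation at inference time than a dense model of comparable quality.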