
How to train compute-optimal large language models?

New research from DeepMind investigates the optimal model size and number of training tokens for a transformer language model under a given compute budget. The team trained over 400 language models ranging from 70 million to 16 billion parameters on 5 to 500 billion tokens and found that, for compute-optimal training, the model size and the number of training tokens should be scaled equally: every doubling of model size should be accompanied by a doubling of the training tokens.
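In rough terms, training compute for a dense transformer is often approximated as C ≈ 6ND FLOPs, where N is the parameter count and D the number of training tokens; under equal scaling, a k-fold compute budget therefore supports roughly √k more parameters and √k more tokens. The sketch below is a minimal illustration of this arithmetic, not code from the paper, and the 1-billion-parameter starting point is purely hypothetical.

```python
# Minimal sketch of the equal-scaling rule, assuming training compute C ~ 6 * N * D FLOPs
# (N = parameters, D = training tokens). Not DeepMind's code; numbers are illustrative.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

def scale_equally(n_params: float, n_tokens: float, budget_multiplier: float):
    """Scale parameters and tokens equally for a larger compute budget."""
    factor = budget_multiplier ** 0.5
    return n_params * factor, n_tokens * factor

n, d = 1e9, 20e9  # hypothetical starting point: 1B parameters, 20B tokens
print(training_flops(2 * n, 2 * d) / training_flops(n, d))  # doubling both -> ~4x compute
print(scale_equally(n, d, 10.0))  # 10x budget -> ~3.16x parameters and ~3.16x tokens
```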

Rise of large language models

This is truly the age of large language models. When the GPT-3 model was introduced, it caught the fancy of the research world: 175 billion parameters were unheard of. It has been two years since its introduction, and in that time several models have been launched, each larger than the previous one. These large autoregressive transformers display impressive performance on many tasks across various evaluation protocols, including zero-shot, few-shot, and fine-tuned settings.

This impressive performance comes at the cost of massive compute and energy requirements, which has been a subject of much debate. The negative implications of such large models have been raised time and again; one prominent example is AI researcher Timnit Gebru, who was allegedly ousted from Google over a paper she co-authored that discussed the downsides of building, maintaining, and training such massive models.

The research

The training compute budget is often calculated in advance, and since it is typically feasible to train these large models only once, it becomes critical to accurately estimate the best model hyperparameters for a given compute budget. Earlier work has shown that there is a power-law relationship between the number of parameters and the performance of an autoregressive language model.
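The power law referred to here is usually written, in the notation of earlier scaling-law work, as the loss falling off as a power of the parameter count; the constants are fitted empirically and are not restated in this article:

```latex
% Commonly cited power-law form: loss L as a function of parameter count N,
% with empirically fitted constants N_c and \alpha_N (values omitted here).
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```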

An earlier study showed that, to be compute optimal, large models should not be trained to their lowest possible loss. While the DeepMind researchers reach the same conclusion in their recent study, they also estimate that large models should be trained on far more training tokens than previously recommended. The earlier study suggested that for a tenfold increase in computational budget, the model size should increase 5.5 times and the number of training tokens 1.8 times. The DeepMind study, in contrast, shows that model size and the number of training tokens should scale in equal proportions.
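To make the difference concrete, the short sketch below compares the two prescriptions for a tenfold compute increase, using only the multipliers quoted above; it is illustrative arithmetic, not code from either study.

```python
# Illustrative comparison of the two scaling prescriptions for a 10x compute budget.
import math

budget_multiplier = 10.0

# Earlier prescription: parameters x5.5 and training tokens x1.8 per 10x compute.
old_params_factor, old_tokens_factor = 5.5, 1.8

# DeepMind prescription: parameters and tokens scale equally, i.e. by sqrt(10) each.
new_params_factor = new_tokens_factor = math.sqrt(budget_multiplier)

print(f"earlier study: params x{old_params_factor:.2f}, tokens x{old_tokens_factor:.2f}")
print(f"DeepMind:      params x{new_params_factor:.2f}, tokens x{new_tokens_factor:.2f}")
# earlier study: params x5.50, tokens x1.80
# DeepMind:      params x3.16, tokens x3.16
```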

Based on the estimated compute-optimal frontier, the DeepMind researchers predicted that, for the compute used to train Gopher (a 280-billion-parameter language model), an optimal model should be four times smaller and trained on four times more tokens. They verified this by training a compute-optimal 70-billion-parameter model, Chinchilla, on 1.4 trillion tokens. Chinchilla outperformed its larger counterpart, Gopher, and its smaller size also considerably reduces inference cost, which facilitates downstream use on smaller hardware. The benefits of a more optimally trained smaller model therefore extend beyond the immediate gains in performance.
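A quick back-of-the-envelope check using only the figures above: 1.4 trillion tokens over 70 billion parameters works out to roughly 20 training tokens per parameter, the ratio frequently quoted as a rule of thumb derived from this work. The arithmetic below is illustrative and assumes the same C ≈ 6ND approximation as earlier.

```python
# Back-of-the-envelope arithmetic from the figures quoted above (illustrative only).
chinchilla_params = 70e9    # 70 billion parameters
chinchilla_tokens = 1.4e12  # 1.4 trillion training tokens
gopher_params = 280e9       # 280 billion parameters

print(gopher_params / chinchilla_params)          # 4.0 -> "four times smaller"
print(chinchilla_tokens / chinchilla_params)      # ~20 tokens per parameter
print(6 * chinchilla_params * chinchilla_tokens)  # ~5.9e23 FLOPs, assuming C ~ 6*N*D
```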

Need for quality datasets

The DeepMind research calls for an increased focus on dataset scaling, which in turn is only beneficial when the data is of high quality. “Larger datasets will require extra care to ensure train-test set overlap is properly accounted for, both in the language modelling loss but also with downstream tasks,” the authors wrote.

Apart from this, the research community must also account for the ethical and privacy concerns associated with such large models. As observed in the past, large datasets collected from the web contain toxic language, biases, and private information. A better understanding of how large language models perform and interact remains an important area for future research.

Read the full paper here.

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
