The development of large language generation models is one of the most exciting fields to be in right now, as it finds use across a diverse range of sectors: better customer service, chatbots and virtual assistants, enhanced gaming experiences, improved search engines, and more. Big names such as Meta, Google, Microsoft, and NVIDIA are investing time, energy and money in building large language generation models. Innovation leader DeepMind, which has had path-breaking innovations like AlphaFold, AlphaFold 2, and Enformer in the past, has also come out with something amazing in the language model space. It has introduced a 280 billion parameter transformer language model called Gopher.
Outperforms GPT-3?
DeepMind’s research went on to say that Gopher almost halves the accuracy gap from GPT-3 to human expert performance and exceeds forecaster expectations. It stated that Gopher lifts performance over current state-of-the-art language models across roughly 81% of tasks with comparable results, most notably in knowledge-intensive domains like fact-checking and general knowledge.
DeepMind said that larger models are more likely to generate toxic responses when provided with toxic prompts, but they can also more accurately classify toxicity. Model scale does not significantly improve results in areas like logical reasoning and common-sense tasks. The research team found that Gopher’s capabilities exceed those of existing language models on a number of key tasks, including the Massive Multitask Language Understanding (MMLU) benchmark, where Gopher demonstrates a significant advance towards human expert performance over prior work.
Image: DeepMind
Along with Gopher, DeepMind has also released two other papers: one studies the ethical and social risks associated with large language models, and the other investigates a new architecture with better training efficiency.
Today we're releasing three new papers on large language models. This work offers a foundation for our future language research, especially in areas that will have a bearing on how models are evaluated and deployed: https://t.co/TV05K4zptv 1/ pic.twitter.com/SyWb8qIDk0
— DeepMind (@DeepMind) December 8, 2021
Gopher Explained
In a lengthy 118-page paper, DeepMind takes a deep dive into what Gopher actually is. The paper explains that DeepMind trained the Gopher family of models on MassiveText, a collection of large English-language text datasets from diverse sources such as web pages, books, news articles, and code. The data pipeline includes text quality filtering, removal of repetitious text, deduplication of similar documents, and removal of documents with significant test-set overlap. The researchers found that successive stages of this pipeline improve downstream language model performance, emphasising the importance of dataset quality.
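To make that pipeline description concrete, here is a minimal Python sketch of what the four cleaning stages could look like. It is not DeepMind’s actual implementation: the function names, heuristics, and thresholds are assumptions chosen purely to illustrate the stages the paper names.

```python
# Illustrative sketch of a MassiveText-style cleaning pipeline.
# The stage names follow the paper's description; the heuristics,
# thresholds, and helper functions are assumptions, not DeepMind's code.
from typing import Iterable, List, Set


def quality_filter(doc: str) -> bool:
    """Crude quality heuristics: drop very short or symbol-heavy documents."""
    words = doc.split()
    if len(words) < 50:
        return False
    alpha_ratio = sum(w.isalpha() for w in words) / len(words)
    return alpha_ratio > 0.8


def remove_repetition(doc: str, max_dup_line_ratio: float = 0.3) -> bool:
    """Drop documents whose lines are mostly repeats of each other."""
    lines = [l.strip() for l in doc.splitlines() if l.strip()]
    if not lines:
        return False
    dup_ratio = 1 - len(set(lines)) / len(lines)
    return dup_ratio <= max_dup_line_ratio


def ngrams(doc: str, n: int = 13) -> Set[str]:
    """Word n-grams used to detect overlap with evaluation sets."""
    toks = doc.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}


def clean_corpus(docs: Iterable[str], test_docs: List[str]) -> List[str]:
    """Quality filtering, repetition removal, deduplication, then
    removal of documents overlapping the test sets, in that order."""
    test_ngrams = set().union(*(ngrams(d) for d in test_docs)) if test_docs else set()
    seen, kept = set(), []
    for doc in docs:
        if not quality_filter(doc) or not remove_repetition(doc):
            continue
        key = hash(doc)  # exact dedup; a real pipeline would use near-dedup (e.g. MinHash)
        if key in seen:
            continue
        if ngrams(doc) & test_ngrams:  # drop docs that leak evaluation data
            continue
        seen.add(key)
        kept.append(doc)
    return kept
```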
MassiveText contains 2.35 billion documents, or about 10.5 TB of text. The research team added, “Since we train Gopher on 300B tokens (12.8% of the tokens in the dataset), we subsample from MassiveText with sampling proportions specified per subset (books, news, etc.). We tune these sampling proportions to maximise downstream performance.”
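A quick back-of-the-envelope sketch shows how that subsampling arithmetic works out. The 300B-token training budget and the 12.8% figure come from the quoted passage, and the 27% books share is reported in the paper; the remaining subset proportions and token counts below are placeholder assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the per-subset subsampling described above.
# Only the 300B budget, the 12.8% figure, and the 27% books share are from
# the source; everything else is a made-up placeholder.
TOKEN_BUDGET = 300e9                    # tokens Gopher is trained on
DATASET_TOKENS = TOKEN_BUDGET / 0.128   # implied MassiveText size (~2.34T tokens)
print(f"Implied MassiveText size: {DATASET_TOKENS / 1e12:.2f}T tokens")

# sampling proportion per subset (illustrative, except books at 27%)
sampling = {"web": 0.50, "books": 0.27, "news": 0.10, "code": 0.03, "other": 0.10}

# available tokens per subset (placeholder numbers)
available = {"web": 1.3e12, "books": 0.5e12, "news": 0.3e12, "code": 0.1e12, "other": 0.14e12}

for subset, p in sampling.items():
    drawn = p * TOKEN_BUDGET
    epochs = drawn / available[subset]  # >1 means that subset is seen more than once
    print(f"{subset:6s}: {drawn / 1e9:6.1f}B tokens drawn, {epochs:.2f} epochs")
```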
The War of the Large Language Models
2021 has been a revolutionary year for the development of large language models.
We all know how path-breaking the GPT-3 autoregressive language model from OpenAI, the San Francisco-based artificial intelligence research laboratory, has been in the field of language generation. Launched last year, GPT-3’s full version has a massive 175 billion machine learning parameters. Other tech giants have also been paying attention to this field and stepping up their game. AI21 Labs released Jurassic-1, which has 178 billion parameters. Gopher is larger than both of them, standing at a whopping 280 billion parameters.
But it is definitely not the largest. Microsoft and NVIDIA teamed up earlier this year to bring out the Megatron-Turing Natural Language Generation (MT-NLG) model with an astounding 530 billion parameters. Google has developed and benchmarked Switch Transformers, a technique for training language models with over a trillion parameters. The Chinese government-backed Beijing Academy of Artificial Intelligence (BAAI) has introduced Wu Dao 2.0 with 1.75 trillion parameters.
Who Wins the Race?
In the research paper, DeepMind draws a comparison between Gopher and existing models. Gopher is said to outperform the current state of the art on 100 tasks (81% of all tasks). The baseline models include large language models such as GPT-3 (175 billion parameters), Jurassic-1 (178 billion parameters), and Megatron-Turing NLG (530 billion parameters). The team found that Gopher showed the most uniform improvement across the reading comprehension, humanities, ethics, STEM and medicine categories, along with a general improvement on fact-checking. The general trend is less improvement in reasoning-heavy tasks (say, Abstract Algebra) and a larger, more consistent improvement in knowledge-intensive tests (say, General Knowledge).
Image: DeepMind’s Scaling Language Models: Methods, Analysis & Insights from Training Gopher
For language model benchmarks, the firm expands on the relative performance of Gopher versus the current 178 billion parameter SOTA model Jurassic-1 and the 175 billion parameter GPT-3. Gopher does not outperform the state of the art on 8 of the 19 tasks, underperforming on Ubuntu IRC and DM Mathematics in particular, which may be due to a poor tokeniser representation for numbers. Gopher demonstrates improved modelling on 11 of the 19 tasks, in particular books and articles. This may be down to the heavy use of book data in MassiveText (a sampling proportion of 27%, compared to 16% in GPT-3).
Image: DeepMind’s Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Too Early to Know How Impactful Gopher Can Be
Just like GPT-3 did at its launch, Gopher has created a massive buzz.
BREAKING: DeepMind introduces a 280B parameter language model named Gopher. Investigates its effectiveness in reading comprehension & other complex tasks such as logical reasoning. Ethics & risks are also discussed.
article: https://t.co/g9uBvRhqxt
paper: https://t.co/4yR8l8yubt pic.twitter.com/vsfadzM3XR
— elvis (@omarsar0) December 8, 2021
DeepMind's 280 billion parameter Gopher model sets a new state-of-the-art on our benchmark of 57 knowledge areas.
— Dan Hendrycks (@DanHendrycks) December 8, 2021
They also claim to have a supervised model that gets 63.4% on the benchmark's professional law task–in many states, that's accurate enough to pass the bar exam! https://t.co/5asjgpTW8O pic.twitter.com/t29pWPsDy6
But GPT-3, described as revolutionary by some, was also criticised by well-known tech leaders. It remains to be seen whether Gopher will draw similar criticism from the tech world; it is too early to tell, as the model has only just been introduced.
As more and more large language models are developed, the need of the hour is faster development of interpretability tools and better data quality so that these models can be understood more deeply. Only then can such models be used for the benefit of society.