GPT-Neo: The Open-Source Cure For GPT-3 FOMO

GPT-3 was the largest language model when OpenAI released it last year. Now, Google Brain’s 1.6-trillion-parameter language model (compared to GPT-3’s 175 billion parameters) has replaced GPT-3 as the largest. Both GPT-3 and Google Brain’s model are transformer-based neural networks.

Developers across the world were waiting for the arrival of GPT-3 with bated breath. But much to their disappointment, and in a departure from its earlier signals, OpenAI exclusively licensed GPT-3 to Microsoft. Interestingly, GPT and GPT-2 were open-source projects.

Ever since, developers and researchers across the internet have longed for an open-source version of GPT-3. Now, it looks like their wish has finally been granted.

Enter GPT-Neo, the brainchild of EleutherAI.

Connor Leahy, Leo Gao, and Sid Black founded EleutherAI in July 2020. It is a decentralized grassroots collective of volunteer developers, engineers, and researchers focused on AI alignment, scaling, and open-source AI research.

According to EleutherAI’s website, GPT‑Neo is the code name for a family of transformer-based language models loosely styled around the GPT architecture. The stated goal of the project is to replicate a GPT‑3 DaVinci-sized model and open-source it to the public, for free.

GPT‑Neo is an implementation of model- and data-parallel GPT‑2- and GPT‑3-like models, using Mesh TensorFlow for distributed support. The codebase is optimized for TPUs, but it also works on GPUs. Interestingly, Leahy had earlier attempted to replicate GPT-2 through Google’s TensorFlow Research Cloud (TFRC) program, experience that worked to the team’s advantage while building GPT-Neo.
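The data-parallel half of that split can be illustrated with a toy sketch: each worker computes gradients on its own shard of the batch, and the gradients are then averaged before a single weight update. This is a conceptual illustration in plain Python, not EleutherAI’s actual Mesh TensorFlow code; the one-parameter “model” and learning rate are made up for the example.

```python
def gradient(weights, example):
    # Hypothetical per-example gradient for a 1-parameter model:
    # loss = (w*x - y)^2, so d(loss)/dw = 2x(wx - y).
    x, y = example
    return 2 * x * (weights * x - y)

def data_parallel_step(weights, batch, num_workers, lr=0.01):
    # Shard the batch across workers (round-robin).
    shards = [batch[i::num_workers] for i in range(num_workers)]
    # Each "worker" computes the mean gradient over its shard.
    worker_grads = [
        sum(gradient(weights, ex) for ex in shard) / len(shard)
        for shard in shards if shard
    ]
    # "All-reduce": average the workers' gradients, then update once.
    avg_grad = sum(worker_grads) / len(worker_grads)
    return weights - lr * avg_grad

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, num_workers=2)
print(round(w, 3))  # converges toward 2.0, since the data follows y = 2x
```

Model parallelism, the other axis Mesh TensorFlow handles, instead splits the weights of a single layer across devices; that is what makes multi-billion-parameter models fit in memory at all.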

The researchers have now announced the release of two mid-sized models in their GPT-Neo library, pre-trained with 1.3 billion and 2.7 billion parameters: a far cry from GPT-3’s 175 billion parameters, but in the ballpark of GPT-2’s 1.5 billion.
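The gap between these sizes follows from the models’ shapes. A common back-of-the-envelope rule is that a GPT-style decoder has roughly 12 · n_layers · d_model² weights in its attention and feed-forward blocks, plus a vocab_size · d_model embedding matrix. The layer counts and widths below are the publicly reported shapes of these models; this is a rough estimate, not an exact count:

```python
# Rough parameter-count estimate for GPT-style decoder-only transformers:
# ~12 * n_layers * d_model^2 weights in attention + MLP blocks,
# plus a vocab_size * d_model token-embedding matrix.
def approx_params(n_layers, d_model, vocab_size=50257):
    return 12 * n_layers * d_model**2 + vocab_size * d_model

models = {
    "GPT-2 XL (1.5B)": (48, 1600),
    "GPT-Neo 1.3B":    (24, 2048),
    "GPT-Neo 2.7B":    (32, 2560),
    "GPT-3 (175B)":    (96, 12288),
}
for name, (layers, width) in models.items():
    print(f"{name}: ~{approx_params(layers, width) / 1e9:.2f}B parameters")
```

Running this recovers figures close to each model’s advertised size, which shows how much of the difference comes simply from depth and width.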

GPT-Neo: A GPT-3-Sized Model

The researchers are working on two repositories: GPT-Neo (for training on TPUs) and GPT-NeoX (for training on GPUs).

The original codebase for GPT-Neo was built on TPUs, Google’s custom AI accelerator chips. The EleutherAI team realised that even the generous amount of TPUs provided through TFRC wouldn’t be sufficient to train GPT-Neo. The work got a huge boost when CoreWeave, a US-based cryptocurrency miner, approached EleutherAI. CoreWeave offered the team access to its hardware in exchange for an open-source GPT-3-like model. No money changed hands as part of the deal.

While the released models are quite similar to GPT-3, they differ in the training dataset, which was refined through extensive bias analysis. The dataset, called The Pile, is an 835GB corpus consisting of 22 smaller datasets combined to ensure broad generalisation abilities. GPT-Neo was trained on The Pile, and the models’ weights and configs can be freely downloaded.
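Combining many smaller datasets into one training corpus typically means sampling from each component with a fixed weight, so the model sees a controlled mix of domains. The sketch below illustrates that idea in plain Python; the component names and weights are invented for the example, and The Pile’s actual 22 components and their weights are documented by EleutherAI.

```python
import random

# Illustrative sketch of a Pile-style weighted training mixture.
# Component names and weights below are made up for illustration.
components = {
    "web_text": 0.50,
    "code":     0.20,
    "papers":   0.20,
    "books":    0.10,
}

def sample_source(rng):
    # Draw one component name with probability proportional to its weight.
    r = rng.random()
    cumulative = 0.0
    for name, weight in components.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # floating-point edge case: return the last component

rng = random.Random(0)
counts = {name: 0 for name in components}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # counts come out roughly proportional to the weights
```

Sampling by weight, rather than simply concatenating the datasets, lets small but high-quality components punch above their byte count during training.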

With the GPT-Neo implementation, researchers can build GPT-2- and GPT-3-like models and scale them up to GPT-3 sizes and beyond, using the Mesh TensorFlow library.

It includes alternative model architectures and attention implementations that should enable scaling to even larger model sizes and context lengths, including local attention, linear attention, Mixture of Experts, axial positional embedding, and masked language modelling.
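Local attention, the first variant in that list, restricts each token to attending over a fixed-size window of recent tokens instead of the full causal history, which is what makes longer context lengths affordable. A minimal sketch of the masking idea, in plain Python and purely illustrative of the concept rather than GPT-Neo’s actual implementation:

```python
# Local causal attention mask: position i may attend only to itself
# and the previous (window - 1) positions, never to future positions.
def local_causal_mask(seq_len, window):
    """mask[i][j] is True when position i may attend to position j."""
    return [
        [max(0, i - window + 1) <= j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = local_causal_mask(seq_len=5, window=2)
for row in mask:
    print("".join("x" if allowed else "." for allowed in row))
```

Because each row has at most `window` allowed positions, the attention cost grows linearly with sequence length rather than quadratically, at the price of a bounded receptive field per layer.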

Way Forward

Researchers are now focusing on developing the GPT-NeoX library using the hardware and compute capabilities provided by CoreWeave. EleutherAI is currently waiting for CoreWeave to finish building the final hardware for training. GPT-NeoX will offer features such as 3D parallelism, model structuring, and straightforward configuration through configuration files.

GPT-NeoX is under active development and will be based on the DeepSpeed library. It is designed to train models with hundreds of billions of parameters or more.

Srishti Deoras
Srishti currently works as Associate Editor at Analytics India Magazine. When not covering the analytics news, editing and writing articles, she could be found reading or capturing thoughts into pictures.
