GPT-3 Has Set a Benchmark, But It’s Not What You Think

The model with 2.7 billion parameters is much more accurate than the model with 175 billion parameters.

When GPT-3 was released, it was the talk of the town, and it still is, primarily because of the huge dataset the large language model (LLM) was trained on. Many teams building new LLMs treat GPT-3 as a benchmark when launching their products; for instance, MosaicML recently put out a blog titled ‘Mosaic LLMs (Part 2): GPT-3 quality for <$500k’. But is the good old LLM really the benchmark?

Being trained on a huge dataset does not automatically make an LLM like GPT-3 the benchmark for upcoming large language models. The model does not even show better performance in terms of truthfulness than earlier LLMs like GPT-Neo/GPT-J, GPT-2, and T5-based models.

For example, if we compare the percentage of truthful answers across various GPT-3 models, the smaller ones perform better than the larger ones.


Furthermore, if we compare the different versions of GPT-3, we find that the model with 2.7 billion parameters is much more accurate than the model with 175 billion parameters.

So, if the only thing that differentiates GPT-3 from other LLMs, its billions of parameters, does not actually give it an edge, how did it gain such popularity?


One of the main reasons is OpenAI’s developer- and user-friendly API framework, which makes it more likely that the product is used on a daily basis.

Sutanshu Raj, co-founder and CTO, Instoried, told Analytics India Magazine, “OpenAI has provided a user-friendly and developer-friendly API framework. In fact, it’s so simple that even those with no technical expertise can use it in daily life.”
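
To illustrate just how little code a GPT-3 call required, here is a minimal sketch using OpenAI’s pre-1.0 Python SDK and its Completion endpoint; the model name, prompt, and API key placeholder are illustrative assumptions, not details from the article.

```python
# A minimal sketch of a GPT-3 call with OpenAI's pre-1.0 Python SDK.
# The model name and prompt are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # obtained from the OpenAI dashboard

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model name, used here as an example
    prompt="Explain what a large language model is in one sentence.",
    max_tokens=60,
)

print(response["choices"][0]["text"].strip())
```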

Moreover, he said that optimising the language model doesn’t need a lot of work. “The users can easily obtain results that are comparable to those of other models with the right dataset and optimised settings,” Sutanshu said.
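
As a rough sketch of what “optimised settings” can mean in practice, the same endpoint exposes decoding parameters that can be tuned per task without any retraining; the values below are illustrative assumptions, not recommendations from the article.

```python
# Hedged sketch: tuning decoding settings instead of retraining the model.
# All parameter values here are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Paraphrase: The meeting has been moved to Friday.",
    temperature=0.3,        # lower values make output more deterministic
    top_p=1.0,              # nucleus-sampling cutoff
    max_tokens=50,
    frequency_penalty=0.2,  # discourage repeated phrases
)
```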

In addition, he believes that deployment is not a concern for users. “The large models are provided on OpenAI’s servers and can be easily called by adhering to the documentation.”

In a similar vein, Ashish Kumar, Principal Data Scientist at Indium Software, told Analytics India Magazine that GPT-3 remains the large language model of choice, despite the fact that other bigger and multilingual LLMs have arrived on the scene, because of its task-agnostic nature.

“It is trained in a self-supervised manner, and together with prompt engineering and zero/few-shot fine-tuning, it can create solutions for a variety of use cases like paraphrasing a text, code completion, and question answering on a huge text corpus, and not just text generation given a prompt.”
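
A brief sketch of the few-shot pattern Kumar describes: the task is specified entirely in the prompt, so the same model can paraphrase, answer questions, or complete code without task-specific training. The in-context examples below are made up for illustration.

```python
# Sketch of few-shot prompting: the task is defined by in-context
# examples rather than fine-tuning. Prompt content is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

few_shot_prompt = (
    "Paraphrase each sentence.\n\n"
    "Input: The weather is terrible today.\n"
    "Output: Today's weather is awful.\n\n"
    "Input: She finished the report ahead of schedule.\n"
    "Output:"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=few_shot_prompt,
    max_tokens=40,
    temperature=0.2,
    stop=["\n"],  # stop at the end of the single paraphrased line
)

print(response["choices"][0]["text"].strip())
```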

Kumar believes that the freemium model of access to its API through its website, together with other distribution channels like the Hugging Face model repository and cloud-based hosting (by AWS and others), has made it easier for developers to use it in applications. “Another reason is its first mover advantage and its perception as a more stable model than competitors like BLOOM etc., whose focus is more on being multilingual,” he said.
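
For contrast, the Hugging Face route Kumar mentions applies to open-weight models such as GPT-Neo, one of the models the article compares GPT-3 against (GPT-3 itself is served only through OpenAI’s API). A minimal sketch with the transformers library, with an illustrative prompt:

```python
# Open-weight alternative distributed via the Hugging Face model
# repository: GPT-Neo 2.7B. The prompt is an illustrative assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
result = generator("Large language models are popular because", max_new_tokens=30)
print(result[0]["generated_text"])
```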

However, one more thing that cannot be overlooked is the large community OpenAI has built.

Highlighting the role of the huge community that OpenAI has developed, Saurabh Singhal, founder of KnowDis, told Analytics India Magazine, “The genius behind this continuous improvement lies in GPT-3 being made available to the public (through a paid API), which enabled OpenAI to gather vast amounts of user-generated data (customer-submitted prompts) for training the model.”

“One can say that the newer versions of GPT-3 are ‘crowdsourced’, which makes it outperform even the best of the competitor models that lack access to this kind of invaluable real-world training data. Due to it being the first model of this type, its accessibility for use, and the wide-scale coverage in media, GPT-3 continues to receive large amounts of data from a widespread customer base, thus giving it an edge,” said Singhal. 

Lokesh Choudhary
Lokesh enjoys reading a lot and views himself as an armchair technology journalist. He enjoys sharing tales involving technology. His background in linguistics as a subject of study did not prevent him from investigating AI and data science. His email address is lokesh.choudhary@analyticsindiamag.com.
