GPT-3 Is Great. But Not Without Shortcomings

OpenAI’s third-generation Generative Pre-trained Transformer, GPT-3, has been in the news a lot lately, with many experts praising its intuitive ability to write text and even code. Others, however, have pointed out the limitations of the GPT-3 model, including Sam Altman, co-founder of OpenAI.

GPT-3 is trained on a massive dataset covering much of the web and containing about 500 billion tokens, and it packs a humongous 175 billion parameters, more than a 100x increase over GPT-2, which was considered state-of-the-art technology with its 1.5 billion parameters.
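
The “more than 100x” figure checks out with simple back-of-the-envelope arithmetic, using the parameter counts reported for each model:

```python
# Rough scale comparison between GPT-2 and GPT-3, using the
# parameter counts reported for the respective models.
gpt2_params = 1.5e9    # GPT-2: 1.5 billion parameters
gpt3_params = 175e9    # GPT-3: 175 billion parameters
print(f"GPT-3 is ~{gpt3_params / gpt2_params:.0f}x larger than GPT-2")  # ~117x
```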

Despite all these developments, OpenAI’s GPT-3 is still in an experimental phase. While it is excellent at generating language in all kinds of styles, experts have pointed out real issues. There is undoubtedly a lot of hype around the language model, and that hype overshadows its limitations. Even OpenAI CEO Sam Altman tweeted, “The GPT-3 hype is way too much… AI is going to change the world, but GPT-3 is just a very early glimpse.”

Here we discuss some of the limitations of GPT-3 that still need to be addressed:

Lack Of Semantic Understanding

According to numerous experts, GPT-3 has no real understanding of the words it churns out; it lacks a semantic representation of the real world. This suggests that GPT-3 lacks common sense and can therefore be fooled into generating text that is incorrect, or even racist, sexist and otherwise biased. GPT-3, like most neural network models, is also a black box: it is impossible to see why it makes its decisions.

Experts note that GPT-3 has the same architecture as GPT-2; the only difference is its vastly larger scale. Like its predecessor, GPT-3 suffers from the same shortcomings of failing to grasp real-world sensibility and coherence.

Far From AGI 

Many AI practitioners have argued that the model is nothing more than one big transformer, and that its impressive text generation is only a result of the scale and resources involved in its massive pre-training.

According to Ayush Sharma, an AI professional, GPT-3 may be impressive, but it is not even close to Artificial General Intelligence (AGI). It has no semantic understanding, no causal reasoning, and poor generalisation beyond its training set, and therefore has no “human-agent”-like properties such as a Theory of Mind or Agency.

He wrote, “GPT-3 has little semantic understanding, it is nowhere close to AGI, and is a glorified $10M+ auto-complete software. As is the case with all generative language models, GPT-3 assigns probabilities to strings of tokens and predicts the next likely set of words given a prompt. It remains a glorified auto-complete that has the backing of the Internet-level knowledge repository along with the magic of basic NLP.”
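
To make the “glorified auto-complete” description concrete, here is a minimal sketch of how a generative language model assigns probabilities to the next token and then samples a continuation. Since GPT-3’s weights are not public, the sketch uses the openly available GPT-2 through Hugging Face’s transformers library as a stand-in; the prompt and sampling settings are illustrative.

```python
# A minimal sketch of autoregressive next-token prediction, using
# GPT-2 as a stand-in for GPT-3 (whose weights are not public).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "AI is going to change the world, but"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model outputs a probability distribution over the vocabulary
# for the next token; text generation just repeats this step.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12s}  p={p.item():.3f}")

# Sampling one token at a time yields the "auto-complete" behaviour.
output = model.generate(input_ids, max_new_tokens=20, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```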

Research papers have similarly argued that the hype around language models like GPT-3 should not mislead people into thinking these models are capable of understanding or meaning.

Bias In Generated Text

GPT-3’s text generation can be racially biased, and people have posted many instances of it generating highly irresponsible text. According to Jerome Pesenti, the head of AI at Facebook, GPT-3 is surprising and creative, but it is also unsafe due to harmful biases. Prompted to write tweets from single words – Jews, black, women, holocaust – GPT-3 came up with deeply offensive output. “We need more work on Responsible AI before putting NLG models in production,” he tweeted.

Even OpenAI admits in the GPT-3 paper that its API models exhibit biases that will often appear in the generated text. As the model is trained on text from the world wide web, it mirrors the views people express on the internet, and those views can be crude and even racist at times.

“I don’t believe that GPT-3 is a new paradigm or an advanced technology indistinguishable from magic. GPT-3 and the OpenAI API showcases on social media don’t show potential pitfalls with the model and the API,” Max Woolf, a data scientist at BuzzFeed, wrote on his Medium blog.

Max also pointed to the demo videos and noted that the model is slow; it can take time for the output to come back. This latency can create an unsatisfactory experience for users. Given its 175 billion parameters, the GPT-3 model is expected to be somewhat slow, and there are hardware challenges even in training such a large model.

“I don’t blame OpenAI for the slowness. The model is way too big to fit on a GPU for deployment. No one knows how GPT-3 is actually deployed on OpenAI’s servers, and how much it can scale,” Max wrote.
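
A back-of-the-envelope estimate shows why the model is “way too big to fit on a GPU”. The assumptions below (16-bit weights, a 16 GB GPU of that era) are illustrative, not figures published by OpenAI:

```python
# Rough estimate of GPT-3's memory footprint. Assumes fp16 weights
# (2 bytes per parameter) and a 16 GB GPU; both are illustrative
# assumptions, not figures published by OpenAI.
N_PARAMS = 175e9          # GPT-3 parameter count
BYTES_PER_PARAM = 2       # fp16 weights
GPU_MEMORY_GB = 16        # e.g. a single V100 of that era

weights_gb = N_PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")                          # ~350 GB
print(f"GPUs needed just to hold the weights: ~{weights_gb / GPU_MEMORY_GB:.0f}")
```

Even before accounting for activations and request batching, the weights alone would have to be spread across dozens of accelerators, which goes a long way towards explaining both the deployment cost and the latency.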

Problem With The ML Approach For Natural Language

While massive neural language models such as BERT and GPT-3 are making significant progress on a broad range of NLP tasks, some experts disagree about what this progress means. According to them, there may be overclaims caused by a misunderstanding of the relationship between the linguistic form and the meaning of words.

Walid Saba, NLU Scientist and Co-founder of Ontologoik.AI, wrote, “Data-Driven/ML approaches to NLP/NLU will not (will not ever) result in systems that truly understand natural language and the theoretical/technical proof of this statement exists for those who listen to science.” Saba elaborated by pointing to the transformers AutoModel demo on Hugging Face, saying the demo should be taken down because it can be made to look beyond silly in just a few seconds.
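
The kind of failure Saba describes is easy to probe. Below is a minimal sketch using Hugging Face’s fill-mask pipeline with a standard BERT model; the prompts are illustrative examples of sentences that call for real-world reasoning rather than surface statistics:

```python
# A minimal sketch of the kind of masked-word demo Saba criticises,
# using Hugging Face's fill-mask pipeline with BERT. Models like this
# often pick statistically plausible but semantically wrong fillers.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The trophy did not fit in the suitcase because the [MASK] was too big.",
    "I poured water into the glass until it was [MASK].",
]
for prompt in prompts:
    print(prompt)
    for candidate in unmasker(prompt, top_k=3):
        print(f"  {candidate['token_str']:>10s}  score={candidate['score']:.3f}")
```

The first prompt is a Winograd-style sentence: choosing between “trophy” and “suitcase” requires common-sense reasoning about sizes, which purely form-based training does not reliably supply.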

Research has pointed out that language modelling tasks cannot lead to learning the true meaning of words, because they use only the form of words as training data, while linguistic meaning pertains to the relation between a linguistic form and communicative intent. By this argument, data-driven machine learning approaches will not result in systems that genuinely understand natural language.
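
That argument is visible in the training objective itself. Here is a minimal sketch, assuming a causal language model loaded through Hugging Face’s transformers; note that the loss depends on nothing but the token sequence, i.e., on pure linguistic form:

```python
# The language-modelling objective in miniature: the loss is the
# cross-entropy of predicting each next token from the previous ones.
# Nothing about the world enters the training signal, only token form.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The cat sat on the mat."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Passing labels=input_ids makes the model compute the shifted
# next-token cross-entropy loss internally.
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print(f"Per-token cross-entropy: {loss.item():.3f}")
```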

Vishal Chawla

Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.
