A Closer Look at ChatGPT’s Limitations: What You Need to Know


Since the release of ChatGPT, people have been trying to use it across sectors such as healthcare, education, and the legal system. While there are plenty of use cases for the chatbot, expectations keep rising. Schools are increasingly banning it, and healthcare experts are questioning whether using it is even ethical.

The chatbot is built on GPT-3.5, a model with billions of parameters trained on vast volumes of text. But as soon as you ask it about something recent, it blurts out, “I’m sorry, but I am a conversational bot trained on large volumes of texts and cannot give information about current events”.

The lack of a connection to the internet is one of the chatbot’s biggest limitations, as it cannot fetch new information. OpenAI states this explicitly: the model is trained on data up to 2021, has no knowledge of anything beyond that, and can occasionally be incorrect.



Addressing this gap, several developers have built chatbots that can retrieve fresh, live information. One of them is YouChat, which summarises information from the internet while also citing its sources. You.com is best known for its search engine, but it also offers coding and image-generation tools.

Questionable IQ

Something OpenAI does not advertise, but which quickly exposes itself, is the limit of ChatGPT’s mathematical and analytical capabilities. Ask it to perform simple operations like addition and subtraction and the results are accurate, much like the answers Google gives out. But as soon as you add multiple layers to the calculation, or pose a predictive problem, the chatbot stumbles.
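Multi-step arithmetic of this kind is, of course, trivial to verify locally rather than trusting the chatbot. A minimal sketch, using a made-up word problem of the layered sort described above:

```python
# A hypothetical layered word problem of the kind that trips up ChatGPT:
# 12 items at 37 each, plus 15% tax, minus a flat discount of 50.
# Each step is trivial; it is the layering that the chatbot gets wrong.

items = 12
unit_price = 37
subtotal = items * unit_price  # 12 * 37 = 444
tax = subtotal * 15 // 100     # 15% of 444 = 66 (integer division)
total = subtotal + tax - 50    # 444 + 66 - 50 = 460

print(subtotal, tax, total)
```

Any calculator or one-line script settles the question instantly, which is exactly why the chatbot’s confident wrong answers stand out.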


The summarisation ability of the chatbot, where you feed it papers and ask it to extract the essence, was heavily tested by users. Oftentimes, the software fails to understand the context of a paper and spews out unrelated, incoherent results. If you ask it to write something for a scientific paper, be sure to cross-check everything, as it can easily make things up.

For example, ask ChatGPT to write a 300-word article on a topic and it might overshoot the limit or fall short of it. A rough ballpark is fine, but if you paste the generated article back and ask it for the word count, it can return an imaginary number nowhere close to 300.
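Rather than asking the chatbot to count its own words, a whitespace split in a few lines of Python gives a dependable count (the `sample` string here is just a stand-in for generated text):

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words in a piece of text."""
    return len(text.split())

sample = "The quick brown fox jumps over the lazy dog"
print(word_count(sample))  # 9 words
```

A whitespace split is only a proxy (hyphenation and punctuation can shift the tally by a few words), but it is far more reliable than the model’s self-reported count.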

The same goes for logical reasoning. Understanding context is difficult for any chatbot, and ChatGPT often fails miserably at it.

Zero Emotional Quotient

Suppose the conversational bot does solve your mathematical or scientific problem. But what if you want to interact with it in another language? Built on GPT-3.5, which was trained predominantly on English text, ChatGPT performs noticeably worse outside English. Moreover, it does not understand the emotional context of the input text and can blabber out responses that have little to do with the original query.

While ChatGPT is inaccurate on several counts, expecting honesty from it is the fault of the user, not the chatbot. Moreover, its humour is painfully unfunny and can be biased as well, since it is trained on human data from the internet. It can, though, be unintentionally funny because of the things it produces.

But Who Is to Blame?

ChatGPT is trained on internet data and is bound to reproduce the same biases and inaccuracies the internet is filled with. The ethical questions about ChatGPT echo those raised about every other technology released in the past.

Toby Walsh, Professor of AI at the University of New South Wales, commented on the recent banning of ChatGPT in schools:

“We don’t want to destroy literacy, but did calculators destroy numeracy?” 

There are several limitations one can point out about ChatGPT, but the important thing to remember is that it is still in beta and under development. It was built for natural language processing tasks and trained on static data, and it therefore fails outside those bounds.


Mohit Pandey
Mohit is a technology journalist who dives deep into the Artificial Intelligence and Machine Learning world to bring out information in simple and explainable words for the readers. He also holds a keen interest in photography, filmmaking, and the gaming industry.
