A Closer Look at ChatGPT’s Limitations: What You Need to Know

ChatGPT is trained on internet data and is bound to reproduce the same biases and inaccuracies that the internet is filled with.

Since the release of ChatGPT, people have been trying to use it in sectors such as healthcare, education, and legal services. Though one can find plenty of use cases for the chatbot, expectations of it keep rising. The trend of banning the chatbot in education is growing, and healthcare experts are also questioning whether it is ethical to use it at all.

The chatbot is built on GPT-3.5, a model with billions of parameters trained on vast volumes of text. But as soon as you ask it about something recent, it blurts out, “I’m sorry, but I am a conversational bot trained on large volumes of texts and cannot give information about current events.”

The lack of a connection to the internet is one of the chatbot’s biggest limitations, as it cannot fetch new information. OpenAI states this explicitly: the model is trained on data only up to 2021, has no knowledge of anything beyond that point, and can occasionally be incorrect.


Addressing this issue, various developers have built chatbots that can retrieve fresh, live information. One of them is YouChat, a chat service that summarises information from the internet while also citing its sources. ‘You’ is best known for its search engine, but it also provides coding and image-generation tools.
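The approach described here, fetching fresh results from the web and handing them to a language model together with their sources, can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function names are ours and the search step is a stub, not YouChat’s actual pipeline or any real search API.

```python
# A minimal, hypothetical sketch of retrieval-augmented chat: fetch fresh
# snippets for a query, then build a prompt that asks the model to answer
# using only those snippets and to cite them. The search step is stubbed.

def search_web(query: str) -> list[dict]:
    # Stand-in for a real search API call; returns snippets with their sources.
    return [
        {"url": "https://example.com/news", "snippet": "Example snippet about " + query},
    ]

def build_prompt(query: str, results: list[dict]) -> str:
    sources = "\n".join(f"[{i+1}] {r['url']}: {r['snippet']}" for i, r in enumerate(results))
    return (
        "Answer the question using only the sources below and cite them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_prompt("latest AI chatbot news", search_web("latest AI chatbot news"))
print(prompt)  # this prompt would then be sent to the language model
```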

Questionable IQ

Something that OpenAI does not advertise, but that quickly becomes apparent, is the limit of ChatGPT’s mathematical and analytical capabilities. If you ask it to perform simple operations like addition and subtraction, the results are accurate, much like the answers Google returns. But as soon as you add multiple layers to the calculation, or pose a predictive problem, the chatbot gets muddled.

https://twitter.com/SarahDRasmussen/status/1609972620761473027
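The kind of layered calculation that trips the chatbot up is easy to verify outside it. A quick sketch (the expression below is our own illustrative example, not one taken from the tweet above):

```python
# Simple one-step arithmetic is easy to check, and so is a layered calculation;
# computing the ground truth locally makes it obvious when the chatbot's answer drifts.
simple = 452 + 391                        # one step: 843
layered = (17 * 23 + 452) / 3 - 8 ** 2    # several steps: 217.0
print(simple, layered)
```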

The summarisation side of the chatbot, where you feed it papers and ask it to distil the essentials, has been heavily tested by users. The software often fails to understand the context of a paper and spews out unrelated, incoherent results. If you ask it to write something for a scientific paper, be sure to cross-check it thoroughly, as it can easily make things up.

For example, if you tell ChatGPT to write a 300-word article on a topic, it might overshoot the limit or fall short of it. A ballpark word count is fine, but if you paste the generated article back in and ask for its word count, it can return a figure nowhere near 300. We tried it ourselves.
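Rather than asking the chatbot to count the words of its own output, the tally takes one line to check locally. A minimal sketch, with a placeholder string standing in for the generated article:

```python
# Count the words of a generated draft yourself instead of trusting the
# model's own (often invented) estimate.
def word_count(text: str) -> int:
    return len(text.split())

draft = "Paste the generated article here to check its length."
print(word_count(draft))  # 9 for this placeholder sentence
```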

The same goes for logical reasoning. Understanding context is hard for any chatbot to achieve, and ChatGPT often fails at it miserably.

Zero Emotional Quotient

Suppose the conversational bot does solve your mathematical or scientific problem. What if you want to interact with it in another language? Built on GPT-3.5, whose training data is overwhelmingly English, ChatGPT performs noticeably worse outside English. Moreover, it does not understand the emotional context of the input text and can produce replies that have little to do with the original query.

While ChatGPT is inaccurate on several counts, expecting factual reliability from it is a misjudgement on the user’s part rather than a failing of the chatbot itself. Moreover, its humour is painfully unfunny and can be biased, having been trained on human data from the internet, though it can be unintentionally funny because of the things it produces.

https://twitter.com/haltakov/status/1612928185230061569

But Who Is to Blame?

ChatGPT is trained on internet data and is bound to reproduce the same biases and inaccuracies that the internet is filled with. The ethical questions around ChatGPT go back to the same questions asked of every other technology released in the past.

Toby Walsh, Professor of AI at the University of New South Wales, commented on the recent banning of ChatGPT in schools:

“We don’t want to destroy literacy, but did calculators destroy numeracy?” 

There are several limitations one can point out about ChatGPT, but the important thing to remember is that it is still in a beta phase and under development. It was built purely for natural language processing tasks, trained on static data, and thus falls short on other fronts.

