
Why is Everyone Bashing ChatGPT?

‘ChatGPT is a glorified version of Google Search’.

Unfounded assumptions, bad advice, incorrect information—the biggest source of problems on the internet today is people blindly buying into hype. ChatGPT, which has taken the internet by storm recently, seems to be making it a lot easier for people to add to the chaos.

By now, ChatGPT has become synonymous with exhaustive, human-like responses to questions—whether it is drafting a contract, writing code, or even producing a movie script. The temporarily free chatbot is already changing the way people search for information by answering intricate questions. Sometimes, the understanding capabilities of ChatGPT make you second-guess whether you are actually talking to a human.

So then, why are people bashing ChatGPT? Before we get to that, let’s look at some of the positive aspects of this revolutionary chatbot.

Primary among ChatGPT’s unique characteristics is memory. The bot can remember what was said earlier in the conversation and recount it to the user. This alone sets it apart from competing natural language solutions, which process each query independently and are still working out how to retain context.
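Under the hood, chat-style language model APIs are typically stateless per request, so this kind of "memory" is usually implemented by resending the accumulated conversation history with every call. The sketch below illustrates the pattern; `fake_llm` is a hypothetical stand-in for a real model call, not an actual API.

```python
def fake_llm(messages):
    """Stand-in for a model call: reports how many prior turns it can 'see'."""
    return f"I can see {len(messages)} prior message(s)."

class Conversation:
    """Accumulates turns so each new request carries the full history."""

    def __init__(self):
        self.history = []  # list of (role, text) tuples

    def ask(self, user_text):
        self.history.append(("user", user_text))
        # The entire history is sent on every call; the model itself
        # retains nothing between requests.
        reply = fake_llm(self.history)
        self.history.append(("assistant", reply))
        return reply

chat = Conversation()
chat.ask("My name is Ada.")
print(chat.ask("What is my name?"))  # the second call sees 3 prior messages
```

Because the history grows with each turn, real systems eventually truncate or summarise it to fit the model's context window—one reason long conversations can "forget" early details.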

While it overcomes some of the issues that plagued past chatbots, including hateful or racist responses, it also raises the question of how users can distinguish the bot’s output from human-written text. This is because ChatGPT’s text achieves the feel of a truthful response even when it is not based on facts.

Now, let the bashing begin! 

Answers are inaccurate 

Recently, Stack Overflow, the popular programming forum, banned all answers created by ChatGPT, citing a high degree of inaccuracy in the bot’s responses. While it clarified that this was a temporary policy, it reiterated that the problem lies not only in the inaccuracy of ChatGPT’s answers, but also in the way the bot phrases them.

Because of the nature of LLMs, particularly GPT-3.5—on which ChatGPT is built—the bot not only generates grammatically correct sentences in a formal tone but also makes them sound authoritative and forceful.

Stack Overflow said: “[ChatGPT’s answers] typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting.”

Gives unverified information 

On Twitter, various users, including celebrated AI expert Andrew Ng, posted about the vagueness of the chatbot’s responses to certain specific questions. In some instances, people even posted wrong answers given by the chatbot, which could mislead its users.

Users also pointed out that while the chatbot declines to respond to controversial or political questions, it still produces politically incorrect jokes.

Gary Marcus, New York University professor emeritus, has been sharing a host of examples on Twitter of the chatbot giving incorrect information, illustrating its limited ability to leverage facts.

Author John Warner argues that the chatbot makes up information, citing an instance in which it gave him a list of articles that did not exist.

Assertions backed by fake quotes

When asked to write a basic news story on the quarterly earnings of a tech major, the chatbot produces a convincing replica of the company’s financial results, covering areas where revenue rose and areas of potential growth. To make the story appear more authentic, it even supplements the article with a quote from the firm’s CEO.

This is because these language models have learned that news stories are typically backed by quotes and data, and so the chatbot replicates that behaviour too.

No citations and non-existent references

Despite producing detailed responses that appear credible and mimic human conversation, complete with exhaustive information, ChatGPT fails to reveal or list its sources—raising serious questions about how its output can be verified.

Further, when it does make such information available, it gives non-existent references. Take the case of one user who asked the chatbot for references on the mathematical properties of lists, only to find that every reference it provided did not exist!

Brilliantly dumb 

Gary Marcus, in his article ‘The Road to AI We Can Trust’, points out that GPT’s knowledge is in part about the specific properties of entities, and that it is unable to fully master abstract relationships.

If, for instance, you ask the chatbot for an article on a specific area of Bengaluru (Jayanagar), it randomly writes about a resident welfare association in the area known for its citizen initiatives. It does not go into the history of the area, its size or other relevant aspects.

How is it any different from Google Search?

A lot of people today are calling ChatGPT the ‘New Google’ or the ‘Google Killer’, which, to a certain extent, holds true—particularly when it comes to bizarre suggestions. Google, for example, suggests cancer as a response to any and every symptom, even when you search for a stomach ache. ChatGPT, by contrast, doesn’t surface such suggestions but instead sugar-coats misinformation. In other words, we can say that ‘ChatGPT is a glorified version of Google Search’, only much better.


Aparna Iyer
Aparna Iyer has covered various sectors spanning education, wildlife, culture and law for close to a decade. She now writes on technology and is keen to unearth its capability for public good.
