Why is Everyone Bashing ChatGPT?

‘ChatGPT is a glorified version of Google Search’.


Unfounded assumptions, bad advice, incorrect information: the biggest source of problems on the internet today is people blindly buying into hype. ChatGPT, which has taken the internet by storm recently, seems to be making it a lot easier for people to add to the chaos.

By now, ChatGPT has become synonymous with exhaustive, human-like responses to questions, whether it is drafting a contract, writing code or even a movie script. The temporarily free chatbot is already changing the way people search for information by answering intricate questions. Sometimes, ChatGPT’s understanding capabilities make you second-guess whether you are actually talking to a human. 

So then, why are people bashing ChatGPT? Before we get to that, let’s look at some of the positive aspects of this revolutionary chatbot. 

Primary among ChatGPT’s unique characteristics is memory. The bot can remember what was said earlier in the conversation and recount it to the user. This alone sets it apart from competing natural language solutions, which still work on a query-by-query basis and are yet to solve for memory.
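The article does not describe how this memory is implemented, but the general idea behind such conversational memory can be sketched as follows: the application keeps a running transcript and sends the full history back to the model with every new message. The `query_model` function below is a hypothetical placeholder, not an actual OpenAI API, and the whole snippet is a minimal illustration rather than ChatGPT’s real implementation.

```python
# Minimal sketch of conversational "memory": keep the whole transcript
# and resend it with every new user message. `query_model` is a
# hypothetical stand-in for a call to a text-generation service.

def query_model(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM service.
    return "model response to: " + prompt.splitlines()[-1]

class Conversation:
    def __init__(self):
        self.history = []  # list of (speaker, text) turns

    def ask(self, user_message: str) -> str:
        self.history.append(("User", user_message))
        # Flatten the full history into a single prompt so the model
        # can "remember" earlier turns when generating its reply.
        prompt = "\n".join(f"{speaker}: {text}" for speaker, text in self.history)
        reply = query_model(prompt)
        self.history.append(("Assistant", reply))
        return reply

chat = Conversation()
chat.ask("My name is Aparna.")
print(chat.ask("What is my name?"))  # earlier turns travel with the new question
```

Because the full transcript is re-sent each time, the model can answer follow-up questions that refer back to earlier turns, which is what query-by-query systems struggle to do.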

While it overcomes some of the issues that plagued past chatbots, including hateful or racist responses, it also raises questions about how users can tell the bot’s output apart from human-written text. This is because ChatGPT’s text achieves the feel of a truthful response even when it is not based on facts.

Now, let the bashing begin! 

Answers are inaccurate 

Recently, Stack Overflow, the popular programming forum, banned all answers created by ChatGPT, citing a high degree of inaccuracy in the bot’s responses. While it clarified that this was a temporary policy, it reiterated that the problem lies not only in the inaccuracy of ChatGPT’s answers but also in the way the bot phrases them.

Because of the nature of LLMs, particularly GPT-3.5, which was used to build ChatGPT, the bot not only generates grammatically correct sentences in a formal tone but also makes them sound authoritative and forceful. 

Stack Overflow said: “[ChatGPT’s answers] typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting.”

Gives unverified information 

On Twitter, various users, including celebrated AI expert Andrew Ng, posted about the vagueness of the responses given to certain specific questions. In some instances, people even posted wrong answers given by the chatbot that could mislead its users.

There were also instances where users pointed out that, although the chatbot refuses to respond to controversial or political questions, it does produce politically incorrect jokes.

Gary Marcus, professor emeritus at New York University, has been sharing a host of examples on Twitter of the chatbot providing incorrect information, showing how limited it is in its ability to leverage facts. 

Author John Warner argues that the chatbot makes up information, citing an instance where it gave him a list of articles that did not exist.

Assertions backed by fake quotes

When the chatbot is asked to write a basic news story on the quarterly earnings of a tech major, it produces a believable replica of the company’s financial results, the areas where revenue rose and the areas of potential growth. To make the article appear more authentic, it even supplements it with a quote from the firm’s CEO.

This is because these language models have learned that news stories are usually backed by quotes and data, so the chatbot replicates this pattern, even when it has to invent the quote.

No citations and non-existent references

Despite giving detailed responses that appear credible and mimic a human conversation, complete with exhaustive information, ChatGPT fails to reveal or list its sources, raising the alarming question of how its information can be verified. 

Further, even when it does make such information available, it gives non-existent references. Take the case of one user who asked for references dealing with the mathematical properties of lists and found that every single reference the chatbot provided did not exist.

Brilliantly dumb 

In his article ‘The Road to AI We Can Trust’, Gary Marcus points out that GPT’s knowledge is, in part, knowledge of specific properties of entities, and that it is unable to fully master abstract relationships.

For instance, if you ask the chatbot for an article on a specific area of Bengaluru (Jayanagar), it randomly writes about a resident welfare association in the area known for its citizen initiatives. It does not go into the history of the area, its size or other relevant aspects. 

How is it any different from Google Search?

A lot of people today are calling ChatGPT the ‘new Google’ or a ‘Google killer’, which holds true to a certain extent, particularly when it comes to bizarre suggestions. Google, for example, suggests cancer as a possibility for any and every symptom, even when you search for a stomach ache. ChatGPT does not throw up such suggestions, but it does sugar-coat misinformation. In other words, we can say that ‘ChatGPT is a glorified version of Google Search’, only much better.  


Aparna Iyer

Aparna Iyer has covered various sectors spanning education, wildlife, culture and law for close to a decade. She now writes on technology and is keen to unearth its capability for public good.