
You Are to Blame for ChatGPT’s Flaws

It is not just ChatGPT’s fault that it generates misinformation; it is also, and mostly, the fault of content creators and media firms.
In ChatGPT, We Trust

Should we trust ChatGPT? Bard and ChatGPT claim to give precise answers to all the world’s questions, until hallucination kicks in. They may be a major source of information for many, but at the same time they are criticised for biases and misinformation. Regulatory authorities and governments all over the world have been breaking a sweat over how to control the misinformation spread through these platforms.

During the US Senate hearing on AI oversight, OpenAI CEO Sam Altman said, “This [ChatGPT] is a tool that can generate content more efficiently than ever before. Here, the user can test the accuracy, change it if they don’t like it and get another version. But the content generated still spreads through social media, texts, or other similar ways.”

He explained that interacting with ChatGPT is a single-player experience: the user is working with a tool that generates content, and nothing is shared without their consent.

Altman, too, expressed concerns about the technology and called for regulations around it. But when asked about the impact such a technology could have on elections or on spreading misinformation, he agreed with the premise, yet argued that regulation should be framed through a different lens than the one applied to social media.

AI takes the fall for bad actors

Speaking of misinformation, users can sometimes force the chatbot to generate false information with certain prompts. Even in such cases, ChatGPT often refuses to produce content that could be hateful or false. So the blame appears to lie more with the user than with ChatGPT.


If someone wants to create a fake ad or false information about an election, they can do so regardless of ChatGPT or similar technology. Even before such AI models, the internet was rife with misinformation about COVID vaccines and presidential elections, splattered across Twitter and spreading like wildfire. The recent coverage of the fake photograph of an explosion at the Pentagon was another example of how vulnerable people are to misinformation on social media.

Similar photographs can be made using Photoshop, no AI required. The person who believes the photograph is real and spreads it on social media or news platforms is the one responsible for verifying its origin.

We are not saying that generating fake information is fine. But instead of vilifying the tool used to create it, punish the user who makes and spreads it. And the responsibility does not rest with the creator alone: the person who consumes content and shares it on social media platforms must also verify it. In certain cases, the sharer should be held more responsible for spreading misinformation than the creator, for content only becomes news when it gets a platform.

Blame the data

“As an AI language model, I cannot…” is a phrase you will get for a lot of prompts you input on ChatGPT. Even ChatGPT recognises that it cannot generate many things, much of which could be misinformation. Even when it does, it declares that the output “may not be true”. OpenAI puts a clear disclaimer on its website: ChatGPT may produce inaccurate information about people, places, or facts. Does any publishing website carry such a disclaimer? The company has also recently given users more control over their data to protect privacy.

There have been cases where ChatGPT made false accusations against people and is now facing lawsuits over them. In certain instances, it is also known to have made up anonymous sources, sounding profound while doing so.


You probably know plenty of people who do the same. And in any case, ChatGPT is trained on information freely available on the internet, the same internet that is filled with conspiracy theories, false information, and hateful language. OpenAI has done a commendable job of fine-tuning it not to spew garbage and to stay civil.

The model is fed hundreds of thousands of websites, many of them filled with confidently written lies and manipulated facts. No one can claim that all information on the internet is true and trustworthy. It would be interesting to see how Elon Musk’s chatbot turns out if fed on Twitter data.

The same goes for ChatGPT: you can choose to trust it and spread misinformation, even though OpenAI tells you not to.

Internet and ChatGPT Caught in a Cycle

In the end, as Yann LeCun put it, ChatGPT is just a text generator; a person can write the same misinformation on social media without using AI. And since Altman told the Senate that OpenAI already has models that can detect whether a text was generated by ChatGPT, the future looks safer for people concerned about AI spreading misinformation.

On a more concerning note, the internet is filling up with content generated by ChatGPT and similar models. A lot of it is easy to spot because people post the output without verifying or editing it; the phrase “As an AI language model” now appears on several websites, polluting the internet.

It’s a full cycle. ChatGPT was fed internet data, and now the internet is filling up with articles written by AI. With Google Bard connected to the internet in real time, and ChatGPT connected to Bing for real-time access, the problem could get worse: models being trained on the very data they generated.

This raises the question: is the recent development of connecting chatbots to the internet really a good idea? When ChatGPT had a knowledge cutoff of 2021, people initially trusted it, but over time that trust eroded somewhat.

Now that Google and Microsoft claim their models will improve with real-time internet access, users will start treating them as a source of information. Combining real-time information with hallucinating LLM chatbots could push a lot of misinformation directly to users.

Even then, the one to blame is the internet and its data, not ChatGPT or Bard. It is the data generated by humans that pollutes ChatGPT’s responses.


Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.


