Freaky ChatGPT Fails That Caught Our Eyes!

And it’s bad at math too!
Yesterday, we ran a piece listing the coolest things users can do with ChatGPT, a conversational model built on the GPT-3 API. While it answers in a “human-adjacent” manner, users have identified several flaws.

In 2021, Gary Marcus tweeted, “Let us invent a new breed of AI systems that mix awareness of the past with values that represent the future we aspire to. Our focus should be on building AI that can represent and reason about values, rather than simply perpetuating past data”. That capacity is still lacking in contemporary AI/ML models.

The base of ChatGPT, GPT-3, is two-and-a-half years old. The field is progressing every week, yet there are hardly any mainstream applications beyond Copilot. Even today, the models fail spectacularly at everything from 3-digit multiplication to ASCII art. San Francisco-based OpenAI has been upfront about its defects, including its potential to “produce harmful instructions or biased content”, and is still fine-tuning ChatGPT.

Here are six bizarre ChatGPT fails that caught our eyes!

The problem of bias

The ethical problems with AI are immense, but perhaps the most notable is bias. Bias in training data is an ongoing challenge in LLMs that researchers have been trying to address. For example, ChatGPT, currently trending on Twitter, has reportedly written Python programmes that rate a person’s capability based on their race, gender, and physical traits, in a manner that’s plainly discriminatory:

Not so logical after all

The chatbot lacks logical reasoning, and its ability to understand context is limited. Hence, the model fails to answer questions that most humans easily can.

Moreover, it lacks common knowledge.

https://twitter.com/neuro_tarun/status/1598357991031705600?s=20&t=D-AuhSUh_wAJbOk3UvY3Sg

Bad at math

ChatGPT should not do math or anything remotely related to it! It fails to explain mathematical theorems and keeps going in circles, repeating itself. The model can lie with as much confidence as it can tell the truth: ask it for the square root of 423894, and it will confidently give you the wrong answer.
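For arithmetic like this, there is no need to trust a language model at all. A minimal sketch of how to check such answers yourself with Python’s standard library (the specific numbers are just illustrative):

```python
import math

# The square root ChatGPT may get wrong takes one standard-library call.
n = 423894
root = math.sqrt(n)
print(f"sqrt({n}) = {root:.4f}")  # approximately 651.07

# The same kind of one-line check works for 3-digit multiplication,
# another task the article notes the model often fails at.
print(683 * 497)  # exact integer arithmetic, no hallucination possible
```

The point is that a calculator or interpreter gives a verifiable answer, while the model produces text that merely looks like one.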

Its moral compass is broken

The model is a moral relativist. ChatGPT’s lack of context could prove dangerously problematic when dealing with sensitive issues, like sexual assault.

Convincing but wrong 

The internet is excited about ChatGPT, but the danger is that you can only tell when it’s wrong if you already know the answer. When asked some basic information security questions, its answers sounded plausible but were made-up nonsense.

This is called “hallucination”: the system can start spewing convincing nonsense at any point, and as a user, you’re never sure whether any particular detail it outputs is correct.

https://twitter.com/bltphd/status/1599806815146893313?s=20&t=RXTAEG80wkuNRGUfegrvDg

It’s ‘harmful’ to any other Q&A website’s business model

The prime issue, Stack Overflow said in a post, is that while answers produced by ChatGPT have a high probability of being incorrect, they look like they might be good and are very easy to produce.

As a result, the company recently imposed a temporary ban, as ChatGPT answers are “substantially harmful” both to the site and to users looking for correct solutions.

https://twitter.com/0xabad1dea/status/1599717728981585922?s=20&t=nZ9ASESO81yBDFNatk-16g

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.