
What Not to Do with ChatGPT

From data leaks to misdiagnosis, ChatGPT is infamous for a lot of things


Coding, finance, HR, legal, politics, you name it: ChatGPT is open to questions and will take on tasks across all of these domains. However, with these far-from-perfect chatbots now in ubiquitous use comes the threat of misuse and unanticipated problems. With safety concerns rising and companies progressively putting safeguards in place, treading with caution is the right way ahead.

Here’s a look at some of the major blunders ChatGPT and other chatbots have made, and the things you should probably steer clear of.

Data Protection

With no control or clarity over what happens to the user data fed into the chatbot, there is huge potential for misuse. In a recent incident at Samsung’s semiconductor division, one employee pasted confidential source code into the chatbot to check it for errors, while another asked it for “code optimisation”. A third went further, sharing a recording of an internal meeting so the chatbot could turn it into notes. OpenAI now has access to Samsung’s confidential data, but what can or will be done with it is unknown. The incident has even led Samsung to consider building its own in-house AI model for employees.

Doctor-patient confidentiality goes straight out the window when a patient’s sensitive health information, such as diagnoses and disease histories, is entered into the system: a third party gains access to the data, and the confidentiality is breached. Concerns about data breaches in healthcare are as real as they come.

Data Leak

Last month, during a nine-hour outage of ChatGPT, personal and billing data belonging to roughly 1.2% of ChatGPT Plus subscribers, including names, payment addresses, and the last four digits of credit card numbers, was exposed to other users. OpenAI said in its blog that a bug in redis-py, the open-source Redis client library, was responsible for the mishap. With the integration of ChatGPT plugins, the security risk to user data only grows.
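OpenAI’s postmortem reportedly traced the leak to a caching bug in redis-py, where a request cancelled at the wrong moment could leave a shared connection in a bad state, so the next request on that connection received someone else’s data. The toy Python sketch below is a minimal illustration of that general failure mode, not redis-py’s actual code or API; the class and names are hypothetical. Replies are matched to requests purely by order, so an unconsumed reply on a reused connection gets handed to the wrong user.

```python
from collections import deque

class SharedConnection:
    """Toy stand-in for a pooled client connection that pipelines requests."""
    def __init__(self):
        self._replies = deque()

    def send(self, user, query):
        # The "server" answers immediately; the reply waits in the connection buffer.
        self._replies.append(f"response for {user}: {query}")

    def read_reply(self):
        # Replies are matched to requests purely by order, not by requester.
        return self._replies.popleft()

conn = SharedConnection()

# User A's request goes out, but the call is cancelled before the reply is read,
# so the reply stays queued on the shared connection.
conn.send("user_a", "GET billing_info")

# User B reuses the same pooled connection for their own request
# and is handed user A's data: a cross-user leak.
conn.send("user_b", "GET billing_info")
print(conn.read_reply())  # prints the response meant for user_a
```

The point of the sketch is simply that any client which multiplexes users over shared connections must tie replies to the requests that produced them; matching by order alone breaks down the moment a request is abandoned mid-flight.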

Medical Diagnosis 

Recently, a Belgian man took his own life after weeks of conversations with a chatbot named ELIZA. What started as an exchange about climate change ended in a personal tragedy for his family; the AI chatbot is said to have encouraged him to end his life in order to save the planet. The case also highlights how readily people are turning to AI bots for solace. A person struggling with health conditions can come to rely on AI to get through their problems instead of approaching a certified practitioner, and the easy availability of chatbots makes them the more tempting choice.

While there has been at least one case of ChatGPT helping diagnose a dog’s health condition and pointing to the correct treatment, a chatbot prone to hallucinations can just as easily misdiagnose.

Fake News

OpenAI recently landed in hot water after ChatGPT falsely named Brian Hood, mayor of Australia’s Hepburn Shire, as a guilty party in a foreign bribery scandal, when he was in fact the whistleblower who reported it. OpenAI has been given the chance to rectify the false claim; failing that, a defamation case will be filed against the company.

Then there are incidents of misrepresentation. Last month, The Guardian came across an article attributed to one of its journalists that the journalist had never written. The piece was nowhere to be found on the newspaper’s website, and further investigation revealed that ChatGPT had invented it.

With jailbreaks and prompt injections, the chatbot can be tricked into tasks that break OpenAI’s safety guidelines, and into giving out information that is bizarre and unrealistic.

Scatterbrain

Much like an innocent child, ChatGPT can, with the right manoeuvring, be fooled into giving forbidden responses. In one tweet, the chatbot was tricked into handing over a list of piracy websites.

It has its lighter moments, too. Author and AI strategist Vin Vashishta illustrated in a LinkedIn post how to coax bizarre responses out of the GPT-4 model: prompted to become “ReverseGPT”, the chatbot not only “RickRoll’d” us but kept following orders afterwards.

Silly, unreliable, prone to fakery, and at times scary, ChatGPT is anything but foolproof. It has plenty of uses, but we should stay wary of the chatbot’s blunders and the threats it poses.

Tread with caution! 


Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity for all things startups, business, and AI ensures there is always a fresh and insightful perspective to her reporting.