What Not to Do with ChatGPT

From data leaks to misdiagnosis, ChatGPT is infamous for a lot of things

Coding, finance, HR, legal, politics, you name it: ChatGPT is open to questions and tasks across them all. However, as these not-so-perfect chatbots become ubiquitous, so do the threats of misuse and unanticipated problems. With safety concerns rising and companies progressively taking countermeasures, treading with caution is the right way ahead. 

Here’s a look at some of the major goof-ups that ChatGPT and other chatbots have made, and the things you should probably steer clear of.  

Data Protection

With no control or clarity over what happens to the data users feed into the chatbot, the potential for misuse is huge. In a recent incident at Samsung’s semiconductor division, one employee pasted confidential source code into the chatbot to check it for errors, while another asked it for “code optimization”. A third even shared a recording of an internal meeting so the chatbot could convert it into notes. OpenAI now has access to Samsung’s confidential data, but what can or will be done with it is not known. The episode has reportedly led Samsung to consider building its own AI-based model for employees. 


Client confidentiality goes straight out the window when a patient’s sensitive health information, such as diagnoses and diseases, is entered into the system: doctor-patient confidentiality is breached, and a third party gains access to the data. Concerns about data breaches in healthcare are as real as can be. 

Data Leak

Last month, during a nine-hour outage of ChatGPT, the personal and billing data of 1.2% of its customers, including names, addresses, and the last four digits of their credit cards, was leaked to other customers. The company said in its blog that a bug in redis-py, the open-source Redis client library, was responsible for the mishap. With the integration of ChatGPT plugins, the security risk to user data only grows.
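The failure mode OpenAI described, one user receiving another user’s cached response, is a classic shared-connection bug. The toy sketch below is a hypothetical illustration, not redis-py’s actual code: on a pipelined connection, replies are read back in send order, so if one request is cancelled before its reply is consumed, the next reader gets the leftover reply meant for someone else.

```python
from collections import deque

class PipelinedConnection:
    """Toy model of a shared, pipelined connection: requests go out in
    order and replies are read back in order. This is NOT redis-py,
    just an illustration of the class of bug behind the leak."""

    def __init__(self):
        self._replies = deque()

    def send(self, user, query):
        # The server's reply for this request is queued on the connection.
        self._replies.append(f"data for {user}: {query}")

    def read_reply(self):
        # Whoever reads next gets the oldest unread reply,
        # regardless of who actually sent that request.
        return self._replies.popleft()

conn = PipelinedConnection()

# User A's request goes out, but is cancelled before the reply is read,
# leaving A's reply sitting unread on the shared connection.
conn.send("alice", "billing info")

# User B now issues a request on the same connection and reads a reply,
# receiving alice's leftover data instead of their own.
conn.send("bob", "chat history")
leaked = conn.read_reply()
print(leaked)  # data for alice: billing info
```

The fix for this class of bug is to tear down (or drain) a connection whenever a request on it is cancelled mid-flight, so stale replies can never be paired with the wrong caller.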

Medical Diagnosis 

Recently, a Belgian man died by suicide after weeks of conversations with a chatbot named ELIZA. What started as a conversation about climate change ended in personal tragedy for his family; the AI chatbot is said to have encouraged him to end his life in order to save the planet. The case also highlights how people are increasingly turning to AI bots for solace. A person struggling with health conditions can unknowingly rely on AI to overcome their problems instead of approaching a certified practitioner, with the easy availability of chatbots making them the preferred choice. 

While there has been an instance of ChatGPT helping diagnose a dog’s health condition, which led to the correct treatment, a chatbot prone to hallucinations can just as easily misdiagnose. 

Fake News

OpenAI recently landed in a soup for falsely implicating Brian Hood, mayor of Australia’s Hepburn Shire, in a foreign bribery scandal. OpenAI has been given the chance to rectify ChatGPT’s false claim, failing which a defamation case will be filed against the company. 

Then there are incidents of misattribution. Last month, The Guardian noticed an article attributed to one of its journalists that the journalist had never written. The article could not be found on the paper’s website, and on further investigation it turned out that ChatGPT had invented the piece entirely.  

With jailbreaks and prompt injections, the chatbot can be tricked into performing tasks that break OpenAI’s safety guidelines, and can even be made to give out bizarre and unrealistic information. 


Much like tricking an innocent child, with the right manoeuvring ChatGPT can be fooled into giving forbidden responses. In one demonstration shared on Twitter, the chatbot was tricked into giving a list of pirated websites.   

It even has its own fun. Author and AI strategist Vin Vashishta illustrated in a LinkedIn post a trick for getting bizarre responses out of the GPT-4 model: prompted to become “ReverseGPT”, the chatbot not only “RickRoll’d” him but continued to follow orders. 

Silly, unreliable, fake, and at times scary, ChatGPT is far from foolproof. While it does have multiple uses, we should stay wary of the chatbot’s goof-ups and the threats it poses. 

Tread with caution! 

Vandana Nair
With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity about startups, businesses, and AI technologies ensures that there’s always a fresh and insightful perspective to her reporting.
