ChatGPT Has Its Eyes On Your Data

The data that you feed into ChatGPT is saved on OpenAI's servers and might be used to the company's benefit.

Recently, engineers in Samsung's semiconductor division inadvertently leaked critical information while using ChatGPT to quickly correct errors in their source code. In just under a month, there have been three recorded incidents of employees leaking sensitive information via the tool.

In one of the incidents, an employee asked ChatGPT to optimise test sequences for identifying faults in chips. In another case, an employee used the tool to create a presentation from their meeting notes.

Coincidentally, the leaks were reported just three weeks after Samsung lifted a previous ban on employees using ChatGPT over fears around this very issue. Samsung has now cautioned its employees against using the chatbot, since data that has already been submitted cannot be retrieved from OpenAI's servers.


Though the chatbot can increase efficiency and save considerable resources, these incidents have brought to light big risks around data sharing. The risks are not confined to intentional leaks or cyber breaches; they can also stem from routine employee use of such tools.

Data fumbled

Recently, a bug leaked ChatGPT users' information, including titles from their chat history as well as personal and billing data. During a nine-hour outage of ChatGPT on March 20, OpenAI also notified 1.2% of its ChatGPT Plus subscribers that another user might have seen their billing information, including their first and last name, billing address, credit card type, credit card expiration date, and the last four digits of their credit card number.

An internal investigation by OpenAI later revealed that a bug in redis-py, the open-source Redis client library, was responsible for the leak.

OpenAI’s CEO and co-founder Sam Altman tweeted, “we had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating.

“(A) small percentage of users were able to see the titles of other users’ conversation history,” he added.

Even for subscribers of its premium ChatGPT Plus tier, OpenAI uses conversations to train its models by default; users who do not want their data stored for training have to actively opt out. Even then, the data would be deleted only after a month.

So, the data that you feed into ChatGPT is saved on OpenAI's servers and might be used to the company's benefit, "to develop new programs and services" in its own words, or passed on further to Microsoft.

The stream of incidents raises the alarm on the risks that accompany the efficiency such tools deliver, and the glaring question is how to mitigate the potential dangers of using them in workplaces that routinely handle sensitive data.

Regulation or ban?

ChatGPT was hit with a temporary ban in Italy last month on the grounds that the chatbot does not comply with the EU's General Data Protection Regulation, which guarantees the 'right to be forgotten'. Presently, there is no mechanism for individuals to request the removal of their data from a machine learning system once it has been used to train the model.

This past week, the Indian government also said that it has evaluated ethical concerns related to AI, such as bias and privacy, and is taking measures to develop a strong regulatory framework for the AI space, but has no plans to introduce laws yet.

However, OpenAI has put the onus on businesses to address these situations. For instance, Samsung has now chosen to develop its own in-house AI for internal employee use, while limiting the length of employees' ChatGPT prompts to a kilobyte, or 1,024 characters of text.
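A cap like Samsung's is straightforward to enforce on the client side before a prompt ever leaves the corporate network. The sketch below is illustrative, not Samsung's actual tooling; the function name is an assumption, and only the reported one-kilobyte limit comes from the article:

```python
# Illustrative pre-check mirroring Samsung's reported policy: block any
# ChatGPT prompt larger than one kilobyte before it is sent anywhere.
# The function name and structure are assumptions, not Samsung's code.

MAX_PROMPT_BYTES = 1024  # the 1 KB cap reported for Samsung employees

def is_prompt_allowed(prompt: str) -> bool:
    """Return True if the prompt fits within the policy limit."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES
```

Measuring the UTF-8 byte length rather than the character count keeps the check aligned with a literal kilobyte limit even for non-ASCII text.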

Another way for companies to steer clear of this conundrum is to use the ChatGPT API instead of the consumer tool. The API is a commercial service, and under OpenAI's policy, data submitted through it is not used to train its models. Users of the consumer product can also actively opt out of data collection via a form OpenAI has linked in its terms of service.
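For teams weighing the API route, a call to the ChatGPT API is just an authenticated HTTP POST. The sketch below only assembles the headers and JSON body, using the publicly documented chat-completions endpoint and the gpt-3.5-turbo model; the helper's name is an assumption, and actually sending the request is left to the caller:

```python
import json

# Publicly documented endpoint for the ChatGPT (chat completions) API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, api_key: str) -> tuple[dict, str]:
    """Assemble headers and JSON body for a ChatGPT API call.

    Nothing is transmitted here; the caller decides whether and how
    to POST it (e.g. with the `requests` library).
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body
```

Keeping request construction separate from transmission also gives a natural place to bolt on checks, such as a prompt-length or sensitive-data filter, before anything reaches the network.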

But essentially, other companies have been left with little choice but to draw up their own policies and guidelines to protect themselves from the next data leak.

Shyam Nandan Upadhyay
Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.
