
OpenAI Opens Up About AI Safety

OpenAI’s Answer to the ‘Open Letter to Pause Giant AI Experiments’

Amid rising concerns around the use of GPT-4, OpenAI has finally laid out its approach to AI safety, detailing how it looks to build and deploy safe AI systems.

In its blog post, the company said that a practical approach to addressing AI safety concerns is to dedicate more time and resources to researching effective mitigation and alignment techniques and testing them against real-world abuse.

Further, it said that improving AI safety and capabilities should go hand in hand. OpenAI believes its best safety work to date has come from working with its most capable models, as they are better at following users’ instructions and easier to steer or ‘guide’. The company also said it will be increasingly cautious in creating and deploying more capable models, and will continue to enhance safety precautions as its AI systems evolve.

The latest development comes against the backdrop of more than 11,000 people signing an open letter calling for a six-month pause on giant AI experiments, particularly the training of models more powerful than GPT-4. Several countries are also moving against ChatGPT: Italy recently banned it over privacy concerns, and others, including Spain, may follow.

OpenAI said that it waited over six months to deploy GPT-4 to better understand its capabilities, benefits, and risks, and believes it may sometimes be necessary to take even longer to improve an AI system’s safety. It also said that policymakers and AI providers will need to ensure that AI development and deployment are governed effectively on a global scale, so that no one cuts corners to get ahead. “This is a daunting challenge requiring both technical and institutional innovation, but it is one that we are eager to contribute to,” said OpenAI.

OpenAI looks to take a collaborative approach, fostering open dialogue among stakeholders to create a safe AI ecosystem. It believes this requires extensive debate, experimentation, and engagement, including on the bounds of AI system behaviour.

OpenAI’s Limitations

OpenAI said that there is a limit to what it can learn in a lab, and that it works hard to prevent foreseeable risks before deployment. The company said it cannot predict all of the beneficial ways people will use its technology, nor all of the ways people will abuse it.

“That is why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time,” said OpenAI, adding that it cautiously and gradually releases new AI systems to a steadily broadening group of people while closely monitoring its API partners.

OpenAI also claimed that it does not permit its technology to be used to generate hateful, violent, harassing, or adult content, among other categories. The company said GPT-4 is 82% less likely than GPT-3.5 to respond to requests for disallowed content.

It also said that it has established a robust system to monitor for abuse. Citing child safety as an example, the company said that when users try to upload child sexual abuse material to its image tools (DALL·E 2), it blocks the content and reports it to the National Center for Missing and Exploited Children.

Privacy Concerns 

Many users and companies are worried about the privacy risks associated with the use of ChatGPT. Recently, Samsung workers unwittingly leaked top-secret data while using the platform to help them with tasks. There are also many questions about what OpenAI does with user data.

OpenAI claimed that it does not use data to sell its services, advertise, or build profiles of people. “We use data to make our models more helpful for people,” it added, citing how ChatGPT improves through further training on the conversations people have with it.

Further, the company said that it wants its models to learn about the world, not private individuals. It is working to remove personal information from its training datasets where feasible, fine-tuning models to reject requests for the personal information of private individuals, and responding to requests from individuals to delete their personal information from its systems.



Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.
