OpenAI’s Greg Brockman on AI Safety, Critics Remain Skeptical 

OpenAI’s second statement on AI safety in less than a week is vague and continues to avoid the “how” of safety.

Last week, when OpenAI released an elaborate version of its take on AI safety, the announcement did not go down well. It addressed nothing concrete, and the surface-level statement was more fluff than substance. Yesterday, in an apparent attempt to reinforce that announcement, Greg Brockman, President and Co-Founder of OpenAI, spoke about “safety”, and once again his remarks offer nothing new.

Safety and Alignment

Greg Brockman emphasised the testing and alignment of GPT models. Reiterating earlier statements, he said that GPT-4 was tested for over six months before deployment and was built on years of alignment research done “in anticipation of models like GPT-4.”

The company will continue to increase its safety precautions, with the goal of releasing each successive model better aligned than the last. GPT-3 was deployed without any special alignment, which was addressed in subsequent models: GPT-3.5 was “aligned enough” to be deployed in ChatGPT, and OpenAI claims GPT-4 performs much better on safety metrics than GPT-3.5.


Amid rising concerns over AI development, countries banning ChatGPT, and the recent announcement that the Biden administration is contemplating rules for ChatGPT to curb misinformation and false propaganda, the focus has shifted to government policy. Italy, the first European country to ban ChatGPT, has now given OpenAI an opportunity to meet its demands in order to have the ban revoked.

Brockman says that “powerful training runs” should be reported to governments and that “dangerous capability testing” should be required. Though he accepts governance as part of large-scale compute usage, safety standards, and regulation, he maintains that the details should “adapt over time” as the technology evolves.

Brockman’s statement also tries to address the ominous predictions shared by AI experts. To avoid unspotted prediction errors, he argues, technology should have “early and frequent contact” with reality as it is iteratively developed, tested, and deployed. A “continuum of incrementally-better AIs”, he says, allows safety checks to be better placed than “infrequent major model upgrades” would.

The announcement ends with a vague statement about how transformative change in AI is a cause for both “optimism and concern”. Overall, it reads like a generic attempt to sound invested in AI safety, and critics have said as much. Serge Toarca has questioned the specifics of how the company intends to achieve alignment in its models, given that the models are tested only on their output. With GPT-4 still able to be “jailbroken”, the problem persists.

Another user, futurist and author Theo, replied to Brockman’s statement, pointing out that the system’s current guardrails still fail to protect people.

Vandana Nair
As a rare breed of engineering, MBA, and journalism graduate, I bring a unique combination of technical know-how, business acumen, and storytelling skills to the table. My insatiable curiosity for all things startups, businesses, and AI technologies ensure that I'll always bring a fresh and insightful perspective to my reporting.
