Last week, when OpenAI released an elaborate statement of its approach to AI safety, the announcement did not go down well. It addressed nothing concrete, and on the surface was more fluff than substance. In an apparent attempt to reinforce that announcement, Greg Brockman, President and Co-Founder of OpenAI, yesterday spoke about “safety” — and again offered nothing new.
Safety and Alignment
Greg Brockman emphasised the testing and alignment of GPT models. Reiterating what had already been said, he noted that GPT-4 was tested for over six months before deployment and was built on years of alignment research conducted “in anticipation of models like GPT-4.”
The company says it will continue to strengthen its safety precautions, with the goal of releasing each model as well-aligned as possible. GPT-3 was deployed without any special alignment work, which was addressed in subsequent models: GPT-3.5 was deemed “aligned enough” to be deployed in ChatGPT, and GPT-4 is claimed to perform much better than GPT-3.5 on safety metrics.
Amid rising concerns over AI development and countries banning ChatGPT, the recent news that the Biden administration is contemplating rules for ChatGPT to check misinformation and false propaganda has brought government policy into focus. Italy, the first European country to ban ChatGPT, has now given OpenAI an opportunity to meet its demands in order to have the ban revoked.
Brockman said that “powerful training runs” should be reported to governments and that “dangerous capability testing” should be required. While he accepts that governance should cover large-scale compute usage, safety standards, and regulation, he argues that the details should “adapt over time” as the technology evolves.
Brockman’s statement also tries to address the ominous predictions shared by AI experts. To catch prediction errors that would otherwise go unnoticed, he argues, the technology should have “early and frequent contact” with reality as it is iteratively developed, tested, and deployed. By creating a “continuum of incrementally-better AIs”, safety checks can be applied more effectively than with “infrequent major model upgrades.”
The announcement ends with a vague statement that transformative change in AI is a cause for both “optimism and concern”. Overall, it reads like a generic attempt to sound invested in AI safety, and critics have pointed out as much. Serge Toarca, CEO of parsehub.com, has questioned the specifics of how the company intends to achieve alignment in its models when they are tested only on their outputs. With GPT-4 still able to be “jailbroken”, the problem persists.
Another user, futurist and author Theo, replied to Brockman’s statement that the system, with its current guardrails, still cannot protect people.