OpenAI Inches Closer to AGI, Reduces Hallucinations

OpenAI’s new process-supervision training is said to improve mathematical reasoning with human-like thinking and to reduce hallucinations. Is this a step closer to AGI?

A math teacher’s keen interest in checking the steps taken to solve a problem, rather than just the result, loosely forms the basis of OpenAI’s new training approach. The company announced a new technique for training models through process supervision, rewarding each step of correct reasoning, as opposed to rewarding only the correct final result via outcome supervision.

The output, the company claims, would be a model with fewer hallucinations and better alignment. OpenAI specifically calls out mitigating hallucinations as a crucial step towards ‘building aligned AGI’, but would any of these new training methods inch it closer to AGI status?


Hallucinations At Bay

OpenAI describes two ways to train models to detect hallucinations: process supervision, which provides feedback on each individual reasoning step, and outcome supervision, where feedback is based only on the final result. The company claims to have improved mathematical reasoning with the former. Rewarded at each correct step, the model is said to mimic ‘human reasoning’ while solving a mathematical problem.
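To make the distinction concrete, here is a minimal, hypothetical sketch of how the two feedback schemes assign credit to a multi-step solution. The steps, labels, and reward values below are invented for illustration; OpenAI’s actual reward models are neural networks trained on human step-level annotations, not rule-based functions like these.

```python
# Hypothetical sketch: how outcome vs process supervision assign rewards.
# All values here are illustrative, not OpenAI's actual training setup.
from typing import List

def outcome_rewards(steps: List[str], final_answer: str, correct_answer: str) -> List[float]:
    """Outcome supervision: one signal, based only on the final result,
    spread over every step regardless of which step went wrong."""
    reward = 1.0 if final_answer == correct_answer else 0.0
    return [reward] * len(steps)

def process_rewards(step_labels: List[bool]) -> List[float]:
    """Process supervision: a separate reward per reasoning step, using
    (here, hypothetical) human judgments of each step's correctness."""
    return [1.0 if correct else 0.0 for correct in step_labels]

# A two-step toy solution where the model slips on the second step.
steps = ["48 / 2 = 24", "24 + 5 = 30"]   # step 2 is wrong (24 + 5 = 29)
labels = [True, False]                    # human step-level labels

print(outcome_rewards(steps, final_answer="30", correct_answer="29"))  # [0.0, 0.0]
print(process_rewards(labels))                                          # [1.0, 0.0]
```

The step-level signal is what enables precise credit assignment: the outcome-supervised reward cannot tell the model which step failed, while the process-supervised reward pinpoints it.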

With this emphasis on hallucinations, the company continues its claims of making its models more robust, and it is not alone: companies across the industry are actively working on reducing hallucinations. Recently, NVIDIA released NeMo Guardrails, an open-source toolkit that helps make LLM-based applications accurate, appropriate, and secure. Hallucinations remain a persistent problem with chatbots, often making them behave illogically and generate misinformation or biased output, and OpenAI is working on making its models better on this front.
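For context, using the NeMo Guardrails toolkit follows roughly the quick-start pattern sketched below; the ./config directory holding the YAML and Colang rail definitions is assumed here, and details may differ across versions.

```python
# A rough sketch of NeMo Guardrails usage, following the project's
# documented quick-start pattern. The ./config directory (YAML + Colang
# rail definitions) is assumed and must be created separately.
from nemoguardrails import LLMRails, RailsConfig

# Load the rail definitions that constrain what the LLM may say.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The toolkit intercepts the conversation and applies the rails
# before the underlying LLM's answer is returned.
response = rails.generate(messages=[
    {"role": "user", "content": "What is process supervision?"}
])
print(response["content"])
```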

With the new training method, the company hopes to keep hallucinations in check, believing that a process-oriented method involving feedback at each step will rein in the irrational outputs generated by chatbots.

Alignment — Closer to AGI? 

OpenAI’s reference to ‘building aligned AGI’ hints at the company’s long-term plans for achieving it. Looking back, Sam Altman has made multiple mentions of AGI and how the future will look with it. A few months ago, he laid out an elaborate AGI roadmap for OpenAI in which its dangers were called out. The company believes AGI could be misused, with grave consequences for society; however, since its potential and benefits are far-reaching despite these risks, it intends to develop the technology in a ‘responsible way’. AI expert Gary Marcus, for his part, predicts that AGI is not coming anytime soon.

It is interesting to note that Altman’s stance on AGI and its development is not clear-cut. In yesterday’s tweet, Altman seemingly downplayed the risk of AGI, predicting that what AGI will bring is ‘a much faster rate of change’. He believes that with AGI the future will unfold similarly to one without it, the difference being the speed at which things unfold: “everything happens much faster”.

Ironically, Sam Altman, along with AI scientists Geoffrey Hinton, Yoshua Bengio, and many others, signed a statement a few days ago calling for safeguards against the threat of extinction posed by AI, which it considers on par with nuclear war. If any action is taken on it, the question that arises is: how far will OpenAI go in building more advanced models on the path to AGI?

The recent statement is a continuation of the open letter signed two months ago by over 31,000 people, including Elon Musk, Gary Marcus, and other tech experts, urging a pause on advanced AI models; interestingly, that letter was not signed by Sam Altman. Though Altman confirmed a month ago that the company would not work on building its next, more capable model, GPT-5, and would instead focus on the safety features of its existing models, his constant swaying on matters pertaining to AGI threats, and his downplaying of their scope, makes it difficult to gauge where the company is headed.


The company, often criticised for data security threats and privacy concerns, is fighting hard to prove that ChatGPT is a foolproof chatbot. It is now also working on democratising AI by offering grants to those who can propose the best methods for creating an AI regulatory framework, again with the hope of improving the system and appearing compliant to the world.

Vandana Nair
As a rare breed of engineering, MBA, and journalism graduate, I bring a unique combination of technical know-how, business acumen, and storytelling skills to the table. My insatiable curiosity for all things startups, businesses, and AI technologies ensures that I'll always bring a fresh and insightful perspective to my reporting.
