GPT-4: Beyond Magical Mystery

The OpenAI CEO believes that by ingesting human knowledge, the model is acquiring a form of reasoning capability that could be additive to human wisdom in some senses.

“We’ll look back at it and say it was a very early AI and it’s slow, it’s buggy, it doesn’t do a lot of things very well,” said Sam Altman, CEO of OpenAI, on a recent podcast with Lex Fridman. “But,” continued Altman, “neither did the very earliest computers, and they still pointed a path to something that was going to be really important in our lives, even though it took a few decades to evolve.”


He was talking about ‘GPT-4’, the latest in OpenAI’s lineup of large language models, which has left many amazed by what it can do, particularly after the recent addition of plugins to the platform, a move many have called AI’s iOS App Store moment.

However, when Fridman asked which moment history would mark, a couple of years from now, as the turning point for the mass adoption of AI, Altman said ChatGPT would be the one.

“It’s not like we could say here was the moment where AI went from not happening to it being a thing,” said Altman. “If I had to pick some moment from what we’ve seen so far, I’d sort of pick ChatGPT.”

According to Altman, what made ChatGPT popular with the masses was its ease of use, RLHF, and the swift interface.

Why is RLHF so important?

Fridman said that reinforcement learning from human feedback (RLHF) is the secret ingredient that elevates the performance of machine learning models.

Taking the conversation further, Fridman said that while models such as ChatGPT can be trained on vast amounts of text data, they often lack practical applicability when tested. “Though successful in evaluations and tests, the base model is not very useful in practice,” said Altman.

“However, RLHF,” continued Altman, “which involves incorporating human feedback into the model’s training, can significantly improve its usability.” 

He further explained that the simplest form of RLHF involves presenting two outputs to a human and asking which one is better, and then using that feedback to train the model using reinforcement learning techniques. This process allows the model to learn from human preferences and adapt its outputs accordingly. “In my opinion, RLHF is a highly effective method for enhancing the performance of machine learning models,” said Altman.
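
To make that concrete, here is a minimal, illustrative sketch in PyTorch of the pairwise-preference step Altman describes: a human marks which of two outputs is better, and that preference trains a reward model whose scores can later steer the language model through reinforcement learning. The names used here (RewardModel, preference_step) and the random stand-in embeddings are hypothetical, not OpenAI’s actual implementation.

```python
# Illustrative sketch only: a toy reward model trained on pairwise human
# preferences (the "which of these two outputs is better?" signal).
# Names and shapes are assumptions, not OpenAI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardModel(nn.Module):
    """Maps an (already embedded) response to a scalar reward score."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)


def preference_step(model, optimizer, chosen_emb, rejected_emb):
    """One training step on a batch of preferences where 'chosen' beat 'rejected'."""
    reward_chosen = model(chosen_emb)
    reward_rejected = model(rejected_emb)
    # Pairwise (Bradley-Terry style) loss: push the chosen response's score
    # above the rejected one's.
    loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random tensors stand in for response embeddings from a language model.
    chosen = torch.randn(8, 128)
    rejected = torch.randn(8, 128)
    print(preference_step(model, optimizer, chosen, rejected))
```

The trained reward model is then typically used as the objective for a reinforcement learning algorithm (such as PPO) that fine-tunes the language model itself, which is the “using that feedback to train the model” part of Altman’s description.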

Essentially, as Fridman later pointed out, a large language model trained on a large dataset captures a kind of wisdom contained within the internet, and it becomes much more impressive once a certain degree of human guidance is added on top of it.

By incorporating human guidance into the training process, the model can better understand human preferences, making it more efficient at providing accurate and relevant outputs. “The feeling of alignment between the user and the model is crucial in making the model more usable and effective,” said Altman. 

GPT-4: Human Wisdom 

The conversation took a different turn when Fridman asked him whether there is a growing understanding within OpenAI of the “something” that makes GPT models so powerful, or whether it is still a kind of beautiful, magical mystery.

Altman believes that there are many different evaluation metrics that can be used to measure the performance of a model, both during and after training. “However,” said Altman, “the most important metric is how useful and impactful the model’s outputs are for people.”

“This includes the value and utility it provides as well as the delight it brings, and the ways it can help create a better world through new science, products, and services,” added Altman. 

Additionally, he mentioned that while researchers are gaining a better understanding of how GPT models behave for specific inputs, there is still much they don’t fully understand. “We can’t always explain why the model makes certain decisions over others, although we are making progress in this area,” explained Altman.

“For example, creating GPT-4 required a lot of understanding but we may never fully comprehend the vast amount of data that it compresses into a small number of parameters,” said Altman. 

According to him, GPT-4 can be considered a repository of human knowledge. Moreover, Altman believes that the exciting aspect of GPT models is their ability to reason, to some extent. “While there may be disagreements about what constitutes reasoning, many users of the system acknowledge that it is doing something in this direction, and that’s remarkable,” he said.

The OpenAI CEO believes that by ingesting human knowledge, the model is acquiring a form of reasoning capability that could be additive to human wisdom in some senses. He also said that it can be used for tasks that don’t require any wisdom at all. 

“In the context of interactions with humans, GPT models can appear to possess wisdom, especially when dealing with multiple problems and continuous interactions,” said Altman.  

Lokesh Choudhary
Tech-savvy storyteller with a knack for uncovering AI's hidden gems and dodging its potential pitfalls. 'Navigating the world of tech', one story at a time. You can reach me at: lokesh.choudhary@analyticsindiamag.com.
