At an event held at MIT yesterday evening, Sam Altman confirmed to Lex Fridman that OpenAI is not working on GPT-5. He also addressed the open letter calling for a six-month pause on AI research and spoke about the company's vision for safety.
Speaking via video interview, Altman noted that an earlier version of the letter claimed OpenAI was training GPT-5. He confirmed that the company is not working on the next version and won't be for "some time." Instead, he said, OpenAI is still prioritizing safety issues for the current GPT-4 model, issues he considers important and says the letter "totally left out."
Not In Sync with Open Letter
Altman also emphasized that the letter misses most of the technical nuance about "where we need to pause." In the wake of rising concerns around AI, he reiterated the company's outlook on AI safety, saying he believes that moving with caution and "increasing rigor" on safety issues is really important.
Altman also believes the technology is going to impact all of us, and that engaging everyone in the discussion is important. Putting a "deeply imperfect" system out into the world for people to experience, and to think through its upsides and downsides, is "worth the trade-off," he said. Though that means the company will sometimes embarrass itself in public and change its mind frequently, it will continue on its current course, since a big part of its goal is to "get the world to engage with this" and eventually understand what future we all want.
This is the third statement OpenAI has released on the safety of its models in the past week. A week ago, the company published an elaborate blog post on its AI safety measures, which skimmed the surface without addressing deeper issues. Following that announcement, Greg Brockman released a statement yesterday on how the company is focusing on safety and regulation, again without addressing the "how" of it.
Gary Marcus was quick to react to the interview, pointing out that the pause letter in fact "encouraged" some AI research, while others questioned the truth behind GPT-5's actual status.