After much speculation, OpenAI has finally announced GPT-4, potentially the next breakthrough in AI after ChatGPT. According to OpenAI, GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5, the large language model (LLM) behind ChatGPT.
However, researchers are not happy with the technical paper on GPT-4 released by OpenAI. On the very second page of the lengthy report, OpenAI claims, “this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
Why Are Researchers Unhappy?
Many researchers and data scientists were left frustrated after reading through the nearly 100-page GPT-4 paper, which fails to provide even the bare minimum of technical detail. OpenAI has not even disclosed the parameter count for GPT-4.
“In scientific papers it is generally expected that these details are mentioned because the papers are supposed to be reproducible,” data scientist Dr. Valeriy Manokhin told AIM.
AI researcher Sebastian Raschka, while sharing his disappointment, said, “We don’t learn anything about the model except ‘GPT-4 is a Transformer-style model pre-trained to predict the next token in a document, using both publicly available data (such as internet data) and data licensed from third-party providers.’”
Mark Tenenholtz, data scientist at Stealth, said the GPT-4 paper is highly disappointing, since it contains no discussion of the method, training stabilisation, and so on.
Another AI researcher points out, “What knowledge do researchers even gain from this? Nothing. It’s super frustrating.”
Emily M. Bender, Professor of Linguistics at the University of Washington, points out that the report makes no mention of GPT-4’s carbon footprint either.
Please @OpenAI change your name ASAP. It's an insult to our intelligence to call yourself "open" and release that kind of "technical report" that contains no technical information whatsoever. https://t.co/WdXAq4a309
— David Picard (@david_picard) March 14, 2023
Is It Even a Scientific Paper?
When a scientific paper is published, details such as the architecture, hardware, training compute, dataset construction, and training method are typically disclosed. This is an important step because, beyond reproducibility, it enables fair comparison, encourages competition, and drives innovation.
Ben Schmidt, VP of Information Design at Nomic AI, says the paper is against the very principles of OpenAI. “The 98 page paper introducing GPT-4 proudly declares that they’re disclosing ‘nothing’ about the contents of their training set.”
The selection of training data can perpetuate historical biases and cause various forms of harm. Schmidt believes that in order to mitigate such harm and make informed judgments about where a model should not be applied, it is crucial to understand the kinds of biases embedded in the data. However, with OpenAI not disclosing this information, that is now impossible.
AI critic Gary Marcus, in his newsletter, called the recent developments a step backwards for science. “It sets a new precedent for pretending to be scientific while revealing absolutely nothing,” he said.
“Yesterday, AI became about corporate self interests. A divorce from the broad AI research field that made these companies even possible,” tweeted William Falcon, CEO of Lightning AI.
Pedro Domingos, Professor at the University of Washington, points out that the GPT-4 paper does not even mention the names of its authors. Instead, “It has a long credits roll at the end, like a Hollywood movie,” he said.
OpenAI also tested GPT-4 on some of the top professional and academic exams in the US designed for humans. OpenAI claims GPT-4 exhibits human-level performance on the majority of these exams.
(Source: GPT-4 paper)
However, Bender believes, “They seem to think this is a point of pride, but it’s actually a scientific failure. No one has established the construct validity of these exams vis-a-vis language models.”
A Corporate Approach
Over time, OpenAI has been behaving more like a corporation, deviating further and further from the core principles on which the organisation was founded in 2015.
Elon Musk, who invested in OpenAI in 2015, made the same point. “I’m still confused as to how a non-profit to which I donated USD 100 million somehow became a USD 30 billion market cap for-profit,” he tweeted.
OpenAI’s partnership with Microsoft further reinforces the claim that the company is becoming increasingly product oriented. Yannic Kilcher, CTO at DeepJudge, argues that when the GPT-4 paper calls the model safe, it means safe to be used as a product, not safe for humanity or free from biases.
In fact, in the paper itself, OpenAI cites the competitive landscape and the safety implications of large-scale models like GPT-4 as its reasons for withholding information about the model.