With its recent breakthrough pre-trained language model GPT-3, OpenAI has revolutionised the idea of machines writing code like humans — a step towards artificial general intelligence. Not only is it being used for writing code, but also for writing blogs and creating stories, websites, and apps. In fact, in recent news, a college student created an entirely fake blog using GPT-3, which created a massive buzz in the ML community and has also been trending on Hacker News.
Such news makes it clear that GPT-3, which was trained with a massive 175 billion parameters, is a milestone for the field of machine learning. Having said that, as a non-profit, OpenAI has traditionally released its algorithms for companies and developers to use; however, that isn’t the case with GPT-3. For this astonishing language predictor, the company decided to restrict access to its code. A lot of this can be attributed to GPT-3’s immense potential to be not only beneficial but also dangerous to humanity.
OpenAI could already estimate the power of this eye-opening innovation and thus, in an attempt to restrict its dangerous uses, decided to keep control in its own hands by not sharing the backend of the model. Although this opaqueness gives the company firm command over the model, it undoubtedly hampers the model’s effectiveness. It also severely limits the ability of companies and developers to replicate the results.
How The Lack of Transparency Can Impact The Performance Of The Model
While the reduced openness around GPT-3’s algorithm is justified by the company on safety grounds — guarding against dangerous applications — it nonetheless contradicts both the company’s stated vision and the scientific ethic of sharing knowledge. In response, the company has stated that the algorithm is not only too complicated for developers to work on, but that retaining control also gives it a chance to monetise its research.
Usually, for a scientific breakthrough to succeed and prove its accuracy, it needs to be replicated across various tasks. The restricted access to the code behind GPT-3, however, limits further research that could enhance the model’s accuracy and effectiveness. The company shared a detailed explanation of the model in its paper — “Language Models are Few-Shot Learners” — but when it comes to researching with GPT-3, developers are confined to OpenAI’s limitations.
In fact, the company has recently moved away from its non-profit status by partnering with Microsoft, which allowed it to commercialise the API and opened up investment for the company to carry out its research. The company attributed this change to the millions of dollars required to train such models, arguing that investment is critical for “actualising the missions.”
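Under this commercial model, developers never touch the model weights; they interact with GPT-3 only through a hosted, key-gated HTTP endpoint. A minimal sketch of what such a request looks like is below — the endpoint path and field names follow OpenAI’s publicly documented Completions API of the time, while the engine name, parameter values, and helper function are illustrative assumptions, not the company’s reference code:

```python
import json

# Illustrative endpoint for GPT-3 completions; the "davinci" engine name
# is an example of the engines OpenAI exposed through its hosted API.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, api_key, max_tokens=64, temperature=0.7):
    """Assemble the headers and JSON body for a hosted completion request.

    The model itself stays on OpenAI's servers: all a developer controls
    is this request surface, gated by a per-account API key.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # access is revocable per key
    }
    body = json.dumps({
        "prompt": prompt,            # input text to complete
        "max_tokens": max_tokens,    # cap on generated tokens (usage is billed)
        "temperature": temperature,  # sampling randomness
    })
    return headers, body

headers, body = build_completion_request("Translate 'hello' to French:", "sk-...")
# Actually sending it needs a live key, e.g. via urllib.request.urlopen(
#     urllib.request.Request(API_URL, body.encode(), headers))
print(json.loads(body)["prompt"])
```

The point of the sketch is the access model, not the call itself: every parameter, every use case, and every key passes through infrastructure OpenAI controls and can monitor or revoke.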
Many also believe that by limiting access to GPT-3’s code, the company hugely undermines the concept of democratising AI, which is the need of the hour. As a matter of fact, many researchers advocate making AI accessible to users for better development; restricting access to GPT-3’s algorithm, by contrast, locks the knowledge away. This, in turn, puts the power into the hands of a few large companies that have the resources to build such AI models, making it inaccessible for smaller companies and startups that cannot build or afford them.
Furthermore, while OpenAI continues to preach that bigger models bring better results, GPT-3 comes with many shortcomings, such as a lack of semantic understanding, no causal reasoning, poor generalisation beyond the training set, and biased judgements. This lends weight to Sam Altman’s own admission that “the hype is way too much.”
Lastly, with GPT-3 being used for a variety of purposes and tasks, it is also questionable whether OpenAI will be able to keep up with the scale and monitor each use case before granting access to the API. Thus, researchers and experts believe that keeping the algorithm inaccessible is simply another way of monetising the development.
With such pointers in hand, it can be argued that the real problem is not the model’s shortcomings but the inaccessibility of its code, which prevents researchers from addressing those failures. Thus, to enhance its efficacy, the company must recognise the importance of open-sourcing the model so that researchers and developers can replicate and build on its results.