When the San Francisco-based startup OpenAI released GPT-3, the entire research community stood up and took notice, and with good reason. A gigantic neural network in its own right, the program was touted as the next big thing in deep learning: a model that could write code like a human and even author blogs, stories, and website copy, and help create apps.
The novelty of the model lay in its 175 billion parameters, where its predecessor GPT-2, considered the largest model at the time of its release, had just 1.5 billion. GPT-3 is also roughly ten times larger than Microsoft's Turing-NLG, the 17-billion-parameter model that ranks just behind it.
However, since its release in June 2020, the model has drawn its share of brickbats along with the appreciation.
The Development & Testing
GPT-3 stands for Generative Pre-trained Transformer, version 3. Unlike neural networks that churn out numeric scores or yes-or-no answers, GPT models generate long sequences of original text. Although these models are not built on any domain knowledge, they can complete domain-specific tasks such as text translation. As a language model, GPT-3 calculates the probability of a word appearing in a text given the other words around it, also called the conditional probability of words.
OpenAI’s team used three settings to evaluate GPT-3’s performance in the testing stage — few-shot learning, one-shot learning, and zero-shot learning. The model achieved promising results in the zero-shot and one-shot settings, and in the few-shot setting it occasionally surpassed state-of-the-art models. It also showed promising results on tasks such as translation, question answering, word unscrambling, cloze tasks, and arithmetic on three-digit numbers.
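The difference between the three settings is simply how many worked examples appear in the prompt. The sketch below builds illustrative prompts in the style described in the GPT-3 paper; the translation task and example pairs are assumptions for demonstration, and no model is actually called.

```python
def zero_shot(task, query):
    # Zero-shot: only a natural-language task description, no examples.
    return f"{task}\n{query} =>"

def one_shot(task, example, query):
    # One-shot: a single worked example precedes the query.
    return f"{task}\n{example[0]} => {example[1]}\n{query} =>"

def few_shot(task, examples, query):
    # Few-shot: several worked examples (typically 10-100) precede the query.
    demos = "\n".join(f"{src} => {tgt}" for src, tgt in examples)
    return f"{task}\n{demos}\n{query} =>"

prompt = few_shot(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "hello",
)
print(prompt)
```

Crucially, in all three settings the model's weights are frozen: the "learning" happens entirely in the prompt, which is what made GPT-3's few-shot performance notable.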
While listing its capabilities, the team also acknowledged the model’s flaws and drawbacks. These drawbacks, paradoxically, stem from its superior ability to generate text that is almost indistinguishable from text produced by an actual human.
In the paper accompanying GPT-3’s release, the researchers cautioned that “GPT-3 has the potential to advance both the beneficial and harmful applications of language models.”
Going into more detail, the researchers noted that the high-quality text generation, which makes it difficult to distinguish between machine-written and human-written text, could be exploited by malicious actors. They also admitted that while the potential for misuse exists, it is unclear to what extent, since the model can be repurposed in environments and for purposes other than those intended and anticipated. The possible misuses they listed include spam and phishing, fraudulent academic essay writing, abuse of legal processes, and social engineering pretexting.
Further, the research team determined that since the model derives its writing ability from resources scraped from the internet, it carries an inherent risk of bias, including racial and other prejudices.
Significant Instances Of GPT-3 Use
Since its release there have been multiple instances where GPT-3 was used so discreetly that it took a while for people to even realise:
- A bot powered by GPT-3 was found interacting with people in a Reddit thread. Posting under the username “/u/thegentlemetre,” it replied to /r/AskReddit questions within seconds. The bot passed itself off as a human Redditor and posted several comments before it was spotted by another human Redditor, Philip Winston. According to Winston, the text generated by the bot matched the output of Philosopher AI, a GPT-3-powered tool that answers questions on philosophy.
- Liam Porr, a computer science student at the University of California, Berkeley, created a fake blog under a fake name using the model. Among the posts it wrote were “Feeling unproductive? Maybe you should stop overthinking” and “Boldness and creativity trump intelligence”. Porr applied for access by filling out a simple questionnaire about his intended use, then collaborated with another student to run a script that gave the model a headline and an introduction, from which it churned out several completed versions. Only a few visitors to the blog realised it was AI-generated; most remained unaware until it was revealed.
- “I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a ‘feeling brain’. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!” This was the opening paragraph of a Guardian article written entirely by GPT-3, considered groundbreaking in many respects. The instruction given to the model was to write a 500-word article on why humans have nothing to fear from AI.
- Other users, such as ML enthusiast Mario Klingemann and Manuel Araoz, founder and advisor of OpenZeppelin, have also experimented with GPT-3-authored blog posts and articles.
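The workflow described in Porr’s experiment above, seeding the model with a headline and introduction and sampling several completed versions, might look roughly like the sketch below. The engine name, parameters, and prompt-building helper are assumptions for illustration based on OpenAI’s API as it existed at launch, not the actual script that was run.

```python
import os

def build_prompt(headline, intro):
    """Combine a headline and an introduction into a single seed prompt."""
    return f"{headline}\n\n{intro}\n\n"

# Headline taken from the article above; the introduction is invented.
prompt = build_prompt(
    "Feeling unproductive? Maybe you should stop overthinking",
    "Productivity advice often backfires, and here is why.",
)

# The API call only runs if a key is configured; it requires the
# `openai` package and paid API access.
if os.environ.get("OPENAI_API_KEY"):
    import openai

    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 engine at launch (assumption)
        prompt=prompt,
        max_tokens=400,
        n=3,                # sample several completed versions
        temperature=0.7,    # some randomness, so the versions differ
    )
    for choice in response.choices:
        print(choice.text)
```

A human then picks the best completion and publishes it, which is all the curation the fake blog reportedly needed.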
As already mentioned, GPT-3 has by no means been immune to criticism. Beyond the drawbacks OpenAI’s team itself admitted, the model has been called out on other fronts.
One of the major controversies arose when OpenAI decided to license the model exclusively to Microsoft. This means access to its underlying code and mechanisms lies exclusively with Microsoft, allowing the company to leverage OpenAI’s state-of-the-art innovations to promote its own products. The move drew criticism from bigwigs such as Elon Musk, himself one of OpenAI’s founders. Karen Hao of MIT Technology Review also scrutinised the deal, writing that while the non-profit “was supposed to benefit humanity,” it is currently helping only the tech giant.
More recently, Yann LeCun, VP and Chief AI Scientist at Facebook, dismissed the hype around GPT-3, calling out people’s unrealistic expectations of the software. According to him, it “is entertaining, and perhaps mildly helpful as a creative tool”.
Pathbreaking innovation, an overhyped tool toying with people’s momentary fancy, or both at the same time? It is difficult to box GPT-3’s potential into one category. For now, despite all the scrutiny, GPT-3 remains the flavour of the industry, with more organisations buying the technology for staggering amounts, the latest example being OthersideAI, which uses it for email writing. Whether it will stand the test of time or be nudged off the cliff by a superior tool in the near future remains to be seen.