“GPT-3 has the potential to advance both the beneficial and harmful applications of language models.” — OpenAI Researchers
The launch of OpenAI’s advanced AI language model, the third-generation Generative Pre-trained Transformer (GPT-3), has been one of the main highlights of the industry. Not only is it the largest language model built so far, with 175 billion parameters, but it has also showcased the impressive capability to outperform state-of-the-art models at tasks such as text prediction and translation.
Having said that, such advancements in language models could profoundly shape a future in which much of the written material we encounter is generated by computers. What’s worse, the high-quality text the model generates is largely indistinguishable from human writing. The model has thus become controversial, with the authors of the GPT-3 paper themselves warning users about its malicious use in spam, phishing and fraudulent behaviours such as deepfakes.
In addition to that, the model has also showcased algorithmic biases: text it generated about Islam contained more violence-related words than text generated about other religions.
In fact, in a recent tweet, Sam Altman, co-founder of OpenAI, acknowledged that the hype around GPT-3 may be excessive and that the model still has serious limitations that need to be addressed to reduce its potential misuse.
GPT-3 Can Pose Threats To Disinformation
Apart from consuming a massive amount of energy and impacting the environment, GPT-3 comes with other challenges. Because it is trained on a vast scrape of the internet and generates convincingly human-like text, it poses a serious disinformation threat: bad actors could use it to create an endless stream of fake news, spread misinformation amid COVID-19, or carry out phishing scams.
Fake news is indeed a big concern for the industry, and amid the current pandemic, a human-like text-generating AI brings potential risks of spreading wrong information and creating panic. In fact, OpenAI’s release of the second-generation GPT last year, trained on 8 million internet pages, had already raised fears of potential misuse. The concerns were such that the company initially declined to release the full model to the public. The newer release, with over 100 times as many parameters as the last model, has even greater prospects of being used for nefarious purposes.
Case in point: in a recent tweet, ML enthusiast Mario Klingemann shared a long-form article, “The Importance Of Being On Twitter,” written by GPT-3. Its coherent paragraphs make the synthetic text almost indistinguishable from human writing.
In other news, Manuel Araoz, the founder and advisor of OpenZeppelin, has also written an entire article from scratch using GPT-3 as an experiment, using a basic prompt as a guideline. According to Araoz, the model was fed a short summary along with a title and some tags to generate an almost perfect text.
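The kind of setup Araoz describes can be illustrated with a short sketch: the function below assembles a completion prompt from a title, tags and a summary, which would then be sent to the model to continue. The exact prompt layout and the sample values are illustrative assumptions, not his actual inputs.

```python
def build_article_prompt(title, tags, summary):
    """Assemble a GPT-3-style completion prompt from article metadata.

    The layout (title, tags, summary, then an open-ended body) is an
    illustrative guess at the prompt Araoz describes, not his real input.
    """
    lines = [
        f"Title: {title}",
        f"Tags: {', '.join(tags)}",
        f"Summary: {summary}",
        "",
        "Full article:",  # the model continues from here
    ]
    return "\n".join(lines)

# Hypothetical metadata, in the spirit of Araoz's experiment
prompt = build_article_prompt(
    "My early experiments with GPT-3",
    ["gpt-3", "openai", "nlp"],
    "A short account of generating an article with a new language model.",
)
print(prompt)
```

The resulting string is what a user would submit to the model’s completion endpoint; the quality of the continuation depends heavily on how this scaffold is phrased.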
Such examples showcase GPT-3’s astounding ability to perform assigned tasks, making it a state-of-the-art language model. However, capabilities of this magnitude also make it an attractive tool for bad actors to spread false information. Readers will therefore need to be increasingly vigilant when reading news articles in future.
GPT-3 Can Affect Jobs
With its dramatic improvements over GPT-2, the third-generation model achieves strong performance on many NLP datasets, delivering accurate results in tasks such as translation, paragraph generation, answering specific questions and unscrambling words, and producing output that is extremely difficult to distinguish from material written by humans. This, in turn, could render many jobs obsolete, including those of journalists, writers and scriptwriters, to name a few.
Furthermore, the beta release of OpenAI’s API has challenged the role of developers. For example, Sharif Shameem, founder of debuild.co, a startup that helps developers build apps, has described how the company has leveraged GPT-3 to write code.
According to Shameem, the GPT-3 model required only two written samples to perform the rest of the task accurately, despite never having been trained to produce code. Such democratisation of app development could, in turn, worry developers whose specialised skills it appears to replicate.
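The few-shot setup Shameem describes can be sketched as follows: two example pairs of a natural-language description and the matching code, followed by a new description left open for the model to complete. The example pairs and the prompt format here are assumptions for illustration, not his actual samples.

```python
def build_few_shot_prompt(examples, new_description):
    """Build a few-shot code-generation prompt.

    Each example pairs a plain-English description with its code; the
    final description is left open so the model completes the 'Code:'
    line. The format is an illustrative assumption.
    """
    parts = []
    for description, code in examples:
        parts.append(f"Description: {description}\nCode: {code}")
    # The unanswered final entry is what the model is asked to complete
    parts.append(f"Description: {new_description}\nCode:")
    return "\n\n".join(parts)

# Two hypothetical samples, in the spirit of Shameem's demo
examples = [
    ("a button that says hello", "<button>hello</button>"),
    ("a red heading that says welcome",
     '<h1 style="color: red">welcome</h1>'),
]
prompt = build_few_shot_prompt(examples, "a link to example.com")
print(prompt)
```

Feeding a prompt like this to the model is what makes “two written samples” enough: the pattern in the examples, not any code-specific training, steers the completion.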
Having said that, many also believe that instead of replacing humans, GPT-3 could become the perfect aid for people in these professions. Shameem, for instance, explained to the media that in future doctors could “just ask GPT-3 the cause of a certain set of patient’s symptoms” and get a reasonable response. However, he adds that the technology is still at too early a stage to understand its full implications across industries.
Thus, as the model matures with its natural text generation, one way or the other, it will impact human jobs.
With the threat of fake news and its potential impact on human jobs, GPT-3, for all its power, is still believed to be not ready for mass use. However, its impressive performance on many tasks has often overshadowed its limitations, and to achieve the best results, the company should acknowledge the model’s capacity for deception and make the necessary improvements. Regardless, and for better or worse, GPT-3 is ushering in the future we have all been waiting for.