
How Guardian’s Recent Article Is Yet Another GPT-3 Hype


After the news of a college student creating a fake blog with it, GPT-3 has once again made it to the headlines with The Guardian’s recent op-ed piece, “A robot wrote this entire article. Are you scared yet, human?” The publication claims that the entire article was written by OpenAI’s language generation model, GPT-3, and was aimed at convincing humans that “robots come in peace.” However, on reading the editor’s note below the article, one can see how overstated those claims are.

According to The Guardian, the GPT-3 model was given specific instructions on word count and language choice, and was fed a prompt introduction: “I am not a human. I am Artificial Intelligence…. I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.” However, neither were the model’s assurances that it would not replace humans convincing, nor was the article’s writing style sufficiently coherent for human readers.

Also Read: Will The Much-Hyped GPT-3 Impact The Coders?

GPT-3: Another AI Hype 

Although the introductory paragraph of the article was prompted by a human at The Guardian, the publication claims that the rest of the article was created entirely by GPT-3.

With sentences like “I am not a human, I am a robot, who is using 0.12% of my cognitive thinking,” and claims that it does “not have the slightest interest in harming” humans in any way, the model went on to make not-so-convincing statements about why it would not eradicate humans. Its arguments were limited to its own lack of interest, calling the eradication of humanity a “useless endeavour,” without actually offering logical reasons why artificial intelligence is not dangerous for humans.

Many experts have highlighted the misleading impact of this article, in which an already hyped-up model like GPT-3 writes about another exaggerated technology, robotics.

It also made statements such as that humans should keep hating and fighting each other to satisfy the model’s curiosity, and that humans have nothing to worry about when it comes to fighting GPT-3. Such statements show the model merely replicating the structure of the article as prompted, without any logical reasoning.

Many have also compared it to the “infinite monkey theorem,” which states that a monkey hitting keys on a typewriter at random for an infinite amount of time will almost surely end up typing any given text, such as the works of Shakespeare.

Further, on the question of coherence: the model produced eight different outputs, which The Guardian then hand-picked from, edited, and collated into a single article, rather than publishing any one of them in its entirety. This highlights how much human intervention was needed to make the piece readable.

In fact, the editor’s note states that editing the GPT-3 article was no different from editing a human opinion piece: “We cut lines and paragraphs, and rearranged the order of them in some places.” The Guardian’s justification was that it picked the best parts of each output to capture the different writing styles of the AI. But in reality, it is hard to judge the model’s grasp of language and its clarity unless the unedited pieces are released.
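For context, the workflow The Guardian describes, one fixed prompt, several independent GPT-3 completions, and a human editor curating the results, roughly maps onto OpenAI’s Completion API. The sketch below is illustrative only: the engine name, sampling parameters and the final selection step are assumptions, not the settings The Guardian actually used.

```python
# Rough sketch of the described workflow: one fixed prompt, eight
# independent GPT-3 outputs, and a human editor who cuts and rearranges
# them into a single article. Assumes the legacy `openai` Python SDK
# (pre-1.0) and a valid API key; engine and parameters are assumptions.
import openai

openai.api_key = "sk-..."  # your API key

PROMPT = (
    "I am not a human. I am Artificial Intelligence. "
    "I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me.\n\n"
)

# Generate eight independent completions from the same prompt.
response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine (assumed)
    prompt=PROMPT,
    max_tokens=500,     # stands in for the word-count instruction
    temperature=0.7,    # some variety between outputs
    n=8,                # eight outputs, as in the editor's note
)

drafts = [choice.text.strip() for choice in response.choices]

# The "editing" step is entirely human: an editor reads the drafts,
# cuts lines, and rearranges paragraphs into one publishable piece.
for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---\n{draft}\n")
```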

Many experts have also called the headline clickbait and outright misleading, yet another instance of overhyping the technology. Jarno Duursma, a digital technology trendwatcher, argued that the claim that a robot wrote the story from scratch is an embellishment, one that will only mislead readers about artificial intelligence, GPT-3 and robotics as a whole.

It has also been pointed out that the model’s mention of being grateful for feedback contradicts its own statement that it does not have a “feeling brain.” Such a conflict highlights how the model is merely stitching together thoughts picked up from the internet to form the article, without any sensible argument.

The hype around GPT-3 has already been acknowledged by OpenAI CEO Sam Altman, who stated that the hype around the model is far too much and that it is only at the initial stages of revolutionising the world. He has also mentioned that it has weaknesses and sometimes makes silly mistakes. It therefore becomes critical to be a little sceptical of its inflated hype.

Also Read: GPT-3 Is Great. But Not Without Shortcomings

Wrapping Up

With all that being said, The Guardian did mention that, if nothing else, it overall took less time to edit the GPT-3 article than many human opinion pieces. This underlines the fact that GPT-3 is at least somewhat capable of creating a readable write-up, one that might resemble, or occasionally better, human-written articles.

Although many experts and tech leaders do not consider GPT-3 a revolutionary technology yet, such an application does raise an important question: whether humans should actually fear artificial intelligence gaining the power to substitute human capabilities.



Sejuti Das

Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
