Will The Much-Hyped GPT-3 Impact The Coders?

A recent tweet by Sharif Shameem of debuild, showing off OpenAI's GPT-3 generating code, has raised an interesting question in the programming scene: will this new language model kill coding, or will it create more productive programmers?

Trained on billions of words from the internet, GPT-3 can generate code in CSS, JSX, Python and more. What's more, GPT-3 needs no task-specific training, working in a zero-shot or few-shot setting. It is also the largest language model built so far, with strong performance on many NLP datasets, allowing it to perform a wide range of tasks.

The founder of the app development startup debuild.co only had to enter a few lines of text in plain English describing the essence of the product, and the layout generator produced JSX code for him. Explaining the process, he said, “all I had to do is to re-write my two initial samples, and the GPT generated outputs in plain HTML/CSS.”
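Shameem's description matches the few-shot prompting pattern GPT-3 is known for: a couple of worked description-to-code samples, followed by a new description that the model completes. A minimal sketch of how such a prompt might be assembled is below; the sample descriptions and JSX snippets are invented stand-ins, not debuild's actual prompts.

```python
# Few-shot prompt assembly: two hypothetical description -> JSX
# examples, then a new description for the model to complete.
EXAMPLES = [
    ("a button that says subscribe",
     '<button className="btn">Subscribe</button>'),
    ("a red heading that says welcome",
     '<h1 style={{color: "red"}}>Welcome</h1>'),
]

def build_prompt(description):
    """Concatenate the worked examples and the new request into one prompt."""
    parts = []
    for desc, jsx in EXAMPLES:
        parts.append(f"description: {desc}\ncode: {jsx}")
    # The final entry ends at "code:", inviting the model to continue.
    parts.append(f"description: {description}\ncode:")
    return "\n\n".join(parts)

print(build_prompt("a todo list with an add button"))
```

The assembled string would then be sent to GPT-3's completion endpoint, which continues the pattern by emitting JSX after the final "code:". The model itself is never retrained; all the "programming" lives in the prompt.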



The possibility of machine-generated code in any language could revolutionise programming, and understandably worries some developers and coders about being put out of business.




GPT-3 — A Boon Or A Bane For Coders?

The beta release of OpenAI's API, which runs the GPT-3 model, has proved a boon for code generation: demos have produced buttons, icons, data tables and to-do lists, and even recreated Google's homepage, all with little or no task-specific training. This commercial revolution in app development could call into question the relevance of thousands of coders in the industry.

By diluting the importance of hand-written code, such rapid AI development with GPT-3 could allow amateurs and non-technical users to build new products. This could either cause unemployment among expert developers who have spent years practising programming, or bridge the talent shortage the industry currently faces. Some also believe that, like other no-code platforms that let people design a website without writing any code, GPT-3 could make low-skilled programmers obsolete.

In another example, Jordan Singer, a product designer at Square, shared a plugin he built for the prototyping platform Figma. Such mind-boggling demonstrations could convince companies to adopt the model rather than continually invest in programmers.

However, according to Shameem, rather than diluting the field and making programmers obsolete, GPT-3 is more likely to improve the role of coders everywhere. Not only will it create more productive coders, it will also encourage more people to become coders. Much of this comes down to the model, which is going to “decrease the skill level of a required programmer,” Shameem told the media.

In fact, according to the optimistic Shameem, every new tool comes with possible dangers; the task is to work on the pros so that they outweigh the cons.

Readdressing The Hype Around GPT-3

Despite these advancements, GPT-3 comes with caveats that keep it from being ready for mass usage. In fact, in a recent tweet, OpenAI co-founder and CEO Sam Altman called out the hype around GPT-3. Although impressed with the results, Altman stated that the model has some serious weaknesses that need to be worked on.

He highlighted how GPT-3 makes silly mistakes when producing results, and how its extraordinary performances overshadow the flaws that exist in the model.

In many examples on Twitter, people have shown that GPT-3 is prone to generating hateful, sexist and racist language about specific sections of the community. Jerome Pesenti, VP of AI at Facebook, demonstrated in a tweet how the model can write biased tweets when prompted with simple words like “Jews”, “Black”, “women” or “Holocaust”.

Similar problems arose with OpenAI's previous model, GPT-2, though fine-tuning helped limit such output. “We need to create systems that have protections against these biases,” and “until then it shouldn’t be productise,” wrote Pesenti in the tweet thread.

Additionally, in text-generation tasks GPT-3 reveals a lack of common sense and coherence, making its output more imitation than original creation. The model cannot grasp the meaning of its input, which hampers the narrative tone of the text and pushes articles off-track. A similar problem can arise in code generation, where the model may lose track of the coherence of a script.

Furthermore, experts have highlighted other concerns, such as fake news and deepfakes, which could rise given GPT-3's fluent text generation.

Wrapping Up

GPT-3 is indeed a huge breakthrough; however, because it is trained on human-written text, the model inherits the flaws and biases that humans possess. GPT-3 is far from finished, and needs significant improvement before it can deliver more innovation to the developer community.

Copyright Analytics India Magazine Pvt Ltd

Scroll To Top