
Machines Are Getting Better at Coding, Should You Be Worried?

The creator of Auto-GPT, Toran Bruce Richards, believes it has the potential to save humanity from mass job loss caused by automation from closed-source AI.


We have all witnessed the trials and tribulations of human coders struggling to get the job done without a tussle. Now picture a world where machines, thanks to the advent of foundational models (GPTx), are self-sufficient in mastering the art of coding, eliminating bugs, and minimising downtime.

Guess what? It’s already happening. Enter Auto-GPT, an open-source, self-prompting experiment built on GPT-4 that can plan tasks, develop code, and manage its own outputs, and more. So, should you be worried?

Advocating for the model, Andrej Karpathy, the former director of AI at Tesla who recently returned to OpenAI, believes that the “next frontier of prompt engineering are AutoGPTs”. Karpathy said so while tweeting about the latest version of Auto-GPT, which can write its own code using GPT-4 and execute Python scripts. (It also has a voice!)

Developed by Significant Gravitas, a game development company, the autonomous GPT-4 experiment can squash its own bugs, develop code and self-improve. The open-source project has managed to woo LLM enthusiasts, and it is being called a direct, disruptive competitor to OpenAI’s flagship offering, ChatGPT.

The model’s developer, Toran Bruce Richards, believes that Auto-GPT has the potential to save humanity from mass job loss caused by automation built on closed-source AI: if everyone has access to their own team of autonomous agents, no one is left behind. Though the project currently depends on GPT-3 and GPT-4, the developers are looking into implementing GPT4All. Ultimately, one won’t need to read the source code of an LLM to benefit from this.

Karpathy Strikes

Karpathy shared a fascinating insight about the model. He said that, unlike humans, GPTs are completely unaware of their own strengths and limitations, including their finite context window and limited mental maths abilities, which can result in occasionally unpredictable outcomes. However, by stringing together GPT calls in loops, agents can be created that perceive, think, and act towards goals defined in English prompts.
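In code, the loop Karpathy describes looks something like the sketch below. This is a minimal illustration, not Auto-GPT’s actual implementation: call_llm is a stand-in for whichever chat-completion API you use, and execute is a hypothetical tool dispatcher.

```python
import json

def call_llm(messages):
    """Placeholder for a chat-completion call (e.g. GPT-4). Wire this up to
    your LLM provider of choice; it should return the model's reply as text."""
    raise NotImplementedError

def execute(action, argument):
    """Hypothetical tool dispatcher: search the web, read a file, run code, etc."""
    return f"(pretend result of {action}({argument}))"

def run_agent(goal, max_steps=10):
    # The goal is given in plain English, as Karpathy describes.
    messages = [
        {"role": "system",
         "content": ("You are an autonomous agent. At each step, reply with JSON: "
                     '{"thought": ..., "action": ..., "argument": ...}. '
                     'Use the action "finish" when the goal is achieved.')},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)                                 # think
        step = json.loads(reply)
        if step["action"] == "finish":
            return step["argument"]                                # goal achieved
        observation = execute(step["action"], step["argument"])    # act
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": f"Observation: {observation}"})  # perceive
    return "Step limit reached without finishing the goal."
```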

For feedback and learning, Karpathy suggested a “reflect” phase, where outcomes are evaluated, rollouts are saved to memory, and loaded back into prompts for few-shot learning. This “meta-learning” few-shot path allows the agent to learn from whatever can be crammed into the context window. The gradient-based learning path, however, is less straightforward: the lack of off-the-shelf APIs for LoRA finetunes, supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) style training prevents fine-tuning on large amounts of experience.
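The “reflect” phase can be pictured as one extra LLM call that critiques a finished rollout, with the result saved and replayed as few-shot context on later runs. The sketch below is again purely illustrative and reuses the hypothetical call_llm helper from the previous snippet.

```python
memory = []  # saved rollouts, replayed later as few-shot examples

def reflect(goal, rollout):
    # Evaluate the outcome of a completed run and store the lesson learned.
    critique = call_llm([
        {"role": "system",
         "content": ("Evaluate the rollout below. State what worked, what failed, "
                     "and one lesson to apply next time.")},
        {"role": "user", "content": f"Goal: {goal}\n\nRollout:\n{rollout}"},
    ])
    memory.append({"goal": goal, "rollout": rollout, "lesson": critique})

def build_prompt(goal, k=3):
    # "Meta-learning" via the context window: cram the k most recent lessons
    # into the prompt instead of updating any model weights.
    lessons = "\n".join(m["lesson"] for m in memory[-k:])
    return [
        {"role": "system", "content": f"Lessons from past attempts:\n{lessons}"},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
```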

Karpathy believes that much like employees coalescing into organisations to specialise and parallelise work towards shared goals, AutoGPTs might evolve to become AutoOrgs with AutoCEO, AutoCFO, AutoICs, and more.

Embracing AutoGPTs

Within a week of its release, the Auto-GPT repository has already gained over 8,000 stars. The release also sparked a flurry of discussion among developer communities. While some have lauded its capabilities, others have pointed out that it still requires human intervention for debugging; one user even drew parallels between the model’s coding process and the traditional practice of rubber duck debugging.

Reddit users have offered varied perspectives on the matter. Some have expressed hope that the base models will not be made available to the general public, citing concerns about potential misuse. Conversely, others have argued that withholding them would make the AI even more dangerous: if all development stays behind closed doors, the AI could be commandeered by a select few to monitor and regulate every action of the populace.

A possible solution suggested by a commentator is to make the model available to the public, accompanied by the necessary tools and resources to ensure responsible experimentation. This would allow for proactive measures to be taken by ethical researchers to counter any rogue AI scenarios that may arise. In essence, the commentator expressed the sentiment that “the only way to thwart a malicious AI is through a benevolent AI”. 

If you are unable to set up Auto-GPT yourself but want to try it, this is the thread for you: post your prompts below, and Toran Richards will try out some of the best ones and record the output for you.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.