AI-based coding assistants can, surprisingly, aid genetic programming

Program evolution using large language model-based perturbation bridges the gap between evolutionary algorithms and those that operate at the level of human thoughts.

In 2021, OpenAI, in collaboration with GitHub and Microsoft, released the coding assistant GitHub Copilot. An AI pair programmer, Copilot is trained on a large corpus of open-source code hosted on GitHub; it takes context from the code being worked on and suggests successive lines of code and entire functions.

Fast forward a year, and OpenAI has introduced the results of recent research showing that large language models trained to generate code can also improve the effectiveness of mutation operators in genetic programming. The researchers argue that because these models are trained on data that includes sequential changes and modifications, they can approximate the changes humans would be likely to make.

Evolution through large models

Evolutionary computation, a subfield of artificial intelligence, is a family of algorithms inspired by biological evolution and used for global optimisation. An initial set of candidate solutions is generated and then iteratively updated: less desirable solutions are removed, and small changes are introduced into the rest.
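
To make this concrete, here is a minimal sketch of such a loop in Python. The `mutate` and `fitness` callables stand in for domain-specific operators; all names here are illustrative, not taken from any particular library.

```python
import random

def evolve(population, mutate, fitness, generations=100, survivors=10):
    """Minimal evolutionary loop: keep the fittest candidates,
    refill the population with mutated copies, and repeat."""
    size = len(population)
    for _ in range(generations):
        # Remove less desirable solutions: keep only the top performers.
        population = sorted(population, key=fitness, reverse=True)[:survivors]
        # Introduce small changes: refill with mutated survivors.
        while len(population) < size:
            population.append(mutate(random.choice(population[:survivors])))
    return max(population, key=fitness)
```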

The rise of deep learning has raised questions about its implications for evolutionary computation. Are they competing paradigms, or are they complementary? In the Evolution through Large Models (ELM) approach, a large language model trained on code suggests intelligent mutations, providing a more effective mutation operator that overcomes challenges that have long faced program evolution, the authors of the study note.

The samples generated by these large language models can gradually build up a new training set in a novel domain, which can then be used to fine-tune the model to perform well there, yielding a new data-generation procedure. According to the authors, this approach opens new opportunities in the pursuit of open-endedness, which is about searching outside the distribution of previous experience. The field of open-endedness seeks to create algorithmic systems that produce never-ending streams of novel solutions. While research in open-endedness has largely concentrated on open-ended search algorithms, and that focus has led to algorithmic progress, there is growing awareness of the importance of the environment in which these algorithms are applied. Conversely, the benefits of evolution through large models flow back to deep learning.

This approach also increases the generative capabilities of the language model solely through its own generated data. Large language models bootstrap from human knowledge by learning from large datasets to achieve general coding competency. Both properties are important for genetic programming.

Simply by prompting a large language model to generate changes, these tools can serve as highly sophisticated mutation operators embedded within an evolutionary algorithm. In effect, program evolution using large language model-based perturbation bridges the gap between evolutionary algorithms and those that operate at the level of human thoughts. Large language models can be trained to approximate how humans intentionally change programs while keeping them functional, as the sketch below illustrates.
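
In the sketch, the model sits behind a generic `generate` callable, a hypothetical stand-in for whatever completion interface the model exposes (the paper itself works with models trained on code diffs). The wrapper turns the model into a mutation operator that at least guarantees syntactically valid offspring.

```python
def llm_mutate(program_source, generate):
    """Use a code-generating language model as a mutation operator.
    `generate` is a hypothetical completion function: prompt in, code out."""
    prompt = (
        "# Original program:\n"
        f"{program_source}\n"
        "# A slightly modified version of the program above:\n"
    )
    candidate = generate(prompt)
    # Reject mutations that are not even valid Python, so the
    # evolutionary loop only ever sees programs that can run.
    try:
        compile(candidate, "<mutation>", "exec")
    except SyntaxError:
        return program_source  # fall back to the unmutated parent
    return candidate
```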

About the research

Large language models can be further fine-tuned for self-improvement, resulting in a novel technique for iteratively enhancing the performance of evolution through large models. To this end, the researchers from OpenAI generated an entire dataset in a novel domain from a single mediocre starting example designed by humans. The domain is Sodarace, where two-dimensional ambulating robots of arbitrary morphology are built for diverse terrains. The domain is cheap to simulate, allows fast iteration, and facilitates a quick assessment of whether a design is successful, both quantitatively and qualitatively.
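
A heavily simplified sketch of that bootstrapping loop might look as follows. The real pipeline uses a quality-diversity algorithm (MAP-Elites) rather than a flat archive, and `simulate` is only a stand-in for the Sodarace evaluator; both names are illustrative, not the paper's actual API.

```python
import random

def build_dataset_from_seed(seed_program, llm_mutate, simulate, steps=1000):
    """Grow a training set in a new domain from one human-written seed.
    `simulate` returns a fitness score for a working program,
    or None if the candidate fails to run."""
    archive = [seed_program]   # accepted, functional programs
    dataset = []               # examples later used to fine-tune the LLM
    for _ in range(steps):
        parent = random.choice(archive)
        child = llm_mutate(parent)
        if simulate(child) is not None:   # keep only programs that work
            archive.append(child)
            dataset.append(child)
    return dataset
```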

The Sodaracers are encoded as raw Python programs that output an enumeration of the ambulating robot's components. This makes it possible to demonstrate ELM as a form of genetic programming that can operate directly on a modern programming language, with no special provisions needed beyond an existing code-generating large language model.
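
To give a flavour of the encoding, a seed program might look like the sketch below. The actual ELM seeds construct walkers through a helper interface of joints and springs; the dictionary format here is a simplified assumption, not the paper's exact representation.

```python
def make_walker():
    """A Sodaracer genome as a plain Python program: the program's
    output is an enumeration of the robot's physical components."""
    # Mass points of the body, as 2-D coordinates.
    joints = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]
    # Springs ("muscles") connecting pairs of joints by index;
    # the simulator oscillates them to make the robot ambulate.
    muscles = [(0, 1), (1, 2), (2, 0)]
    return {"joints": joints, "muscles": muscles}
```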

This approach also demonstrates the ability to generate diverse solutions in a domain, or a part of the search space, where little training data is available for bootstrapping an open-ended process. As per the researchers, this capability has far-reaching implications.
Read the full paper here.

Shraddha Goled