Your smartphone OS contains more than 10 million lines of code. Printed out, a million lines of code fill roughly 18,000 pages, the equivalent of Tolstoy's War and Peace 14 times over!
Though the number of lines of code is not a direct measure of a developer's quality, it does indicate the sheer quantity of software generated over the years. There is almost always a shorter, simpler version of a piece of code, as well as a longer, more exhaustive one.
Coding is an art form in its own right, with one exception: it doesn't defy logic. AI has found it tough to compete with humans on many creative frontiers, but efforts are now being made to teach AI to code on its own, since code can be defined through constraints, a setting in which AI thrives.
For example, when asked to write a program to find multiples of 2 in a given set of integers, different programmers approach the problem in different ways. Some might spell out each step explicitly, while others, drawing on domain knowledge, might use built-in functions. Coding therefore involves both understanding the words of the question and representing the problem symbolically before going ahead with it.
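To make the contrast concrete, here is a minimal sketch (the function names are illustrative, not from the paper) of two ways a programmer might solve the multiples-of-2 task, one step by step and one using the language's built-in filtering idiom:

```python
def multiples_of_two_explicit(numbers):
    """Step-by-step approach: loop, test each element, collect matches."""
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n)
    return result

def multiples_of_two_idiomatic(numbers):
    """Domain-aware approach: express the same intent as a filter."""
    return [n for n in numbers if n % 2 == 0]

print(multiples_of_two_explicit([1, 2, 3, 4, 5, 6]))   # [2, 4, 6]
print(multiples_of_two_idiomatic([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

Both versions compute the same answer; they differ only in how the programmer chose to represent the problem.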
Depending on the familiarity of the domain and the complexity of the problem, humans use a flexible combination of recognition of learned patterns and explicit reasoning to solve programming problems.
In their paper titled Learning to Infer Program Sketches, a team from MIT's computer science department proposes SketchAdapt, a system that writes code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instructions.
SketchAdapt is a model trained on tens of thousands of program examples. It learns how to compose short, high-level programs, while letting a second set of algorithms find the right sub-programs to fill in the details.
The authors claim that their training algorithm will enable a program synthesis system to learn, without direct supervision, when to rely on pattern recognition and when to perform a symbolic search. The key idea in this work is to allow a system to learn a suitable intermediate sketch representation between a learned neural proposer and a symbolic search mechanism.
The system consists of two main components:
- a sketch generator, and
- a program synthesizer
The sketch generator is parametrized by a recurrent neural network (RNN) and is trained to assign high probability to sketches that are likely to quickly yield programs satisfying the spec when passed to the synthesizer. The program synthesizer, in turn, takes a sketch as a starting point and performs an explicit symbolic search to "fill in the holes" and find a program which satisfies the specification.
The sketch generator, conditioned on the program spec, outputs a distribution over sketches. For each candidate sketch, the synthesizer uses symbolic enumeration techniques to search for full candidate programs formed by filling in the holes in the sketch.
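The "fill in the holes" step can be sketched in miniature. The toy code below is an assumption-laden illustration, not the paper's implementation: a sketch is a sequence of operations with `HOLE` placeholders, the spec is a list of input-output examples, and the synthesizer enumerates primitive operations for each hole until the spec is satisfied:

```python
import itertools

HOLE = "HOLE"

# A tiny, hypothetical library of list-processing primitives.
PRIMITIVES = {
    "double":    lambda xs: [x * 2 for x in xs],
    "increment": lambda xs: [x + 1 for x in xs],
    "evens":     lambda xs: [x for x in xs if x % 2 == 0],
}

def run(program, xs):
    """Apply each named primitive in sequence to the input list."""
    for op in program:
        xs = PRIMITIVES[op](xs)
    return xs

def synthesize(sketch, spec):
    """Symbolically enumerate hole fillings until one satisfies the spec."""
    holes = [i for i, op in enumerate(sketch) if op == HOLE]
    for filling in itertools.product(PRIMITIVES, repeat=len(holes)):
        program = list(sketch)
        for i, op in zip(holes, filling):
            program[i] = op
        if all(run(program, inp) == out for inp, out in spec):
            return program
    return None

# Spec: keep the even numbers, then double them.
spec = [([1, 2, 3, 4], [4, 8])]
print(synthesize(["evens", HOLE], spec))  # ['evens', 'double']
```

In SketchAdapt the sketch itself comes from the trained generator rather than being hand-written, and the search is far more sophisticated, but the division of labor is the same: the neural component proposes the high-level skeleton, and symbolic enumeration completes it.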
Schematic overview of the model
With the experiments showing encouraging results, the authors hypothesize that learned integration of different forms of computation is necessary not only for writing code, but also for other complex AI tasks, such as high-level planning, rapid language learning, and sophisticated question answering.
The authors in their work have tried to accomplish the following:
- Develop a novel neuro-symbolic program synthesis system, which writes programs from input-output examples and natural language specification by learning a suitable intermediate sketch representation between a neural network sketch generator and a symbolic synthesizer.
- Introduce a novel training objective, used to train the system to find suitable sketch representations without explicit supervision.
- Validate the system by demonstrating results in two programming-by-example domains, list processing problems and string transformation problems, and achieve state-of-the-art performance on the AlgoLisp English-to-code test dataset.
The researchers say that their focus is on giving programming tools to people who want them so that they can tell the computer what they want to do, and the computer can write the program.
Know more about this work here