
Microsoft Introduces First Bimodal Pre-Trained Model for Natural Language Generation


Illustration by Microsoft Ignite 2017, Orlando, FL

Over the past few years, large pre-trained models such as BERT, ELMo and XLNet have brought significant improvements to almost every natural language processing (NLP) task. Microsoft has been doing extensive research around NLP and natural language understanding (NLU) for several years now.

The Natural Language Processing group at Microsoft focuses on developing efficient algorithms to process text and on designing and building software that can analyse, understand and generate the languages that humans use naturally. Recently, researchers at the tech giant developed CodeBERT, a bimodal pre-trained model for natural language (NL) and programming languages (PL) such as Python, Java and JavaScript.

About the Model

CodeBERT captures the semantic connection between natural language and programming language and produces general-purpose representations that can broadly support NL-PL understanding tasks such as natural language code search and generation tasks such as code documentation generation. 
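
For a concrete sense of how such representations might be used, here is a minimal sketch of natural language code search with the released checkpoint. The model id microsoft/codebert-base and the Hugging Face transformers library are assumptions not mentioned in the article, and the paper's code search experiments fine-tune CodeBERT on concatenated NL-code pairs; this sketch simply ranks snippets by cosine similarity of independently encoded vectors.

```python
# Illustrative sketch only: rank candidate code snippets for a natural
# language query by cosine similarity of CodeBERT embeddings.
# Assumes `pip install transformers torch` and the publicly released
# "microsoft/codebert-base" checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Encode a string and return its first-token (sentence-level) vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden[:, 0, :].squeeze(0)                # <s> token as pooled representation

query = "read a file line by line"
candidates = [
    "def read_lines(path):\n    with open(path) as f:\n        return f.readlines()",
    "def add(a, b):\n    return a + b",
]

# Higher cosine similarity means a better match for the query.
scores = [torch.cosine_similarity(embed(query), embed(c), dim=0).item() for c in candidates]
print(sorted(zip(scores, candidates), reverse=True)[0][1])  # best-matching snippet
```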

The model has been evaluated on two NL-PL applications, natural language code search and code documentation generation, by fine-tuning the model parameters. It achieved state-of-the-art performance on both tasks.

How It Works

The researchers developed the model with a multi-layer, transformer-based neural architecture, which is adopted in the majority of large pre-trained models. In order to make use of both bimodal instances of NL-PL pairs and the large amount of available unimodal code, CodeBERT is trained with a hybrid objective function that combines standard masked language modelling (MLM) and replaced token detection (RTD).

The MLM objective is to predict the original tokens that have been masked out, while the RTD objective, originally proposed for efficiently pre-training natural language models, trains the model to detect which tokens have been replaced with plausible alternatives. Here, bimodal data refers to parallel natural language-code pairs, while unimodal data refers to code without paired natural language text and to natural language text without paired code.
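
As a rough sketch of how these two objectives can be combined, the snippet below (a simplification in PyTorch, not the authors' implementation) sums a masked language modelling cross-entropy with a per-token binary replaced-token-detection loss. In the paper, the MLM loss is computed on bimodal NL-code pairs, while RTD additionally draws on unimodal code through generators that propose the replacement tokens; the sketch glosses over those details.

```python
# Illustrative hybrid pre-training loss (simplified, not the authors' code).
import torch
import torch.nn.functional as F

def hybrid_objective(mlm_logits, mlm_labels, rtd_logits, rtd_labels):
    """Combine the MLM and RTD losses described above.

    mlm_logits: (batch, seq_len, vocab) token scores from the language-model head
    mlm_labels: (batch, seq_len) original token ids, -100 at unmasked positions
    rtd_logits: (batch, seq_len) per-token score for "replaced vs. original"
    rtd_labels: (batch, seq_len) 1 if the token was replaced by a generator, else 0
    """
    # Masked language modelling: cross-entropy computed only at masked positions.
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )
    # Replaced token detection: binary classification at every position.
    rtd_loss = F.binary_cross_entropy_with_logits(
        rtd_logits.view(-1), rtd_labels.float().view(-1)
    )
    # Training minimises the sum of both objectives.
    return mlm_loss + rtd_loss
```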

The researchers built the model on the same architecture as RoBERTa-base, developed by Facebook. They trained CodeBERT on a large dataset drawn from GitHub code repositories in six programming languages (Python, Java, JavaScript, PHP, Ruby and Go), where each bimodal data point is an individual function with paired documentation and each unimodal example is a function without paired documentation. The dataset includes 2.1M bimodal data points and 6.4M unimodal functions across the six languages.
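
To illustrate the difference between the two kinds of training examples, the sketch below (a hypothetical example, not the authors' preprocessing pipeline) extracts bimodal and unimodal data points from a small piece of Python source.

```python
# Hypothetical illustration of bimodal vs. unimodal training examples:
# a bimodal data point is a function with paired documentation, a unimodal
# example is a function without it. Requires Python 3.9+ for ast.unparse.
import ast
import textwrap

source = textwrap.dedent('''
    def area(r):
        """Return the area of a circle with radius r."""
        return 3.14159 * r * r

    def increment(x):
        return x + 1
''')

bimodal, unimodal = [], []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.FunctionDef):
        doc = ast.get_docstring(node)
        code = ast.unparse(node)
        if doc:
            bimodal.append({"nl": doc, "code": code})  # NL-PL pair
        else:
            unimodal.append(code)                      # code only

print(len(bimodal), "bimodal,", len(unimodal), "unimodal")
```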

Key Points of CodeBERT

  • CodeBERT is the first large bimodal pre-trained model for natural language and programming language.
  • This model has outperformed Facebook’s RoBERTa model.
  • CodeBERT provides good initialisation for learning downstream tasks.
  • The model achieved state-of-the-art performance on downstream NL-PL applications such as natural language code search and code documentation generation.
  • CodeBERT performs better than baselines on almost all languages on both NL and PL probing.

Wrapping Up

According to the researchers, CodeBERT, the pre-trained model for programming and natural languages, performed better than previous pre-trained models on NL-PL probing and consistently outperformed RoBERTa, Facebook's optimised method for pre-training self-supervised NLP systems.

Last year, Microsoft open-sourced the Intelligent Conversation Engine: Code and Pre-trained Systems, or Icecaps, a toolkit that allows researchers and developers not only to imbue their chatbots with different personas but also to incorporate other natural language processing features that emphasise conversation modelling.

Read the paper here.


Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.