
Microsoft Introduces First Bimodal Pre-Trained Model for Natural Language Generation

Over the past few years, large pre-trained models such as BERT, ELMo and XLNet have brought significant improvements on almost every natural language processing (NLP) task. Microsoft has been researching NLP and natural language understanding (NLU) intensively for several years now.

The Natural Language Processing group at Microsoft focuses on developing efficient algorithms to process text and on designing and building software that analyses, understands, and generates the languages humans use naturally. Recently, researchers at the tech giant developed CodeBERT, a bimodal pre-trained model covering both natural language (NL) and programming languages (PL) such as Python, Java and JavaScript.

About the Model

CodeBERT captures the semantic connection between natural language and programming language and produces general-purpose representations that can broadly support NL-PL understanding tasks such as natural language code search and generation tasks such as code documentation generation. 

The model was evaluated by fine-tuning its parameters on two NL-PL applications: natural language code search and code documentation generation. It achieved state-of-the-art performance on both tasks. A sketch of the code-search setting follows below.
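To make the code-search task concrete, here is a minimal, hedged sketch that ranks candidate functions by cosine similarity between a query embedding and each code embedding. It assumes the publicly released microsoft/codebert-base checkpoint on the Hugging Face Hub and the transformers library; the paper fine-tunes the model on labelled NL-PL pairs, so this zero-shot, mean-pooled variant is only illustrative.

```python
# Illustrative natural language code search with off-the-shelf CodeBERT
# embeddings. Assumption: the microsoft/codebert-base checkpoint and the
# transformers/torch libraries; the paper's own setup fine-tunes on
# labelled NL-PL pairs rather than using raw embeddings like this.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

query = "read a file into a string"
candidates = [
    "def read_file(path): return open(path).read()",
    "def add(a, b): return a + b",
]

# Rank candidate functions by cosine similarity to the query.
q = embed(query)
scores = [torch.cosine_similarity(q, embed(c), dim=0).item() for c in candidates]
print(max(zip(scores, candidates)))
```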

How It Works

The researchers built the model on a multi-layer Transformer-based neural architecture, the design adopted by the majority of large pre-trained models. To make use of both bimodal instances of NL-PL pairs and the large amount of available unimodal code, CodeBERT is trained with a hybrid objective function that combines standard masked language modelling (MLM) with replaced token detection (RTD).

The MLM objective is to predict the original tokens that have been masked out, while the RTD objective, originally proposed for efficiently pre-training natural language models, trains the model to detect which tokens have been replaced by plausible alternatives. Here, bimodal data refers to parallel natural language-code pairs, while unimodal data refers to code without paired natural language text and to natural language without paired code. A toy sketch of the hybrid objective is given below.
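As a rough illustration of how the two objectives fit together, the toy PyTorch sketch below masks tokens, has a small generator predict the originals (MLM), swaps masked positions for the generator's guesses, and trains a discriminator to flag which tokens were replaced (RTD). The tiny networks and the joint loss are simplifying assumptions; in the paper, separate NL and code generators propose the replacements and the objectives exploit the bimodal and unimodal data differently.

```python
# Toy sketch of the hybrid MLM + RTD objective (simplified assumption:
# one shared generator and a joint loss; the paper uses separate NL and
# code generators and applies the objectives to different data).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, MASK_ID = 1000, 64, 3

generator = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Linear(HIDDEN, VOCAB))
discriminator = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Linear(HIDDEN, 1))

def hybrid_loss(tokens: torch.Tensor, mask_prob: float = 0.15) -> torch.Tensor:
    # 1) Randomly mask ~15% of the (NL and PL) tokens.
    mask = torch.rand(tokens.shape) < mask_prob
    corrupted = tokens.masked_fill(mask, MASK_ID)

    # 2) MLM: predict the original tokens at the masked positions.
    logits = generator(corrupted)
    mlm_loss = F.cross_entropy(logits[mask], tokens[mask])

    # 3) Fill masked positions with the generator's guesses
    #    (argmax here as a stand-in for sampling).
    replaced = torch.where(mask, logits.argmax(-1), tokens)

    # 4) RTD: label every token as original (0) or replaced (1).
    rtd_logits = discriminator(replaced).squeeze(-1)
    labels = (replaced != tokens).float()
    rtd_loss = F.binary_cross_entropy_with_logits(rtd_logits, labels)

    return mlm_loss + rtd_loss

batch = torch.randint(4, VOCAB, (2, 16))  # a toy batch of token ids
print(hybrid_loss(batch))
```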

The researchers used the same model architecture as RoBERTa-base, developed by Facebook. They trained CodeBERT on a large dataset drawn from GitHub code repositories in six programming languages: Python, Java, JavaScript, PHP, Ruby and Go. Each bimodal datapoint is an individual function with its paired documentation, and each unimodal datapoint is a function without paired documentation. The dataset includes 2.1M bimodal datapoints and 6.4M unimodal functions across the six languages. The sketch below shows how such a function-documentation pair might be encoded.
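As a hedged usage sketch, the snippet below encodes one such bimodal datapoint, a function together with its documentation, assuming the microsoft/codebert-base checkpoint on the Hugging Face Hub and the transformers library; the example strings are hypothetical.

```python
# Encoding a bimodal datapoint (documentation + function) with the
# released checkpoint. Assumption: microsoft/codebert-base on the
# Hugging Face Hub; the example strings are hypothetical.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

doc = "Return the larger of two numbers."          # NL half of the pair
code = "def max2(a, b): return a if a > b else b"  # PL half of the pair

# RoBERTa-style pair encoding: <s> documentation </s></s> code </s>
inputs = tokenizer(doc, code, return_tensors="pt")
outputs = model(**inputs)

# The first ([CLS]-position) vector is a common choice for a
# general-purpose representation of the NL-PL pair.
pair_vec = outputs.last_hidden_state[:, 0]
print(pair_vec.shape)  # torch.Size([1, 768])
```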

Key Points of CodeBERT

  • CodeBERT is the first large bimodal pre-trained model for natural language and programming language.
  • This model has outperformed Facebook’s RoBERTa model.
  • CodeBERT provides good initialisation for learning downstream tasks.
  • The model achieved state-of-the-art performance on both natural language code search and code documentation generation when fine-tuned for these NL-PL applications.
  • CodeBERT performs better than the baselines on almost all languages in both NL and PL probing (see the probing sketch after this list).
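For a flavour of what PL probing looks like in practice, the sketch below masks a token in a Python snippet and asks the model to recover it. It assumes the MLM variant of the checkpoint (microsoft/codebert-base-mlm) that was released alongside the base model.

```python
# Probing the masked language model: can it recover a masked operator?
# Assumption: the microsoft/codebert-base-mlm checkpoint on the Hub.
from transformers import pipeline

fill = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

# Mask the comparison operator in a tiny Python snippet and print the
# model's top three guesses with their scores.
for pred in fill("if a <mask> b: return a")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```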

Wrapping Up

According to the researchers, CodeBERT, the pre-trained model for programming and natural languages, performed better than previous pre-trained models on NL-PL probing and consistently outperformed RoBERTa, Facebook's optimised method for pre-training self-supervised NLP systems.

Last year, Microsoft open-sourced Icecaps (Intelligent Conversation Engine: Code and Pre-trained Systems), a toolkit that not only allows researchers and developers to imbue their chatbots with different personas but also incorporates other natural language processing features that emphasise conversation modelling.

Read the paper here.


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
