Microsoft Introduces First Bimodal Pre-Trained Model for Natural Language Generation

Over the past few years, large pre-trained models such as BERT, ELMo and XLNet have brought significant improvements on almost every natural language processing (NLP) task. Microsoft has been researching NLP and natural language understanding (NLU) for several years now.

The Natural Language Processing group at Microsoft focuses on developing efficient algorithms for processing text, with the aim of designing and building software that can analyse, understand, and generate the languages that humans use naturally. Recently, researchers at the tech giant developed CodeBERT, a bimodal pre-trained model for natural language (NL) and programming languages (PL) such as Python, Java and JavaScript.

About the Model

CodeBERT captures the semantic connection between natural language and programming language and produces general-purpose representations that can broadly support NL-PL understanding tasks such as natural language code search and generation tasks such as code documentation generation. 

The model was evaluated on two NL-PL applications, natural language code search and code documentation generation, by fine-tuning its parameters, and it achieved state-of-the-art performance on both.

How It Works

The researchers built the model on a multi-layer Transformer-based neural architecture, the design adopted in the majority of large pre-trained models. In order to make use of both bimodal instances of NL-PL pairs and large amounts of available unimodal code, CodeBERT is trained with a hybrid objective function, which combines standard masked language modelling (MLM) and replaced token detection (RTD).

The MLM objective is to predict the original tokens that have been masked out, while the RTD objective trains the model to detect which tokens have been replaced by plausible alternatives, allowing it to learn efficiently from unimodal data as well. Bimodal data here refers to parallel natural language-code pairs, and unimodal data refers to code without paired natural language text, and natural language without paired code.
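As a rough illustration, the two objectives can be sketched on toy token sequences. The sketch below is not CodeBERT's actual training pipeline (which operates on RoBERTa subword tokens and samples RTD replacements from learned generators); the whitespace tokens, the 15% corruption rate, and the uniform-random "generator" are simplifying assumptions.

```python
import random

# Toy sketch of the two pre-training objectives on whitespace tokens.
# Assumptions (not CodeBERT's real pipeline): a literal "[MASK]" token,
# a ~15% corruption rate, and a uniform-random "generator" for RTD
# instead of a learned one.

MASK_TOKEN = "[MASK]"

def mask_for_mlm(tokens, mask_prob=0.15, rng=None):
    """MLM: hide some tokens; labels hold the originals to predict."""
    rng = rng or random.Random(1)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK_TOKEN)
            labels.append(tok)        # the model must recover this token
        else:
            masked.append(tok)
            labels.append(None)       # position not scored by the MLM loss
    return masked, labels

def corrupt_for_rtd(tokens, vocab, replace_prob=0.15, rng=None):
    """RTD: swap in alternative tokens; labels mark original (0) or replaced (1)."""
    rng = rng or random.Random(1)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            corrupted.append(rng.choice(vocab))  # stand-in for a generator sample
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

code = "def add ( a , b ) : return a + b".split()
masked, mlm_labels = mask_for_mlm(code)
corrupted, rtd_labels = corrupt_for_rtd(code, vocab=["x", "mul", "-"])
```

Per the paper, MLM is applied to the bimodal NL-PL pairs, while RTD additionally exploits the unimodal code, and the two losses are combined into the hybrid objective.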

The researchers used the same model architecture as RoBERTa-base, developed by Facebook. They then trained CodeBERT on a large dataset drawn from GitHub code repositories in six programming languages: Python, Java, JavaScript, PHP, Ruby and Go. Each bimodal data point is an individual function with its paired documentation, and each unimodal data point is a function without paired documentation. The dataset includes 2.1 million bimodal data points and 6.4 million unimodal code functions across the six languages.

Key Points of CodeBERT

  • CodeBERT is the first large bimodal pre-trained model for natural language and programming language.
  • This model has outperformed Facebook’s RoBERTa model.
  • CodeBERT provides good initialisation for learning downstream tasks.
  • With fine-tuning, the model achieved state-of-the-art performance on NL-PL applications such as natural language code search and code documentation generation.
  • CodeBERT performs better than baselines on almost all languages on both NL and PL probing.

Wrapping Up

According to the researchers, CodeBERT, the pre-trained model for programming and natural languages, performed better than previous pre-trained models on NL-PL probing. It consistently outperformed RoBERTa, Facebook's optimised method for pretraining self-supervised NLP systems.

Last year, Microsoft open-sourced the Intelligent Conversation Engine: Code and Pre-trained Systems, or Icecaps, a toolkit that not only allows researchers and developers to imbue their chatbots with different personas but also lets them incorporate other natural language processing features that emphasise conversation modelling.

Read the paper here.

Ambika Choudhury