Facebook’s Answer To GPT-3, Textless NLP

GSLM uses the latest breakthroughs in representation learning, allowing it to work directly from raw audio signals, without any text or labels.

Facebook recently introduced a generative spoken language model (GSLM) called textless NLP. It is one of the first high-performance NLP models to break free of the dependence on text, unlike language models such as BERT, RoBERTa, and GPT-3, which are restricted to languages with very large text datasets.

GSLM uses the latest breakthroughs in representation learning, allowing it to work directly from raw audio signals, without any text or labels. According to Facebook, this opens the door to a new era of textless NLP applications for potentially every language spoken on Earth, even those with little or no text data. In addition, it enables the development of NLP models that incorporate the full range of expressivity of oral language.

Check out the code and pretrained models related to textless NLP on GitHub.

How is textless NLP different? 

In the past, connecting an NLP application to speech inputs meant that researchers first had to train an automatic speech recognition (ASR) system, a resource-intensive operation that introduces errors, encodes casual linguistic interactions poorly, and is available for just a handful of languages. With textless NLP, the researchers make ASR obsolete and work in an end-to-end fashion, from speech input to speech output.

The baseline GSLM consists of three parts (a minimal sketch of the full pipeline follows the architecture figure below): 

  • An encoder that converts speech into ‘discrete units’ that represent frequently recurring sounds in spoken language (speech-to-units, S2u) 
  • An autoregressive, unit-based language model that is trained to predict the next discrete unit based on what it has seen before (the ‘pseudo-text’)
  • A decoder that converts units back into speech (units-to-speech, u2S)
GSLM architecture (Source: Facebook)
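
To make the flow concrete, here is a minimal sketch of how the three stages compose. The function and object names (encoder, quantizer, unit_lm, decoder) are illustrative placeholders, not the actual API of Facebook's released code:

# Sketch of the GSLM pipeline: speech -> discrete units -> more units -> speech.
def speech_to_units(waveform, encoder, quantizer):
    """S2u: encode raw audio into a sequence of discrete unit IDs."""
    features = encoder(waveform)                # frame-level features from HuBERT/CPC/wav2vec 2.0
    units = [quantizer(f) for f in features]    # k-means cluster ID per frame
    return deduplicate(units)                   # collapse runs of identical units

def deduplicate(units):
    """Remove successive identical units, e.g. [5, 5, 9, 9, 2] -> [5, 9, 2]."""
    out = []
    for u in units:
        if not out or out[-1] != u:
            out.append(u)
    return out

def continue_units(prompt_units, unit_lm, max_new_units=100):
    """uLM: autoregressively extend the 'pseudo-text' unit sequence."""
    units = list(prompt_units)
    for _ in range(max_new_units):
        units.append(unit_lm.sample_next(units))  # hypothetical sampling call
    return units

def units_to_speech(units, decoder):
    """u2S: synthesise a waveform from the unit sequence (Tacotron 2-style)."""
    return decoder(units)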

Advantages of Textless NLP 

  • Textless NLP technology opens up the possibility of training models for any spoken language.
  • Because of the rich expressivity of oral languages, textless NLP may work better than using text for training models. The model can capture the full expressivity of oral languages, including nuances and intonations, encode irony, anger, and uncertainty, and use vocalizations like yawning, laughter, mouth clicks, etc. 
  • Researchers can train models on audio-first experiences like podcasts, radio shows, and social audio apps without annotation or training an ASR. This opens up the possibility of applications never seen before, including online expressive translation for multilingual video games, content search, and summarization of archived audio. 
  • It may help developmental psychologists and speech and language clinicians understand how infants and toddlers learn to speak, and how speech is affected by variations in the linguistic input available in different languages. 

In terms of use cases, Facebook researchers have developed the first audio-only speech-to-speech translation system. In the coming months, the researchers plan to address textless versions of standard NLP tasks, such as sentiment analysis, document retrieval, summarization, etc. 

Evaluating a Baseline Model 

In the research paper ‘On Generative Spoken Language Modeling from Raw Audio’, Facebook AI researchers tested three SOTA encoders, namely CPC, wav2vec 2.0, and HuBERT, followed by k-means clustering and deduplication (removing successive identical units). They also used a standard causal transformer for language modelling and Tacotron 2, a standard text-to-speech system, as the decoder.
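
As a rough illustration of the quantization step, the sketch below clusters encoder frame features with k-means and removes successive duplicates. It assumes scikit-learn's KMeans and uses random features standing in for real encoder outputs; the released models ship pretrained quantizers, so this is purely illustrative:

import numpy as np
from sklearn.cluster import KMeans

def fit_quantizer(frame_features, n_units=100):
    """Fit a discrete 'dictionary' of units (the paper tries sizes 50, 100 and 200)."""
    return KMeans(n_clusters=n_units, random_state=0).fit(frame_features)

def quantize(frame_features, quantizer):
    """Map each frame to its nearest cluster ID, then deduplicate runs."""
    unit_ids = quantizer.predict(frame_features)
    return [int(u) for i, u in enumerate(unit_ids) if i == 0 or u != unit_ids[i - 1]]

fake_features = np.random.randn(500, 768)   # stand-in for HuBERT-style frame features
km = fit_quantizer(fake_features, n_units=100)
print(quantize(fake_features, km)[:10])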

Further, the researchers trained their encoder and unit-based language model on 6,000 hours of Libri-Light and LibriSpeech (a large collection of audiobooks), and the decoder on LJSpeech and LibriSpeech. The entire stack was trained with self-supervised learning from raw audio, with no text or labels; the language model and text-to-speech components were trained on pseudo-text derived from that raw audio. 

Comparing these different models, the researchers found that they could not analyze the generated pseudo-text directly because the units do not map one-to-one onto letters or phonemes. So instead, they used a pretrained ASR system to convert the generated audio back to text. This enabled them to measure the intelligibility of the resynthesized audio using phoneme error rate (PER), and the linguistic quality and diversity of the conditionally or unconditionally generated audio using an area-under-the-curve (AUC) metric. 

PER compares the phonemes of the original input with the phonemes transcribed by the ASR. AUC, on the other hand, is obtained by sampling sentences across a range of ‘temperatures’, where temperature controls how inventive the language model is allowed to be: the higher the temperature, the more variable the output; the lower the temperature, the more rigid and repetitive it becomes. 
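
As a rough illustration of these two quantities (not the exact evaluation code), PER can be computed as a normalised edit distance between phoneme sequences, and temperature simply rescales the language model's output logits before sampling:

import math, random

def phoneme_error_rate(reference, hypothesis):
    """Levenshtein distance between phoneme lists, divided by reference length."""
    m, n = len(reference), len(hypothesis)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[m][n] / max(m, 1)

def sample_with_temperature(logits, temperature=1.0):
    """Divide logits by the temperature, softmax, then sample one unit ID."""
    scaled = [l / temperature for l in logits]
    max_l = max(scaled)
    exps = [math.exp(l - max_l) for l in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

print(phoneme_error_rate(["HH", "AH", "L", "OW"], ["HH", "AH", "L", "UW"]))  # 0.25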

Two evaluation metrics, PER and AUC (Source: Facebook)

Observations 

Facebook researchers said that they discovered several things while performing these measurements:

  1. It matters how many ‘discrete units’ the quantizers use: a higher number results in better outcomes at the acoustic level.
  2. There is a similar trend at the linguistic level, but using too many units in certain areas becomes detrimental. 
  3. Different encoders produced very different outcomes (HuBERT provided the best overall result).
  4. Automatic generation metrics correlate well with human judgments.
  5. These metrics are well predicted by faster-to-compute zero-shot metrics from the Zero Resource Speech Benchmark. 

For instance, the automatic and human metrics (lower is better) are shown below for the three encoders (CPC, wav2vec 2.0 and HuBERT) and a LogMel baseline, each quantized using k-means with three dictionary sizes (50, 100, 200). 

Check out more samples here.  

(Source: Facebook)

Additional research 

In addition to this, in the paper ‘Text-Free Prosody-Aware Generative Spoken Language Modeling’, Facebook researchers presented a prosody-aware generative spoken language model (pGSLM). This new model comprises a multi-stream transformer language model (MS-TLM) of speech, represented as discovered-unit and prosodic-feature streams, and an adapted HiFi-GAN model that converts MS-TLM outputs to waveforms. 
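
As a minimal PyTorch-style sketch of the multi-stream idea (an illustration of the description above, not the authors' implementation), the toy model below reads a unit stream alongside two prosodic features per unit (duration and log-F0) and predicts all three streams for the next step:

import torch
import torch.nn as nn

class MultiStreamTLM(nn.Module):
    """Toy multi-stream transformer LM over units plus prosodic features."""
    def __init__(self, n_units=100, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)
        self.prosody_proj = nn.Linear(2, d_model)     # duration + log-F0 per unit
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.unit_head = nn.Linear(d_model, n_units)  # next-unit logits
        self.duration_head = nn.Linear(d_model, 1)    # next-unit duration
        self.pitch_head = nn.Linear(d_model, 1)       # next-unit log-F0

    def forward(self, units, prosody):
        # units: (batch, seq) int IDs; prosody: (batch, seq, 2) floats
        x = self.unit_emb(units) + self.prosody_proj(prosody)
        seq_len = units.size(1)
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.transformer(x, mask=mask)            # causal self-attention
        return self.unit_head(h), self.duration_head(h), self.pitch_head(h)

model = MultiStreamTLM()
unit_logits, dur, f0 = model(torch.randint(0, 100, (1, 12)), torch.randn(1, 12, 2))
print(unit_logits.shape, dur.shape, f0.shape)   # (1, 12, 100) (1, 12, 1) (1, 12, 1)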

In this study, the researchers devised a series of metrics for prosody modelling and generation, reused metrics from GSLM for content modelling, and generated natural, meaningful, and coherent speech given a spoken prompt. Check out the audio samples here.

Wrapping up 

Facebook researchers said they would continue to apply GSLM to casual and spontaneous speech and dialogue datasets, where text-based methods and ASR struggle the most. In addition, the team believes GSLM can be an effective method of pretraining for downstream tasks that have little labelled or annotated data available, such as spoken summarization, information retrieval, and sentiment analysis.

“Our goal is to leverage the tremendous advantages in expressivity and subtlety of meaning that oral language offers over written languages, which opens up an almost infinite collection of potential data for understanding human thought,” said the team. 

PS: The story was written using a keyboard.

Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.