Facebook AI Releases XLS-R, Self-Supervised Model For Speech Tasks

Facebook recently announced the release of XLS-R, a new self-supervised model for a variety of speech tasks. XLS-R substantively improves upon previous multilingual models by training on nearly ten times more public data in more than twice as many languages. 

Trained on more than 436,000 hours of publicly available speech recordings, XLS-R is based on wav2vec 2.0, Facebook AI’s approach to self-supervised learning of speech representations. That is nearly ten times more hours of speech than were used for XLSR-53, the best previous model Facebook released last year.

Drawing on speech data from a range of sources, from parliamentary proceedings to audiobooks, XLS-R covers 128 languages, nearly two and a half times more than its predecessor.
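
Pretrained XLS-R checkpoints were also published on the Hugging Face Hub. As a minimal sketch, assuming the torch and transformers libraries and the "facebook/wav2vec2-xls-r-300m" checkpoint name, the pretrained model can be used as a wav2vec 2.0-style encoder that turns raw audio into contextual representations:

```python
# A minimal sketch (not Facebook's official example): extracting
# self-supervised speech representations with a pretrained XLS-R
# checkpoint. Assumes torch and transformers are installed and that
# "facebook/wav2vec2-xls-r-300m" is the published checkpoint ID.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-xls-r-300m"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

# XLS-R expects 16 kHz mono audio; one second of silence stands in
# for a real recording here.
waveform = torch.zeros(16000).numpy()

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual representations of shape (batch, frames, hidden_size),
# which can be fine-tuned for recognition, translation or other tasks.
print(outputs.last_hidden_state.shape)
```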

XLS-R was evaluated on four major multilingual speech recognition benchmarks, where it outperformed prior work on most of the 37 languages tested: five languages of BABEL, ten languages of CommonVoice, eight languages of MLS, and the 14 languages of VoxPopuli.

Image Source: Facebook AI
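
In practice, the pretrained encoder is adapted to an individual language by adding a CTC classification head and fine-tuning it on labelled transcriptions. The sketch below illustrates that setup, assuming the transformers library, the facebook/wav2vec2-xls-r-300m checkpoint, and a made-up character vocabulary for the target language:

```python
# Sketch of preparing CTC fine-tuning for speech recognition on top of
# XLS-R. The vocabulary and checkpoint ID are illustrative assumptions.
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

# Hypothetical character-level vocabulary for the target language.
vocab = {"<pad>": 0, "<unk>": 1, "|": 2, "a": 3, "b": 4, "c": 5}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="<unk>", pad_token="<pad>", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load the pretrained encoder and attach a freshly initialised CTC head
# sized to the target-language vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",  # assumed checkpoint ID
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
# The convolutional feature encoder is usually kept frozen during fine-tuning.
model.freeze_feature_encoder()
```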

The model was also evaluated on speech translation, in which audio recordings are translated directly into another language. Because Facebook is interested in models that can perform multiple tasks, it fine-tuned XLS-R simultaneously on several translation directions of the CoVoST-2 benchmark. The result is a single model that can translate between English and up to 21 other languages.

Image Source: Facebook AI
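
Fine-tuned XLS-R translation checkpoints were also made available on the Hugging Face Hub. The snippet below is a hedged sketch of running one through the transformers automatic-speech-recognition pipeline, which also handles encoder-decoder speech models; the checkpoint ID "facebook/wav2vec2-xls-r-300m-21-to-en" and the audio file name are assumptions for illustration:

```python
# A hedged sketch of speech-to-English translation with a fine-tuned
# XLS-R encoder-decoder checkpoint. The checkpoint ID and the audio
# file name are illustrative assumptions, not verified values.
from transformers import pipeline

translator = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-xls-r-300m-21-to-en",  # assumed checkpoint ID
)

# "speech.wav" stands in for a 16 kHz recording in one of the source
# languages; the pipeline returns the English translation as text.
result = translator("speech.wav")
print(result["text"])
```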

The model delivers very large improvements on low-resource language directions such as Indonesian-to-English translation, where accuracy in terms of BLEU doubles on average, a significant step forward in translating spoken language. An increase in BLEU means the automatic translations have more overlap with translations produced by a human given the same task.
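
BLEU itself is a corpus-level n-gram overlap score and can be computed with a library such as sacrebleu; the toy example below, with made-up sentences, just shows the mechanics:

```python
# Toy illustration of the BLEU metric with the sacrebleu library.
# Higher scores mean more n-gram overlap with the human references.
import sacrebleu

hypotheses = ["the cat sat on the mat", "he went to the market yesterday"]
references = [["the cat sat on the mat", "he went to the market yesterday"]]

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.1f}")  # identical output scores 100.0
```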

XLS-R demonstrates that scaling cross-lingual pretraining can further improve performance for low-resource languages. It improves speech recognition and more than doubles the accuracy of foreign-to-English speech translation. According to Facebook AI, XLS-R is an important step toward a single model that can understand speech in many different languages, and it is the largest effort to date to leverage public data for multilingual pretraining.
