Facebook AI Releases XLS-R, Self-Supervised Model For Speech Tasks

XLS-R substantially improves upon previous multilingual models by training on nearly ten times more public data in more than twice as many languages.

Facebook recently announced the release of XLS-R, a new self-supervised model for a variety of speech tasks.

XLS-R is based on wav2vec 2.0, Facebook AI's approach to self-supervised learning of speech representations. It was trained on more than 436,000 hours of publicly available speech recordings, nearly ten times more than XLSR-53, the best previous model Facebook released last year.

Utilizing speech data from sources ranging from parliamentary proceedings to audiobooks, the model covers 128 languages, nearly two and a half times as many as its predecessor.
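XLS-R checkpoints are also distributed through the Hugging Face model hub, so the pretrained network can be probed directly. Below is a minimal sketch of extracting self-supervised speech representations with the transformers library; the checkpoint name and the dummy waveform are assumptions for illustration, and real use requires 16 kHz mono audio.

# Minimal sketch: extracting speech representations from a pretrained XLS-R
# checkpoint with Hugging Face transformers. The checkpoint name and dummy
# waveform are assumptions for illustration only.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-xls-r-300m"  # 300M-parameter XLS-R variant
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

waveform = torch.zeros(16000).numpy()  # 1 second of silence as a placeholder

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level representations that downstream ASR or translation heads fine-tune.
print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)

For speech recognition, the same checkpoint is typically loaded through Wav2Vec2ForCTC and fine-tuned on labelled transcripts.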


XLS-R was evaluated on four major multilingual speech recognition benchmarks, where it outperformed prior work on most of the 37 languages tested: five languages of BABEL, ten languages of CommonVoice, eight languages of MLS, and 14 languages of VoxPopuli.


The model was also evaluated for speech translation, where audio recordings were directly translated into another language. Facebook has always been interested in models that can perform multiple tasks, so it simultaneously fine-tuned XLS-R on several different translation directions of the CoVoST-2 benchmark. The result is a single model that can translate between English and up to 21 other languages.
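The CoVoST-2 fine-tuned translation models were released alongside the pretrained checkpoints. A hedged sketch of running one of them with transformers follows; the checkpoint identifier, processor choice, and dummy waveform are assumptions for illustration, so the exact names should be verified on the model hub.

# Hedged sketch: speech-to-English translation with a CoVoST-2 fine-tuned
# XLS-R checkpoint. The model name below is an assumption based on the
# released "21-to-en" direction; verify the identifier on the model hub.
import torch
from transformers import AutoProcessor, SpeechEncoderDecoderModel

checkpoint = "facebook/wav2vec2-xls-r-300m-21-to-en"  # assumed identifier
processor = AutoProcessor.from_pretrained(checkpoint)
model = SpeechEncoderDecoderModel.from_pretrained(checkpoint)

waveform = torch.zeros(16000).numpy()  # placeholder for 16 kHz foreign-language speech
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

# The wav2vec 2.0 encoder consumes raw audio; the decoder generates English text.
generated_ids = model.generate(inputs["input_values"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))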


The model delivers large improvements on low-resource language directions such as Indonesian-to-English translation, where accuracy in terms of BLEU roughly doubles on average, a significant step forward for translation of spoken language. A higher BLEU score means the automatic translations overlap more with translations produced by a human tackling the same task.
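To make the metric concrete, the short sketch below scores a made-up system translation against a made-up human reference using the sacrebleu library; both sentences are invented purely for illustration.

# Toy illustration of the BLEU metric with sacrebleu; both sentences are made up.
import sacrebleu

hypotheses = ["the cat sat on the mat"]              # system translations
references = [["the cat sat on the mat yesterday"]]  # one human reference per sentence

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.1f}")  # higher score = more n-gram overlap with the reference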

XLS-R demonstrates that scaling cross-lingual pretraining can further improve performance for low-resource languages. It improves speech recognition performance and more than doubles the accuracy of foreign-to-English speech translation. According to Facebook AI, XLS-R is an important step toward a single model that can understand speech in many different languages, and the largest effort to date to leverage public data for multilingual pretraining.

Victor Dey