
Meta AI’s MuAViC Sets New Benchmark for Highly Accurate Speech Translation

MuAViC can deliver superior speech translation in challenging, noisy environments.

Meta AI has unveiled MuAViC (Multilingual Audio-Visual Corpus), a new benchmark that incorporates audio-visual learning to achieve highly accurate speech translation.

Building on its previous AI models such as AV-HuBERT and RAVen, which use visual information to improve English speech recognition, Meta AI has used MuAViC to train its AV-HuBERT model to deliver superior speech translation in challenging, noisy environments.

The model handles noise gracefully, relying more heavily on the visual modality when the audio modality is distorted. The models were tested in both noisy and noise-free environments against a top-performing model on speech recognition and X-En speech translation tasks.
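To picture the kind of noisy test condition this implies, here is a minimal sketch of mixing interfering audio into clean speech at a chosen signal-to-noise ratio. It illustrates the general technique only and is not the paper's exact evaluation setup.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the mixture has the requested SNR, then add it."""
    noise = np.resize(noise, speech.shape)           # loop or trim the noise clip
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12            # avoid division by zero
    # SNR(dB) = 10 * log10(p_speech / p_noise_scaled); solve for the scale.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Lower `snr_db` values produce harsher conditions, which is where the visual modality is expected to carry more of the load.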


Training Process

The shortage of adequate training data previously hindered the exploration of audio-visual understanding for speech translation. Compared to audio data alone, gathering and processing audio-video data requires more resources.

With about 1,200 hours of transcribed data across nine languages, MuAViC is the most extensive multilingual benchmark for audio-visual speech recognition to date.

For English speech, the team repurposed audio-visual data from the LRS3 dataset and aligned it with a machine translation corpus using a text-matching algorithm. Matching examples to their corresponding target sentences in the machine translation corpus yielded translation labels, with exact text matching used for the development and test sets. For training examples without a match, a machine translation model supplied pseudo-translation labels.
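The matching step can be pictured with a short sketch: normalize each transcript, look it up in the translation corpus, and fall back to machine translation when no match exists. The data schema and the `translate()` helper below are hypothetical stand-ins, not Meta's actual pipeline.

```python
def normalize(text: str) -> str:
    """Lowercase and strip punctuation so near-identical sentences match."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

def label_examples(examples, mt_corpus, translate):
    """Attach a translation label to every transcribed example.

    examples:  list of dicts, each with a "transcript" field (hypothetical schema)
    mt_corpus: dict mapping source sentences to target translations
    translate: fallback machine-translation function for pseudo-labels
    """
    index = {normalize(src): tgt for src, tgt in mt_corpus.items()}
    for ex in examples:
        key = normalize(ex["transcript"])
        if key in index:
            ex["translation"] = index[key]                   # exact text match
        else:
            ex["translation"] = translate(ex["transcript"])  # MT pseudo-label
    return examples
```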

For non-English speech, Meta used the audio-only data, transcriptions, and text translations from the speech translation dataset. The team obtained video tracks of the original recordings and aligned the processed video data with the audio data to create audio-visual data. Although all of the audio data is transcribed, only a subset of it has human translations; the team used the same machine translation model as above to create pseudo-translation labels for the rest.
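Aligning video with pre-segmented audio might look roughly like the following, assuming each segment carries start and end timestamps from the speech translation dataset. The file paths and fields are illustrative, and ffmpeg stands in for whatever processing the team actually used.

```python
import subprocess

def extract_clip(video_path: str, start: float, end: float, out_path: str) -> None:
    """Cut one silent video clip aligned to an existing audio segment."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", str(start),        # seek to the segment start (seconds)
         "-i", video_path,
         "-t", str(end - start),   # keep exactly the segment duration
         "-an",                    # drop audio; the aligned audio already exists
         out_path],
        check=True,
    )

segments = [{"video": "talk001.mp4", "start": 12.4, "end": 17.9}]
for i, seg in enumerate(segments):
    extract_clip(seg["video"], seg["start"], seg["end"], f"clip_{i:04d}.mp4")
```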

The team used Meta’s AV-HuBERT architecture to build speech recognition and speech translation models that process audio and video data end-to-end. Given paired audio and video inputs, the model fuses their representations into a single shared space that can serve either task. Even if one modality is absent, AV-HuBERT can still process the available data, albeit less effectively.
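A simplified sketch of this fusion idea is shown below; it is not Meta’s actual AV-HuBERT code, and all dimensions are illustrative. Each modality is projected into a shared space, and a missing modality is zero-filled so the model still produces usable features from whatever input is available.

```python
import torch
import torch.nn as nn

class AVFusion(nn.Module):
    """Project audio and video frames into one shared space and fuse them."""

    def __init__(self, audio_dim=104, video_dim=512, hidden=768):
        super().__init__()
        self.hidden = hidden
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)

    def forward(self, audio=None, video=None):
        # At least one modality must be present; a missing one is zero-filled,
        # mimicking how the model still runs when an input stream is absent.
        assert audio is not None or video is not None
        ref = audio if audio is not None else video
        zeros = torch.zeros(ref.size(0), ref.size(1), self.hidden, device=ref.device)
        a = self.audio_proj(audio) if audio is not None else zeros
        v = self.video_proj(video) if video is not None else zeros
        return a + v  # fused features usable for recognition or translation

fusion = AVFusion()
feats = fusion(audio=torch.randn(2, 50, 104))  # video absent, still works
```

During training, randomly dropping one modality in this fashion is what teaches such a model to lean on video when audio is unreliable.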

In other news, Meta last week released LLaMA, a set of foundation language models ranging from 7B to 65B parameters, which was leaked along with its weights and is now available to download through torrents. Christopher King, a GitHub user, submitted a pull request to the LLaMA GitHub page that included a torrent link to the open model. LLaMA-13B surpasses OpenAI’s GPT-3 (175B) while being over ten times smaller, and LLaMA-65B is comparable to DeepMind’s Chinchilla-70B and Google’s PaLM-540B.

PS: The story was written using a keyboard.

Shritama Saha

Shritama (she/her) is a technology journalist at AIM who is passionate about exploring the influence of AI on different domains, including fashion, healthcare, and banking.