Meta AI has partnered with the University of Texas at Austin to open source three new audio-visual perception models that can help improve AR/VR experiences. The release is another step in Meta's broader push toward the metaverse.
The first model, the Visual Acoustic Matching model or AViTAR, transforms the acoustics of an audio clip so that it sounds as if it were recorded in the space shown in a target image. For instance, a clip that sounds like it was recorded in an empty room could be matched with the image of a crowded restaurant, producing audio that sounds as if it had been captured in that restaurant.
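At a high level, visual acoustic matching conditions an audio transformation on features extracted from the target image. The sketch below only illustrates that idea; the module and dimensions (AcousticMatcher, n_mels, img_feat_dim) are hypothetical placeholders, not Meta's released AViTAR code.

```python
# Minimal sketch of the visual acoustic matching idea: re-shape an audio clip's
# acoustics conditioned on features of the image of the target space.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class AcousticMatcher(nn.Module):
    def __init__(self, n_mels=80, img_feat_dim=512, hidden=256):
        super().__init__()
        self.img_proj = nn.Linear(img_feat_dim, hidden)   # embed target-room visuals
        self.audio_proj = nn.Linear(n_mels, hidden)       # embed source audio frames
        self.decoder = nn.Linear(hidden, n_mels)          # predict re-acoustified frames

    def forward(self, audio_mel, img_feat):
        # audio_mel: (batch, time, n_mels) spectrogram of the source clip
        # img_feat:  (batch, img_feat_dim) features of the target-space image
        cond = self.img_proj(img_feat).unsqueeze(1)       # (batch, 1, hidden)
        x = self.audio_proj(audio_mel) + cond             # fuse audio with visual context
        return self.decoder(torch.relu(x))                # audio "placed" in the pictured room

matcher = AcousticMatcher()
dry_audio = torch.randn(1, 200, 80)     # e.g. speech recorded in a quiet, empty room
restaurant_img = torch.randn(1, 512)    # features of a crowded-restaurant photo
matched = matcher(dry_audio, restaurant_img)
```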
The second model, Visually-Informed Dereverberation or VIDA, performs roughly the opposite function: as the name suggests, it removes reverberation rather than adding it. VIDA uses the observed sounds together with visual cues about the surrounding space to strip reverberation from recorded speech, which improves speech quality and, in turn, automatic speech recognition.
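To ground what "removing reverberation" means: a reverberant recording can be approximated as the clean signal convolved with the room's impulse response, and dereverberation tries to invert that process. The toy snippet below simulates only the forward effect with made-up signals; it is not VIDA's code.

```python
# Reverberation is, to a first approximation, the clean signal convolved with the
# room impulse response (RIR). Dereverberation models estimate the clean signal back.
import numpy as np

sr = 16000
clean = np.random.randn(sr)                    # 1 s of "dry" speech (placeholder signal)
rir = np.exp(-np.linspace(0, 8, sr // 2))      # toy decaying envelope of a room's echo tail
rir *= np.random.randn(sr // 2)                # randomized reflections
reverberant = np.convolve(clean, rir)          # what a microphone in that room would hear
# A VIDA-style model estimates `clean` from `reverberant` plus an image of the room.
```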
The third model, VisualVoice, separates speech from background noise using audio-visual cues.
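Separation models of this kind typically predict a time-frequency mask over the noisy mixture, with visual cues such as the speaker's face or lip motion guiding which parts of the spectrogram belong to the target voice. The snippet below illustrates only the masking step with random tensors; it is not VisualVoice's implementation.

```python
# Toy illustration of time-frequency masking for audio-visual speech separation.
# In practice the mask would be predicted from joint audio-visual features.
import torch

mixture = torch.rand(1, 200, 257)      # spectrogram of speech mixed with background noise
mask = torch.rand(1, 200, 257)         # placeholder for a predicted 0..1 mask
target_speech = mixture * mask         # bins kept where the mask is close to 1
background = mixture * (1 - mask)      # the complementary residual
```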
While considerable research has gone into creating better visuals, Meta AI also intends to make sound just as immersive for users. “Getting spatial audio right is key to delivering a realistic sense of presence in the metaverse,” said Mark Zuckerberg, founder and chief executive of the company. “If you’re at a concert or just talking with friends around a virtual table, a realistic sense of where sound is coming from makes you feel like you’re actually there.”