Yesterday’s science fiction is today’s invention.
Babel Fish, the “oddest thing in the universe”, is a species of fish featured in Douglas Adams’s magnum opus, The Hitchhiker’s Guide to the Galaxy. Worn as an earpiece, the fish instantly translates any language in existence. The Babel Fish is no longer the stuff of dreams: thanks to advances in AI, especially in natural language processing (NLP), many tech giants are working towards a universal translator.
To that end, the Universal Speech Translator was a dominant theme at Meta’s Inside the Lab event on February 23.
Meta’s universal language translator
“Eliminating language barriers would be profound, making it possible for billions of people to access information online in their native or preferred languages. Advances in MT won’t just help those people who don’t speak one of the languages that dominate the internet today; they’ll also fundamentally change the way people in the world connect and share ideas,” according to Meta’s official blog post.
The company is launching two new projects:
No Language Left Behind: The team will build an advanced AI model that can learn a language from only a few training examples and power expert-quality translations in hundreds of languages.
Universal Speech Translator: The system will translate speech directly in real time, without an intermediate conversion to text.
In 2020, Facebook introduced M2M-100, which the company claims is the first multilingual machine translation model that can translate between any pair of 100 languages without using English as an intermediate language. The model was trained on over 2,000 language directions, ten times more than earlier SOTA English-centric multilingual models.
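To put that figure in perspective: with n languages there are n × (n − 1) ordered translation directions. A back-of-the-envelope check (plain arithmetic, nothing specific to M2M-100’s internals) shows why direct training data for even 2,000 directions is notable:

```python
def translation_directions(n_languages: int) -> int:
    """Number of ordered (source, target) pairs among n languages."""
    return n_languages * (n_languages - 1)

# 100 languages yield 9,900 possible directions; M2M-100's training data
# reportedly covered over 2,000 of them directly, versus English-centric
# models that only ever see pairs involving English (2 * 99 = 198).
print(translation_directions(100))  # 9900
```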
In 2019, Google introduced Translatotron, a direct speech-to-speech translation model. It uses a sequence-to-sequence architecture and does not rely on an intermediate text representation. The direct approach offers faster inference, avoids compounding errors between recognition and translation, and makes it easier to retain the original speaker’s voice and to leave words that need no translation, such as names and proper nouns, unchanged.
The next iteration, Translatotron 2, addressed the original model’s shortcomings, such as low-fidelity speech output and weaker performance than strong baseline cascade speech-to-speech translation systems. Translatotron 2 outperformed its predecessor in both translation quality and naturalness of speech.
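The difference between the cascade baseline and the direct approach can be sketched with a toy pipeline. Every function below is a hypothetical stand-in (real systems operate on audio features, not strings); the point is only that a cascade bakes the recogniser’s mistake into everything downstream, while a direct model has no transcription stage to go wrong:

```python
def asr(audio: str) -> str:
    """Toy speech recogniser: drops the last word to mimic a transcription error."""
    return " ".join(audio.split()[:-1])

def mt(text: str) -> str:
    """Toy text translator: uppercasing stands in for translation."""
    return text.upper()

def tts(text: str) -> str:
    """Toy speech synthesiser."""
    return f"[audio:{text}]"

def cascade(audio: str) -> str:
    # ASR -> MT -> TTS: the ASR error compounds through later stages.
    return tts(mt(asr(audio)))

def direct(audio: str) -> str:
    # One sequence-to-sequence model, no intermediate text.
    return f"[audio:{audio.upper()}]"

print(cascade("hello there friend"))  # [audio:HELLO THERE]
print(direct("hello there friend"))   # [audio:HELLO THERE FRIEND]
```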
In 2017, the tech giant introduced Pixel Buds, Bluetooth earbuds offering instant translation between 40 languages. Adam Champy, then a Google product manager, wrote in a company blog post: “It’s like you’ve got your own personal translator with you everywhere you go. Say you’re in Little Italy, and you want to order your pasta like a pro. All you have to do is hold down on the right earbud and say, ‘Help me speak Italian’.”
In 2018, Baidu developed an AI system called Simultaneous Translation with Anticipation and Controllable Latency (STACL). Calling it a breakthrough in natural language processing, Baidu said STACL could translate between two languages simultaneously: the tool begins translating a few seconds into the speaker’s speech and finishes seconds after the speaker stops.
Based on a ‘wait-k’ policy, STACL is modelled after human interpreters, who begin translating before the speaker has finished. Earlier, Baidu had launched SwiftScribe, a web app powered by its DeepSpeech platform, and TalkType, a dictation-based Android keyboard.
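The wait-k idea itself is simple: read the first k source words, then alternate between emitting one target word and reading one more source word, flushing the remainder once the speaker stops. A minimal sketch, assuming a hypothetical word-level `translate_word` function and ignoring the word-reordering a real model must handle:

```python
def wait_k_translate(source_words, k, translate_word):
    """Toy wait-k policy: stay exactly k words behind the speaker,
    then flush the remaining k - 1 translations at the end."""
    output = []
    for i, _ in enumerate(source_words, start=1):  # i = words read so far
        if i >= k:
            # Emit the translation lagging k words behind the input.
            output.append(translate_word(source_words[i - k]))
    # Speaker has finished: translate whatever is still buffered.
    output.extend(translate_word(w) for w in source_words[len(source_words) - k + 1:])
    return output

# With k=2, output starts after the second source word arrives.
print(wait_k_translate(["ich", "sehe", "dich"], 2, str.upper))
```

Larger k trades latency for context (more of the sentence seen before committing to each word), which is the “controllable latency” in STACL’s name.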
In 2016, Microsoft announced the ‘world’s first’ personal universal translator. Microsoft Translator lets people hold face-to-face conversations while the tool translates their speech to text in real time. The application can also translate multiple languages at the same time.
As of March 2022, the tool supports 105 languages and 12 speech translation systems, which power Skype Translator, the Microsoft Translator apps for iOS and Android, and others.
End game: Babel Fish
In 2021, Turing Award winner Professor Raj Reddy wagered that a digital Babel Fish, one that translates “all the languages of the world”, would materialise within ten years. His ‘cockeyed techno-optimism’ didn’t sit well with a section of the community. Prof Reddy is a pioneer in computer speech recognition, human-computer interaction, and robotics, and was instrumental in developing continuous speech recognition systems such as Hearsay I, Hearsay II, Harpy, and Dragon.