OpenAI's Whisper was released on Hugging Face Transformers for TensorFlow on Wednesday. With this addition, users can run audio transcription and translation in just a few lines of code. The model, which was trained on 680,000 hours of audio, is also XLA-compatible.
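As a minimal sketch of what those few lines look like, the snippet below uses the TFWhisperForConditionalGeneration and WhisperProcessor classes from Transformers; the tiny checkpoint and the dummy LibriSpeech sample are chosen here only to keep the example small and runnable:

```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Load the processor (feature extractor + tokenizer) and the TensorFlow model.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# A small test clip; any 1-D float waveform sampled at 16 kHz works the same way.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]

# Convert raw audio to log-mel input features, generate token IDs, decode to text.
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```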
In a blog post last month, OpenAI introduced the multilingual automatic speech recognition system, which approaches human-level robustness on English speech recognition. OpenAI stated that the model's high accuracy and ease of use will enable developers to add voice interfaces to a wider set of applications.
With the help of Hugging Face Inference Endpoints, users can now deploy Whisper as their own speech-transcription service. Users can pick a cloud, region, and instance type and start transcribing audio in seconds on secure, autoscaling production infrastructure.
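Once an endpoint is deployed, it can be queried over plain HTTPS. The sketch below assumes a hypothetical endpoint URL and access token; the real values are shown in the endpoint's dashboard after deployment:

```python
import requests

# Hypothetical placeholders: copy the real URL and token from your endpoint dashboard.
API_URL = "https://my-whisper-endpoint.endpoints.huggingface.cloud"
HEADERS = {
    "Authorization": "Bearer hf_xxx",  # your Hugging Face access token
    "Content-Type": "audio/flac",
}

# Send raw audio bytes; the endpoint returns the transcription as JSON.
with open("sample.flac", "rb") as f:
    response = requests.post(API_URL, headers=HEADERS, data=f.read())

print(response.json())  # e.g. {"text": "..."}
```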
Amy Roberts, a machine learning engineer at Hugging Face, announced the news in a LinkedIn post.
A TensorFlow-only perk for Whisper is that users can apply XLA-accelerated generation to speed things up. According to the notebook from João Gante, a member of the open-source team at Hugging Face:
“Whisper is an encoder-decoder auto-regressive model which was trained on audio translation and transcription tasks. Given audio data, the model is able to generate the corresponding text. A log-mel spectrogram is extracted from a raw audio using a Processor, before it is passed to the encoder. The decoder inputs are text tokens, and special tokens such as “<|startoftranscript|>”, “<|transcribe|>” and “<|en|>” are used to specify the desired task and the language of the audio.”
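A minimal sketch of this, following the tf.function pattern for XLA compilation (jit_compile=True): the forced decoder IDs below illustrate how the special task and language tokens mentioned in the quote are set, and the tiny checkpoint is again just a placeholder:

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Special tokens such as <|en|> and <|transcribe|> are supplied as forced decoder IDs.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")

# Compile generation with XLA; the first call traces and compiles (slow),
# subsequent calls with the same input shapes run much faster.
xla_generate = tf.function(model.generate, jit_compile=True)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="tf")

generated_ids = xla_generate(inputs.input_features, forced_decoder_ids=forced_decoder_ids)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```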
To learn more about XLA-accelerated Whisper, check here.