Recently, researchers at DeepMind proposed EATS, an end-to-end generative model for text-to-speech (TTS) trained adversarially. EATS operates on either pure text or raw (i.e. temporally unaligned) phoneme input sequences and produces raw speech waveforms as output.
Research on text-to-speech systems has shown impressive growth over the past few years. Artificial speech synthesis, commonly known as text-to-speech (TTS), has a number of applications in domains like technology interfaces, accessibility, and entertainment, among others.
A text-to-speech (TTS) system takes natural language text as input and generates synthetic, human-like speech as output. Typical TTS pipelines consist of several stages designed and trained independently, for instance text normalisation, aligned linguistic featurisation, and raw audio waveform synthesis.
In our new paper [https://t.co/KPLEtjlVr3] we propose EATS: End-to-End Adversarial Text-to-Speech, which allows for speech synthesis directly from text or phonemes without the need for multi-stage training pipelines or additional supervision.
— DeepMind (@DeepMind) June 8, 2020
Audio: https://t.co/iQki8MRfgk pic.twitter.com/h8Ye4FfC0M
According to the researchers, although these pipelines have proven capable of realistic, high-fidelity speech synthesis, such modular approaches come with various drawbacks. For instance, they often require supervision at each stage, necessitating expensive annotations to guide the outputs of every stage and thereby failing to reap the full potential rewards of data-driven, end-to-end learning.
Behind EATS
In this research, a neural network (the generator) maps an input sequence of characters to raw audio at 24 kHz. The task is challenging because the input and output are not aligned: it is not known in advance how many output samples will correspond to each input token.
To address these challenges, the generator is divided into two blocks:
- Aligner – The aligner maps the unaligned input sequence to a representation that is aligned with the output but at a much lower sample rate of 200 Hz (a minimal sketch of this idea follows the list).
- Decoder – The decoder upsamples the aligner’s output to the full 24 kHz audio sample rate.
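To make the alignment step concrete, here is a minimal NumPy sketch of the core idea: predict a duration for each token, take cumulative sums to locate each token's centre on the output time grid, and interpolate an output-aligned representation by weighting tokens with a softmax over their distance to each output frame. The function name, shapes and temperature parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_align(token_reprs, durations, out_len, temperature=10.0):
    """Interpolate unaligned token representations onto an output time grid.

    token_reprs: (num_tokens, channels) array of per-token features.
    durations:   (num_tokens,) predicted duration of each token, in frames.
    out_len:     number of output frames (roughly durations.sum()).
    """
    ends = np.cumsum(durations)                  # predicted token end frames
    centres = ends - 0.5 * durations             # predicted token centre frames
    frames = np.arange(out_len)[:, None]         # (out_len, 1)
    # Softmax over tokens of the negative squared frame-to-centre distance,
    # so each output frame becomes a smooth mixture of nearby tokens.
    logits = -((frames - centres[None, :]) ** 2) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ token_reprs                 # (out_len, channels)

# Example: three tokens predicted to last 3, 5 and 2 output frames.
tokens = np.random.randn(3, 8)
aligned = soft_align(tokens, np.array([3.0, 5.0, 2.0]), out_len=10)
print(aligned.shape)  # (10, 8)
```

Because the interpolation is a smooth function of the predicted durations, gradients can flow back into the duration predictor, which is what makes this style of aligner trainable end-to-end.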
The entire generator architecture is a feed-forward convolutional neural network, which makes it well-suited for applications where fast batched inference is important. According to the researchers, the generator is inspired by GAN-TTS, a text-to-speech generative adversarial network that operates on aligned linguistic features. They employed the GAN-TTS generator as the decoder in their model, taking its input from the aligner block instead.
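Since the aligner output is at 200 Hz and the waveform at 24 kHz, the decoder must upsample by a factor of 120 (24,000 / 200). The toy PyTorch sketch below shows one way such an upsampling stack can be built; the class name, layer widths, kernel sizes and the upsample-plus-convolution pattern are illustrative assumptions, not the actual GAN-TTS architecture.

```python
import torch
import torch.nn as nn

# 200 Hz aligner frames -> 24,000 Hz waveform means 120x upsampling,
# and 120 factorises as 5 * 3 * 2 * 2 * 2, so five stages suffice.
class ToyDecoder(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        layers, ch = [], in_channels
        for factor in (5, 3, 2, 2, 2):
            layers += [
                nn.Upsample(scale_factor=factor),   # nearest-neighbour repeat
                nn.Conv1d(ch, ch // 2, kernel_size=3, padding=1),
                nn.ReLU(),
            ]
            ch //= 2
        layers.append(nn.Conv1d(ch, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):               # x: (batch, channels, frames) at 200 Hz
        return torch.tanh(self.net(x))  # (batch, 1, frames * 120) waveform

decoder = ToyDecoder()
frames = torch.randn(1, 256, 200)       # one second of aligner output
print(decoder(frames).shape)            # torch.Size([1, 1, 24000])
```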
Dataset Used
The researchers trained all the models on a dataset of high-quality recordings of human speech, performed by professional voice actors, along with the corresponding text. The voice pool comprises 69 female and male North American English speakers, and the audio clips contain full sentences of lengths varying from under 1 second to 20 seconds, recorded at 24 kHz. The individual voices are unevenly distributed, accounting for a total of 260.49 hours of recorded speech.
Contributions By The Researchers
According to the researchers, the main contributions of this work are as follows:
- The researchers demonstrated that a text-to-speech system can be learnt nearly end-to-end, resulting in high-fidelity, natural-sounding speech approaching the quality of state-of-the-art TTS systems.
- A fully differentiable and efficient feed-forward aligner architecture that predicts the duration of each input token and produces an audio-aligned representation.
- The use of flexible dynamic time warping-based prediction losses to enforce alignment with the input conditioning while allowing the model to capture the timing variability of human speech (a sketch of the underlying recurrence follows this list).
- The overall system achieved a mean opinion score (MOS) of 4.083, approaching the state of the art set by models trained with richer supervisory signals.
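For intuition about the dynamic time warping-based loss, here is a minimal NumPy sketch of the classic DTW recurrence that such a loss builds on: it computes a hard-minimum alignment cost between two spectrogram-like sequences, with a penalty for every warp step. The paper's actual loss is a soft, differentiable relaxation of this idea, so the function below, including its name and warp_penalty parameter, is an illustration rather than the authors' loss.

```python
import numpy as np

def dtw_loss(pred, target, warp_penalty=1.0):
    """Hard-minimum dynamic-time-warping cost between two sequences.

    pred, target: (frames, features) arrays, e.g. spectrogram windows.
    Returns the minimal accumulated frame-wise L1 cost, where every
    insertion/deletion step pays an extra warp_penalty.
    """
    n, m = len(pred), len(target)
    # Pairwise L1 cost between every predicted and target frame: (n, m).
    cost = np.abs(pred[:, None, :] - target[None, :, :]).sum(-1)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1],             # match: advance both sequences
                acc[i - 1, j] + warp_penalty,  # warp: skip a predicted frame
                acc[i, j - 1] + warp_penalty,  # warp: skip a target frame
            )
    return acc[n, m]

a = np.random.randn(12, 5)
b = np.random.randn(10, 5)
print(dtw_loss(a, b))
```

Allowing cheap warp steps is what lets the generated speech stretch or compress timing relative to the reference audio without being penalised as a mismatch, which is exactly the timing variability the researchers set out to preserve.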
Wrapping Up
The researchers stated, “We have presented an adversarial approach to text-to-speech synthesis which can learn from a relatively weak supervisory signal – normalised text or phonemes paired with corresponding speech audio.” They added, “The speech generated by our proposed model matches the given conditioning texts with naturalness approaching the state-of-the-art systems with multi-stage training pipelines or additional supervision.”
Read the paper here.