We never thought we would see the day when a neural network generated dance moves for humans! But Indian developer Jaison Saji has built exactly that: a dance generator using Keras, inspired by Cary Huang’s project. One reason the project has attracted so much attention is the viral dancing-celebrity videos one can make with it, though there are clearly more applications beyond that.
DanceNet uses a variational autoencoder (VAE), built from an encoder, a decoder and a loss function, to generate thousands of single dance-pose images, then connects them sequentially into vigorous dance movements by jointly training a long short-term memory (LSTM) network and a mixture density network (MDN).
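The "loss function" a VAE trains against has two parts: how well the decoder reconstructs the input image, and how close the encoder's latent distribution stays to a unit-Gaussian prior. A minimal numpy sketch of that objective (the function names and the squared-error reconstruction term are illustrative choices, not taken from DanceNet's code):

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """VAE objective = reconstruction error + KL divergence.

    The KL term pushes the encoder's Gaussian q(z|x) = N(mu, sigma^2)
    towards the prior N(0, I); that regularisation is what lets the
    decoder later turn *random* samples of z into plausible poses.
    """
    # Pixel-wise squared reconstruction error
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent dims
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps so gradients can flow through mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

With `mu = 0` and `log_var = 0` the encoder already matches the prior, so the KL term vanishes and only reconstruction error remains.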
According to Synced Review, Saji built a DanceNet encoder model that comprises three convolutional layers and one fully connected layer. In the next step, he reconstructed the latent vectors back to the original images with the DanceNet decoder model, which combines one fully connected layer and four convolutional layers with three upsampling layers. Once the VAE is trained, the user can sample any latent variable z and feed it to the decoder, and the model will produce a new dance-pose image. As his last step, Saji used a combination of LSTM and MDN to string the pose images into choreography. The developer stacked three LSTM layers, each followed by dropout to prevent overfitting. The LSTM output is then fed into a fully connected layer and the MDN layer to produce a series of dance moves as the final output. You can access the project here.
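The LSTM-plus-MDN stage described above generates a sequence by repeatedly sampling the next latent pose from a Gaussian mixture and feeding it back in. A numpy sketch of that sampling loop, assuming the model step returns mixture weights, means and log standard deviations (`sample_mdn`, `generate_sequence` and `step_fn` are hypothetical names, not DanceNet's actual API):

```python
import numpy as np

def sample_mdn(pi_logits, mu, log_sigma, rng):
    """Sample the next latent pose vector from MDN parameters.

    pi_logits: (K,)   unnormalised mixture weights
    mu:        (K, D) component means over the latent space
    log_sigma: (K, D) component log standard deviations
    """
    # Softmax over mixture weights, then pick one Gaussian component
    pi = np.exp(pi_logits - pi_logits.max())
    pi /= pi.sum()
    k = rng.choice(len(pi), p=pi)
    # Draw from the chosen component
    return mu[k] + np.exp(log_sigma[k]) * rng.standard_normal(mu.shape[1])

def generate_sequence(step_fn, z0, n_frames, rng):
    """Roll the model forward: each sampled z becomes the next input.

    step_fn stands in for the stacked LSTMs plus the MDN head; every
    sampled latent z would be decoded into a pose image by the VAE decoder.
    """
    z, frames = z0, []
    for _ in range(n_frames):
        pi_logits, mu, log_sigma = step_fn(z)
        z = sample_mdn(pi_logits, mu, log_sigma, rng)
        frames.append(z)
    return np.stack(frames)
```

Sampling from a mixture rather than predicting a single mean is the point of the MDN: dance is multi-modal, and averaging over several plausible next poses would blur them into an implausible one.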
Creating Real-time Dance Movement With Neural Networks
This is not the first time neural networks have been used to create real-time dance movement. GrooveNet, a generative system, can synthesise dance movements for a given audio track in real time. The GrooveNet paper, published in 2017, indicated that its intended application is a public interactive installation in which the crowd can play their own music to interact with a dancing avatar.
The artificial neural networks used for this project were factored conditional restricted Boltzmann machines (FCRBM) and recurrent neural networks (RNN), trained on a small dataset of four synchronised music and motion-capture recordings of dance movements. The initial results showed that the FCRBM can learn to generate dance movements even from such a small dataset, but the paper also noted that the model cannot generalise to music tracks beyond its training data.
Another interesting project, Dance Dance Convolution, was published last year and is based on the rhythm video game Dance Dance Revolution, in which players perform a series of steps on a dance platform in synchronisation with music, as directed by on-screen stepcharts. The paper introduced the task of learning to choreograph: given a raw audio track, the models produce a new stepchart in two stages, deciding when to place steps and which steps to select. For step placement, the researchers combined an RNN and a CNN that take in spectrograms of low-level audio features to predict steps, conditioned on chart difficulty. For step selection, they used an LSTM generative model, which substantially outperformed n-gram and fixed-window approaches. With this pipeline, researchers can create many different charts for the same song.
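The two-stage pipeline above can be sketched in a few lines: placement picks audio frames where a model's per-frame score peaks, and selection samples one step token per placement, conditioned on the steps chosen so far. Both functions below are simplified stand-ins for the paper's learned models (the peak-picking rule and function names are illustrative assumptions):

```python
import numpy as np

def place_steps(onset_scores, threshold=0.5):
    """Step placement: the trained model emits a per-frame score from the
    audio spectrogram; frames whose score is a local peak above the
    threshold become step placements (a simplified stand-in)."""
    placements = []
    for i in range(1, len(onset_scores) - 1):
        s = onset_scores[i]
        if s >= threshold and s >= onset_scores[i - 1] and s > onset_scores[i + 1]:
            placements.append(i)
    return placements

def select_steps(placements, next_step_probs, rng):
    """Step selection: sample one token from the step vocabulary per
    placement, conditioning on the history of tokens chosen so far
    (the role the paper's LSTM plays)."""
    history, chart = [], []
    for t in placements:
        probs = next_step_probs(history)
        token = int(rng.choice(len(probs), p=probs))
        chart.append((t, token))
        history.append(token)
    return chart
```

Because selection is sampled from a distribution rather than taken greedily, running the pipeline repeatedly yields different valid charts for the same song.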
So is dance a field relevant to enterprise artificial intelligence? From learning choreography to optimising dance moves, neural networks are already being used to train and control virtual dancers. There are many possible future applications of neural networks for generating dance moves, including using them to better understand the inherent symmetry of the human body.