Recently, developers from Google’s Magenta introduced Lo-Fi Player, a virtual room in the browser that lets you play with beats from various instruments. Lo-Fi Player is essentially a music-generation tool that lets you pick elements and create music of your choice.
In a blog post, the developers of the system wrote that anyone who has ever listened to the popular Lo-Fi Hip Hop streams while working, and imagined being the producer, can now create their own music and vibe.
The developers chose Lo-Fi Hip Hop because it is a popular genre with a relatively simple musical structure. According to them, this limited flexibility helped ensure that the generated music always makes musical sense.
How It Works
In Lo-Fi player, you can build your own custom music room and the music can be played by interacting with elements in the room. You can both listen to music and share the room with others.
Clicking the start button begins playback. Once the music is playing, you can change the tune, bass, tempo and so on in real time by tinkering with the objects in the room.
Not only can you customise the music, but you can also change other features, such as the view outside the window. The view corresponds to the background sound in the music track, and you can change both the visuals and the music just by clicking on the window.
According to the developers, the goal of the system is not to replace existing Lo-Fi Hip Hop producers or streams, but to bring a prototype of an interactive music piece to the genre and help people appreciate the art.
The ML Behind
The developers incorporated several music machine learning models built by the Magenta team to make the experience more novel and dynamic.
For instance, the TV in the centre of the virtual music room represents MusicVAE, a machine learning model that lets a user create palettes for blending and exploring musical scores. Clicking on the TV creates new melodies (chill, sad, dense and so on) by recombining existing ones. Under the hood, MusicVAE is a recurrent variational autoencoder: it encodes melodies into a latent space and produces new ones by sampling from, or interpolating within, that space.
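The blending idea can be sketched abstractly: each melody is encoded to a latent vector, and new melodies come from decoding points that lie between two such vectors. Below is a minimal numpy sketch of linear latent-space interpolation; the latent codes are invented placeholders, not outputs of Magenta's actual encoder.

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps):
    """Linearly interpolate between two latent vectors.

    In a model like MusicVAE, decoding each intermediate vector yields
    a melody that gradually morphs from melody A into melody B.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

# Hypothetical 4-dimensional latent codes for two melodies.
z_chill = np.array([0.0, 1.0, 0.0, 1.0])
z_sad = np.array([1.0, 0.0, 1.0, 0.0])

path = interpolate_latents(z_chill, z_sad, num_steps=5)
print(path.shape)  # (5, 4): five latent vectors along the blend
print(path[2])     # midpoint: [0.5 0.5 0.5 0.5]
```

Each row of `path` would then be fed to the decoder to produce one melody in the blend.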
The radio beside the TV represents MelodyRNN, a machine learning model that applies language modelling to melody generation using an LSTM. The radio in the virtual room acts as a small “automatic loom” that can be used to weave new melodies.
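The core idea behind melody language modelling — treat notes like words and sample the next note conditioned on what came before — can be illustrated with a toy sketch. The transition probabilities below are invented for illustration; the real MelodyRNN gets its next-note distribution from an LSTM conditioned on the whole history.

```python
import numpy as np

# Toy next-note distribution: P(next_pitch | previous_pitch).
# MIDI pitches 60/62/64 are C4/D4/E4; probabilities are made up.
TRANSITIONS = {
    60: {62: 0.5, 64: 0.3, 60: 0.2},
    62: {64: 0.6, 60: 0.4},
    64: {60: 0.7, 62: 0.3},
}

def generate_melody(seed_pitch, num_notes, rng):
    """Autoregressively sample a melody, one note at a time.

    A melody language model works the same way, except the
    distribution over the next note comes from a trained network
    rather than a one-step lookup table.
    """
    melody = [seed_pitch]
    for _ in range(num_notes - 1):
        dist = TRANSITIONS[melody[-1]]
        pitches = list(dist.keys())
        probs = list(dist.values())
        melody.append(int(rng.choice(pitches, p=probs)))
    return melody

rng = np.random.default_rng(0)
print(generate_melody(60, 8, rng))  # an 8-note melody starting on C4
```

Sampling with a temperature parameter, as MelodyRNN supports, simply sharpens or flattens this distribution before each draw.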
Speaking of machine learning models, the developers stated, “We want to show that something as simple as applying MusicVAE to short melodies can produce pleasing results when done in a creative, fun context.”
Sharing On YouTube
As the second phase of this project, the developers transformed the Lo-Fi Player into an interactive YouTube stream that will run for a few weeks. Rather than clicking on the elements in the room, viewers type commands into the Live Chat.
The commands let viewers change the colour of the room, change the melody, switch instruments, and more. Every time the beat loops, the system randomly selects a comment from the live chat and uses it to modify the music.
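That loop can be sketched as a simple dispatcher: on each beat loop, pick one chat message at random and, if it parses as a known command, apply it to the room state. The command names and state fields below are hypothetical, not the stream's actual protocol.

```python
import random

# Hypothetical room state; the real stream's parameters may differ.
state = {"color": "blue", "melody": 0, "instrument": "piano"}

# Map command keywords to handlers that mutate the state.
COMMANDS = {
    "color": lambda s, arg: s.update(color=arg),
    "melody": lambda s, arg: s.update(melody=int(arg)),
    "instrument": lambda s, arg: s.update(instrument=arg),
}

def apply_random_comment(comments, state, rng=random):
    """Pick one chat comment at random; if it matches a known
    command ("color red", "melody 3", ...), modify the room state.
    Unrecognised comments are ignored."""
    comment = rng.choice(comments)
    parts = comment.strip().split(maxsplit=1)
    if len(parts) == 2 and parts[0] in COMMANDS:
        COMMANDS[parts[0]](state, parts[1])
    return state

chat = ["hello!", "color red", "melody 3", "nice beat"]
random.seed(1)
apply_random_comment(chat, state)
print(state)
```

Calling `apply_random_comment` once per beat loop reproduces the behaviour described above: most messages are ignored, and well-formed commands steer the music.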
The developers stated, “This is very much a first attempt at ML-powered interactive YouTube streaming. It’s pretty primitive, but we hope it’s still fun to set up a room and let others modify the music being made.”
Click here to experience the Lo-Fi Virtual Music Room.