
Top 10 Fun Machine Learning Experiments By Google Released in 2020

Ambika Choudhury

Experiments with Google is an exciting website where Google developers, as well as creators around the globe, create intuitive experiments based on machine learning and other techniques. These projects push the boundaries of technology, art, design and more. Currently, the website includes more than 1,500 experiments.

Here, we have curated a list of ten fun machine learning experiments released in 2020.

Scroobly

Launch: December

About: Powered by TensorFlow.js, Scroobly uses the Facemesh and PoseNet machine learning models to bring doodles to life. The system maps live motion onto user-created doodles, updating the on-screen animation as the user moves. Built with TensorFlow.js, Three.js and React.js, Scroobly is a creative tool that lets users become digital animators without much coding or design experience.
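Scroobly's own source isn't reproduced on the experiment page, but the core tracking loop can be sketched with the same TensorFlow.js PoseNet model it is built on. In the minimal sketch below, the doodle-rigging and rendering steps are replaced by a console log of the detected keypoints.

```typescript
// Minimal sketch (assumption, not Scroobly's code): track body keypoints from a
// webcam video element with TensorFlow.js PoseNet; each keypoint would normally
// drive a joint of the user's doodle instead of being logged.
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

async function trackPose(video: HTMLVideoElement): Promise<void> {
  const net = await posenet.load();                     // MobileNet-based PoseNet
  const loop = async () => {
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    for (const kp of pose.keypoints) {
      if (kp.score > 0.5) {
        // e.g. "leftWrist: (312, 188)" -- Scroobly would move the doodle's wrist here.
        console.log(`${kp.part}: (${kp.position.x.toFixed(0)}, ${kp.position.y.toFixed(0)})`);
      }
    }
    requestAnimationFrame(loop);                        // re-estimate every frame
  };
  loop();
}
```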



Know more here.

Look to Speak

Launch: December

About: Look to Speak is an Android app that enables people to use their eyes to select pre-written phrases and have them spoken aloud. Built with TensorFlow and the Android SDK, this machine learning-based app works only when the front-facing camera can see the user's eyes, and users trigger selections by looking away from the device. The app comes preloaded with a selection of phrases that can later be edited in the phrasebook.

Know more here.

Infinite Bad Guy

Launch: November

About: Built with TensorFlow, Infinite Bad Guy is an 'infinite music video experiment' that lets users watch the many covers of a single song strung together. Machine learning keeps all the covers on the same beat, so users can jump from video to video seamlessly. According to a blog post, this creates endless possible combinations, meaning every play-through is unique.

Know more here.

BYOTM (Bring Your Own Teachable Machine)

Launch: October

About: BYOTM, or Bring Your Own Teachable Machine, is an app that lets users send text messages to family and friends using their own personalised Teachable Machine speech recogniser. To set it up, users first train an audio model in Teachable Machine with two classes and upload the model to the cloud. They then grab the shareable link, bring it to BYOTM, and fill in the output fields with the phone numbers and messages they want to send. Once ready, users click "Start" and use the trigger words from their model to send messages. The app is built with Teachable Machine and TensorFlow.js.
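BYOTM's code isn't shown on the experiment page, but Teachable Machine's audio export is consumed in the browser through TensorFlow.js' speech-commands package, so the recognition side can be sketched roughly as below. The model URL is a hypothetical placeholder, and the actual message-sending step is only hinted at in a comment.

```typescript
// Rough sketch (assumption, not BYOTM's code): load a Teachable Machine audio
// model with the TensorFlow.js speech-commands package and listen for trigger words.
import '@tensorflow/tfjs';
import * as speechCommands from '@tensorflow-models/speech-commands';

// Hypothetical placeholder for a model exported from Teachable Machine.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/XXXX/';

async function listenForTriggerWords(): Promise<void> {
  const recognizer = speechCommands.create(
    'BROWSER_FFT',                      // use the browser's FFT for audio features
    undefined,                          // no built-in vocabulary; we load our own model
    MODEL_URL + 'model.json',
    MODEL_URL + 'metadata.json'
  );
  await recognizer.ensureModelLoaded();
  const labels = recognizer.wordLabels();   // the classes trained in Teachable Machine

  recognizer.listen(async result => {
    const scores = result.scores as Float32Array;
    let best = 0;
    for (let i = 1; i < scores.length; i++) {
      if (scores[i] > scores[best]) best = i;
    }
    console.log(`Heard trigger word "${labels[best]}"`);
    // BYOTM would look up the phone number and message mapped to this label
    // and send the text here.
  }, { probabilityThreshold: 0.85, overlapFactor: 0.5 });
}
```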

Know more here.

LipSync by YouTube

Launch: September

About: LipSync is an AI-powered challenge by YouTube that rates how closely users can lip-sync to a song. To do this, the experiment uses Google's AI technology along with TensorFlow.js to detect landmarks on the user's face, with the machine learning running entirely in the browser.
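LipSync's scoring logic isn't public, but the in-browser facial landmark detection it relies on can be sketched with the TensorFlow.js facemesh model. The lip-landmark indices and the simple 'mouth openness' measure below are illustrative assumptions rather than LipSync's actual metric.

```typescript
// Illustrative sketch (assumption, not LipSync's code): track lip landmarks
// in-browser with the TensorFlow.js facemesh model and derive a crude
// "mouth openness" signal from two inner-lip points.
import '@tensorflow/tfjs';
import * as facemesh from '@tensorflow-models/facemesh';

// Assumed inner upper/lower lip indices in the 468-point face mesh.
const UPPER_LIP = 13;
const LOWER_LIP = 14;

async function trackMouth(video: HTMLVideoElement): Promise<void> {
  const model = await facemesh.load();                 // load once, reuse per frame
  const loop = async () => {
    const faces = await model.estimateFaces(video);
    if (faces.length > 0) {
      const mesh = faces[0].scaledMesh as number[][];  // 468 [x, y, z] landmarks
      const openness = Math.abs(mesh[LOWER_LIP][1] - mesh[UPPER_LIP][1]);
      console.log(`mouth openness: ${openness.toFixed(1)} px`);
    }
    requestAnimationFrame(loop);
  };
  loop();
}
```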

Know more here.

Color Hunt

Launch: July 

About: Color Hunt is an experiment that uses a 'colorization algorithm' based on Art Palette, another of Google's ML experiments, which finds three to six dominant colours in a painting and decomposes it accordingly. With Color Hunt, one can reproduce the palettes of famous artists: simply launch the experiment, pick a painting, and capture the colours in your surroundings to create your own version of the artwork.
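The exact colour-decomposition algorithm behind Color Hunt and Art Palette isn't published on the experiment page. As a rough stand-in, a small palette of dominant colours can be pulled out of an image's pixels with a naive k-means clustering, as in the sketch below.

```typescript
// Illustrative stand-in (assumption, not the experiment's algorithm): extract k
// dominant colours from a list of RGB pixels with a naive k-means clustering.
type RGB = [number, number, number];

function dominantColors(pixels: RGB[], k = 5, iterations = 10): RGB[] {
  // Initialise centroids from evenly spaced pixels.
  let centroids: RGB[] = Array.from({ length: k }, (_, i) =>
    pixels[Math.floor((i * pixels.length) / k)]
  );
  for (let it = 0; it < iterations; it++) {
    const sums = Array.from({ length: k }, () => [0, 0, 0, 0]); // r, g, b, count
    for (const p of pixels) {
      // Assign each pixel to its nearest centroid (squared Euclidean distance).
      let best = 0;
      let bestDist = Infinity;
      centroids.forEach((c, i) => {
        const d = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 + (p[2] - c[2]) ** 2;
        if (d < bestDist) { bestDist = d; best = i; }
      });
      sums[best][0] += p[0]; sums[best][1] += p[1]; sums[best][2] += p[2]; sums[best][3]++;
    }
    // Move each centroid to the mean of the pixels assigned to it.
    centroids = sums.map((s, i) =>
      s[3] > 0 ? ([s[0] / s[3], s[1] / s[3], s[2] / s[3]] as RGB) : centroids[i]
    );
  }
  return centroids.map(c => c.map(Math.round) as RGB);
}
```

In a browser, the pixel list would come from drawing the painting onto a canvas and reading it back with getImageData.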

Know more here.

Assisted Melody

Launch: July

About: Assisted Melody is an experiment that lets users create melodies with the help of maestro composers. Users compose their own tune on a virtual piano keyboard and make it sound like one of Bach's harmonies. To develop the experiment, the developers trained a machine learning model, Magenta's Coconet, on Bach's chorale cantatas so that it harmonises short melodies with a Bach twist. It is built with Tone.js, JavaScript, Magenta.js, the Web Audio API, ES6, TensorFlow.js, Pixi.js and WebGL.
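Coconet is available directly in Magenta.js, so a minimal Bach-style harmonisation in the spirit of Assisted Melody can be sketched as below. The checkpoint URL, the three-note melody and the playback step are illustrative assumptions, not code taken from the experiment itself.

```typescript
// Minimal sketch (assumption, not Assisted Melody's code): harmonise a short
// melody in Bach's style with the Coconet model shipped in Magenta.js.
import * as mm from '@magenta/music';

// Publicly hosted Bach checkpoint for Coconet (assumed; swap in your own if needed).
const COCONET_CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/coconet/bach';

async function harmonize(): Promise<void> {
  const model = new mm.Coconet(COCONET_CHECKPOINT);
  await model.initialize();

  // A short quantised melody (C4, E4, G4) as a Magenta NoteSequence.
  const melody: mm.INoteSequence = {
    notes: [
      { pitch: 60, quantizedStartStep: 0, quantizedEndStep: 4 },
      { pitch: 64, quantizedStartStep: 4, quantizedEndStep: 8 },
      { pitch: 67, quantizedStartStep: 8, quantizedEndStep: 12 },
    ],
    quantizationInfo: { stepsPerQuarter: 4 },
    totalQuantizedSteps: 12,
  };

  // Coconet fills in the remaining voices around the given melody.
  const harmonized = await model.infill(melody, { temperature: 0.99 });
  const playable = mm.sequences.unquantizeSequence(
    mm.sequences.mergeConsecutiveNotes(harmonized), 80  // 80 qpm for playback
  );
  new mm.Player().start(playable);                       // play back in the browser
}
```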


Know more here.

An Ocean of Books

Launch: July

About: An Ocean of Books is a new way to explore literature through a fantasy map. In this fantasy world, each island represents an author and each city a book, covering a selection of more than 100,000 authors and 140,000 books. To build the map, the developers calculated distances between authors based on how strongly they are related across the web, and then projected these values into map coordinates using a machine learning technique called Uniform Manifold Approximation and Projection (UMAP). The experiment is built with JavaScript, ES6, WebGL, React Map GL, React.js and Mapbox GL.
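UMAP itself can be run in the browser through the open-source umap-js library, so the projection step can be sketched roughly as below. The author feature vectors here are random placeholders rather than the experiment's web-relatedness data.

```typescript
// Rough sketch (assumption, not the experiment's code): project high-dimensional
// author vectors down to 2-D map coordinates with UMAP via the umap-js library.
import { UMAP } from 'umap-js';

// Placeholder data: pretend each author is described by a 50-dimensional vector.
const authors: number[][] = Array.from({ length: 1000 }, () =>
  Array.from({ length: 50 }, () => Math.random())
);

const umap = new UMAP({
  nComponents: 2,   // project down to x/y coordinates for the map
  nNeighbors: 15,   // how much local structure to preserve
  minDist: 0.1,     // how tightly similar points may cluster
});

// Each author gets an [x, y] position; related authors end up as neighbouring islands.
const coordinates: number[][] = umap.fit(authors);
console.log(coordinates[0]);
```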

Know more here.

Sounds in Space

Launch: July

About: Sounds in Space is an experiment that lets users curate location-based audio augmented reality experiences. Using a mobile device, users place pre-recorded sounds at specific locations, and the experiment then lets them navigate that environment, transforming what they hear as they move through the space. It is built with ARCore and Unity.

Know more here.

Draw to Art: Shape Edition

Launch: July

About: Draw to Art: Shape Edition was built by Google Creative Lab London in collaboration with the Google Arts & Culture Lab in Paris to create a museum experience powered by the latest machine learning technology. Users simply launch the experiment and doodle something using geometric shapes; the experiment then showcases artworks from the Google Arts & Culture collection that are similar to the doodle.

Know more here.
