Google Comes One Step Closer To Replicating Human Movement With TossingBot


Ram Sagar

Anyone who has tried to sink a basketball into a hoop can appreciate the complexity involved, all the more so for an amateur. NBA superstar Kobe Bryant reportedly took around 1,000 shots a day throughout his career to keep his skills sharp.

A lot goes into steering an object towards a destination, especially once it has left the hand. Air drag, the weight of the object and other such factors are what a person trains against over and over to get a feel for how much force to impart.

Hard-coding a robot to perform these skills even poorly takes a lot of computational heavy lifting and some ingenious simplifying assumptions before the robot can perform decently in unstructured, real-world situations.

Asking a robot to run, do a cartwheel or throw a pitch would have sounded like a chapter from a generic sci-fi novel a few years ago. With advances in hardware acceleration and the optimisation of machine learning algorithms, techniques like reinforcement learning are now being put to practical use.

The machines of the modern world can now be taught how to learn, adapt and improvise with great tact.

When The Robot Figures Out The Intuition

AI researchers from Google, Columbia University and MIT have taught robots a new skill: tossing things. TossingBot can pick up objects of different shapes and sizes and throw them into a target location, such as a fruit into a basket or a banana peel into a trash can.

The joints of a robot have only so many degrees of freedom, and to achieve a skill like tossing, the synergies between grasping and throwing have to be figured out.

To do this, the researchers used a physics simulator to model the control parameters, then improved the behaviour with the help of deep learning.

Source: Google AI blog

An overhead camera captures how far away the object is, and this visual information is fed into a neural network. Other factors, such as aerodynamics and projectile motion, enter the network through training in the simulator. The features extracted from these inputs are then passed to two other networks: one for grasping and one for throwing.
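In outline, the perception-to-action pipeline can be sketched as below. This is a minimal illustration with made-up feature sizes and random weights, not the actual TossingBot architecture; `perception`, `grasping_head` and `throwing_head` are hypothetical stand-ins for the trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def perception(rgbd_image):
    """Stand-in for the perception network: turns the overhead
    RGB-D image into a flat feature vector (hypothetical 128-d)."""
    return rgbd_image.reshape(-1)[:128]

def grasping_head(features):
    """Stand-in for the grasping network: scores candidate grasps
    (here, 16 hypothetical grasp angles)."""
    w = rng.standard_normal((16, features.size))
    return w @ features

def throwing_head(features):
    """Stand-in for the throwing network: predicts a release
    velocity (vx, vy, vz) for the toss."""
    w = rng.standard_normal((3, features.size))
    return w @ features

# Toy RGB-D frame: 32x32 pixels, 4 channels (colour + depth)
image = rng.standard_normal((32, 32, 4))
feats = perception(image)
grasp_scores = grasping_head(feats)   # shared features -> grasping branch
velocity = throwing_head(feats)       # shared features -> throwing branch
```

The point of the shared feature vector is that the two branches learn jointly: how an object is gripped changes how it flies, so grasping and throwing are trained off the same perception.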

The images captured by the overhead camera are continuously fed into the neural networks, which extract features based on pixel depth and model how densely those features are distributed in a given space.


This integration of physics with deep learning enables faster learning in changing environments. The physics simulator teaches the bot the workings of the real world, and that training then generalises to new scenarios.

The researchers named this symbiosis Residual Physics. As the name suggests, a model based on the laws of projectile ballistics gives the bot a first estimate of the throw, and the bot then learns by how much that estimate misses the target. Minimising the miss is where deep learning comes in.
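The idea can be sketched as follows. This is a simplified illustration under strong assumptions (flat ground, no air drag, fixed 45° release), not the paper's actual model; the `residual_model` argument is a hypothetical stand-in for the trained network that predicts the correction.

```python
import math

def ballistic_release_speed(distance, angle_deg=45.0, g=9.81):
    """Ideal release speed to land at `distance` on flat ground,
    ignoring drag. From the range formula d = v^2 * sin(2*theta) / g,
    we get v = sqrt(d * g / sin(2*theta))."""
    theta = math.radians(angle_deg)
    return math.sqrt(distance * g / math.sin(2.0 * theta))

def throw_speed(distance, residual_model):
    """Residual physics: analytical estimate plus a learned correction.
    `residual_model` stands in for the network that learns to compensate
    for air drag, grip offsets, object shape, and so on."""
    v_physics = ballistic_release_speed(distance)
    delta_v = residual_model(distance)  # learned residual correction
    return v_physics + delta_v

# Example: a hypothetical residual that nudges the speed up slightly,
# as a trained network might do to compensate for drag on a light object.
speed = throw_speed(2.0, residual_model=lambda d: 0.05 * d)
```

The design choice is that the physics model does most of the work, so the network only has to learn a small correction; this is why the hybrid converges much faster than learning the whole throw from scratch.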

The success of this experiment indicates that a machine can learn object-level semantics from its interactions with the physical world; in other words, in a more human-like way. This is another leap towards the realisation of AGI (artificial general intelligence).

Kobe Bryant took 1,000 shots per day throughout his 20-year-long career, whereas the TossingBot required 10,000 shots (14 hours) to achieve 85% accuracy — a skill it won’t ever forget.

With robots developing an intuition for physics, we may soon see them tossing ring buoys or even learning a few parkour tricks.

Read more about the TossingBot here


Copyright Analytics India Magazine Pvt Ltd
