What Are Lagrangian Neural Networks

Ram Sagar

Neural networks can perform well on tasks such as image classification, language translation, and game playing. However, they usually fail at tasks that require human-like abstraction. Activities such as catching a ball mid-air or juggling several balls, which humans have mastered, require an intuitive understanding of how physical bodies behave.

We don’t take time out to calculate the trajectories before hitting the ball. We just know.

Machine learning models lack many basic intuitions about the dynamics of the physical world. A neural network may never grasp human-level abstraction, even after seeing thousands of examples.

The fundamental problem with neural network models is that they struggle to learn basic symmetries and conservation laws.

One solution to this problem is to design neural networks that can learn arbitrary conservation laws. 

Sam Greydanus, a resident at Google Brain, along with his peers Miles Cranmer, Peter Battaglia, David Spergel, Shirley Ho and Stephan Hoyer, has published an interesting paper addressing these challenges. In the next section, we briefly discuss Lagrangian neural networks and why they are significant.


Overview Of Lagrangian Neural Networks

A double pendulum

To demonstrate how a Lagrangian captures the underlying workings of nature, the authors take the example of a double pendulum: a pendulum attached to the end of another pendulum, famous as a textbook example of chaotic motion.

Consider a physical system with coordinates $x_t = (q, \dot{q})$. For example, a double pendulum can be described by the two angles its arms make and their angular velocities.

When a double pendulum is set into motion, these coordinates trace a path from one point to another. So far, this is common sense.

But what induces chaos is the multitude of possible paths these coordinates can take between the start ($x_0$) and the end ($x_1$).

According to Lagrangian mechanics, the action of a system can be written as a function of its potential energy (energy by virtue of position) and kinetic energy (energy by virtue of motion), and it is expressed as follows:
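$$S = \int_{t_0}^{t_1} L(q, \dot{q}) \, dt, \qquad L = T - V$$

Here $T$ is the kinetic energy, $V$ is the potential energy, and their difference $L$ is called the Lagrangian.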

The $S$ in the above expression has a remarkable property: of all possible paths between $x_0$ and $x_1$, exactly one gives a stationary value of $S$, and that path is the one nature actually takes. This is the principle of least action.
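Requiring $S$ to be stationary yields the Euler-Lagrange equations, which turn the Lagrangian into the system's equations of motion:

$$\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0$$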

So, if we know the system's kinetic and potential energies, we can define the action of the body under consideration and eventually predict the path it takes with great accuracy.

However, neural networks are poor at picking up conservation-of-energy concepts, and this is where Lagrangian Neural Networks (LNNs) come into the picture.

“If energy is conserved,” they might say, “when I throw a ball upwards, it will return to my hand with the same speed as when it left.”

via Sam Greydanus’ blog

But these common-sense rules can be difficult to learn straight from data, so a neural network that understands the conservation of energy purely from data could have great implications for robotics and reinforcement learning. With LNNs, the authors have opened up new avenues of research by building the underlying principles of nature into man-made neural networks.

What Is The Significance Of LNNs

A French stamp commemorating Lagrange

Joseph-Louis Lagrange, born in 1736 in Italy, was one of the greatest polymaths who ever lived. By the age of 20, he had already published on the principle of least action in the dynamics of solid and fluid bodies, formulating how to find the path taken by bodies no matter how chaotic their movements. Nearly three centuries later, his work still resonates in the most advanced fields, such as deep learning.

Not that the fundamentals of mathematics have an expiry date, but the fact that centuries-old principles can be applied to make neural networks better than before is a fascinating idea.

When formulating a model of the universe, physicists probe it for its symmetries. These symmetries correspond to conservation laws, such as those of energy and momentum, which neural networks struggle to learn.

To address these shortcomings, Greydanus and his peers introduced a class of models called Hamiltonian Neural Networks (HNNs) last year, which can learn such invariant quantities directly from (pixel) data. Now, going a step beyond HNNs, Lagrangian Neural Networks (LNNs) learn Lagrangian functions straight from data. HNNs can learn invariants too, but unlike HNNs, LNNs do not require canonical coordinates.
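Concretely, an LNN parameterises the scalar $L$ with a neural network and recovers the accelerations by expanding the Euler-Lagrange equations:

$$\ddot{q} = \left(\nabla_{\dot{q}} \nabla_{\dot{q}}^{\top} L\right)^{-1} \left[\nabla_{q} L - \left(\nabla_{q} \nabla_{\dot{q}}^{\top} L\right) \dot{q}\right]$$

Below is a minimal sketch of this idea in JAX, assuming a small MLP parameterises the Lagrangian. The helper names, architecture, and hyperparameters here are illustrative, not taken from the paper's code.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialise weights and biases for a small fully connected network."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / m),
                       jnp.zeros(n)))
    return params

def mlp(params, x):
    """Forward pass; softplus keeps the output twice differentiable."""
    for w, b in params[:-1]:
        x = jax.nn.softplus(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()  # scalar output

def lagrangian(params, q, q_dot):
    """Learned scalar Lagrangian L(q, q_dot)."""
    return mlp(params, jnp.concatenate([q, q_dot]))

def accelerations(params, q, q_dot):
    """Solve the Euler-Lagrange equations for q_ddot, the core LNN step."""
    # Hessian of L with respect to the velocities
    H = jax.hessian(lagrangian, argnums=2)(params, q, q_dot)
    # Gradient of L with respect to the coordinates
    grad_q = jax.grad(lagrangian, argnums=1)(params, q, q_dot)
    # Mixed second derivatives: d/dq of dL/dq_dot
    mixed = jax.jacobian(jax.grad(lagrangian, argnums=2), argnums=1)(
        params, q, q_dot)
    # Small jitter on the Hessian for numerical stability (an assumption
    # of this sketch, not a detail taken from the paper)
    H = H + 1e-6 * jnp.eye(H.shape[0])
    return jnp.linalg.solve(H, grad_q - mixed @ q_dot)

def loss(params, q, q_dot, q_ddot_true):
    """Train by matching the accelerations observed in the data."""
    return jnp.mean((accelerations(params, q, q_dot) - q_ddot_true) ** 2)

# A double pendulum has two angles and two angular velocities.
key = jax.random.PRNGKey(0)
params = init_mlp(key, [4, 128, 128, 1])
q, q_dot = jnp.array([0.3, 0.5]), jnp.array([0.0, 0.1])
print(accelerations(params, q, q_dot))
```

Since everything is written in JAX, `jax.grad(loss)` gives the gradients needed to train the network; the paper reports that a model trained this way on the double pendulum approximately conserves energy, unlike a baseline network.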

The paper's comparison table pits LNNs against other related work, and LNNs clearly come out ahead. We will have to wait and see what other breakthroughs they have in store for the scientific community.

Know more about Lagrangian Neural Networks here.
