AI Has Solved A Key Mathematical Puzzle For Understanding Our Reality

Partial differential equations can model a wide range of phenomena, from planetary motion to plate tectonics.

Historically, neural networks have been developed primarily to learn mappings between finite-dimensional Euclidean spaces. Neural operators instead learn the mapping from any functional parametric dependence directly to the solution of a partial differential equation (PDE). As a result, they learn an entire family of PDEs, whereas traditional approaches solve only one instance of the equation.

The potential of partial differential equations

PDEs are a class of mathematical equations that excel at describing change in space and time, making them ideal for modelling the physical phenomena of our universe. We can use them to simulate everything from planetary orbits to plate tectonics to air turbulence in flight, enabling practical tasks such as forecasting seismic activity and designing safe aircraft. However, these computations are extremely difficult and computationally costly, which is why disciplines that make extensive use of PDEs frequently rely on supercomputers to perform them. This is why the artificial intelligence community has developed an interest in these equations.
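To see why classical PDE solving is costly, consider a minimal sketch: the 1-D heat equation solved with an explicit finite-difference scheme. Every value here (diffusivity, grid size, time step) is an illustrative assumption; the point is that a solver like this must re-run from scratch for every new initial condition or material parameter, and real 3-D problems multiply the cost enormously.

```python
import numpy as np

# Minimal sketch: solve the 1-D heat equation u_t = alpha * u_xx
# with an explicit (forward-time, centred-space) finite-difference scheme.
alpha = 0.01              # thermal diffusivity (illustrative value)
nx, nt = 101, 500         # grid points in space, steps in time
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha  # time step chosen to satisfy the stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)     # initial temperature profile, zero at boundaries

for _ in range(nt):
    # Second spatial derivative via central differences (interior points only).
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# The exact solution decays as exp(-alpha * pi**2 * t); check agreement.
t_final = nt * dt
exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * t_final)
print(float(np.max(np.abs(u - exact))))  # small discretisation error
```

Note the stability constraint on `dt`: refining the spatial grid forces a quadratically smaller time step, which is one reason high-resolution simulations demand supercomputers.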

Novel Strategy

Caltech researchers have created a novel deep-learning strategy for solving PDEs that is far more accurate than prior deep-learning methods. It is also more generalisable, capable of solving entire PDE families, such as the Navier-Stokes equations for any fluid, without retraining. Finally, because it is 1,000 times faster than traditional numerical solvers, researchers no longer need as much supercomputer time and can computationally model much larger problems. Rapper MC Hammer even gave the paper a shout-out on Twitter.

How is it done?

Typically, neural networks are trained to approximate functions described in Euclidean space, the standard graph with x, y, and z axes. This time, however, the researchers chose to specify the inputs and outputs in Fourier space, a representation that expresses functions in terms of wave frequencies. According to Anima Anandkumar, a Caltech professor who directed the research alongside colleagues Andrew Stuart and Kaushik Bhattacharya, their insight from previous work in other domains is that something like air motion can be characterised as a combination of wave frequencies. At the macro level, the wind’s overall direction is analogous to a low frequency with very long, languid waves; at the micro level, small eddies are analogous to high frequencies with very short, fast waves.
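This decomposition is easy to see concretely. The sketch below builds an illustrative 1-D "wind" signal as a slow, large-amplitude wave (the macro flow) plus a fast, small-amplitude ripple (the eddies); the FFT separates the two cleanly. The signal and its component frequencies are invented for illustration.

```python
import numpy as np

# A toy "wind" signal: macro flow (low frequency, large amplitude)
# plus small eddies (high frequency, small amplitude).
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = 3.0 * np.sin(2 * x) + 0.5 * np.sin(40 * x)

# The FFT expresses the signal as a combination of wave frequencies;
# for sin(k*x) sampled this way, the energy lands in bin k.
magnitudes = np.abs(np.fft.fft(signal)) / (n / 2)
half = magnitudes[: n // 2]                 # positive frequencies only
top = sorted(np.argsort(half)[-2:])         # the two dominant bins
print([int(i) for i in top])                # → [2, 40]
```

The two recovered bins correspond exactly to the low-frequency macro wave and the high-frequency eddies mixed into the signal.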

Source: Fourier neural operator

Advantages of Fourier Neural Operator

Why is this significant? The neural network’s task is substantially simplified because it is far easier to approximate a function in Fourier space than to wrangle with PDEs in Euclidean space. This yields significant accuracy and efficiency benefits: in addition to a considerable speed advantage over traditional methods, their system achieves a 30% lower error rate when solving the Navier-Stokes equations than previous deep-learning methods.

The entire idea is ingenious, and it also increases the generalisability of the strategy. Previously, a model had to be trained independently for each type of fluid; as the researchers’ experiments demonstrate, this approach only needs to be trained once to handle them all. Though they have not yet attempted it, the approach should likewise handle any earth composition when solving PDEs for seismic activity, or any material when solving PDEs for thermal conductivity.


The researchers present a novel neural operator that explicitly parameterises the integral kernel in Fourier space, yielding an expressive and efficient architecture. The Fourier neural operator is the first machine learning-based approach capable of modelling turbulent flows with zero-shot super-resolution. It solves PDE problems up to three orders of magnitude faster than typical PDE solvers, and it outperforms earlier learning-based methods at fixed resolution.
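The core building block can be sketched in a few lines. The following is a simplified 1-D illustration of a Fourier layer under stated assumptions: the function names, shapes, and the single scalar pointwise weight are inventions for clarity, not the authors' implementation (which stacks several such layers with learned channel-mixing weights). The essential moves are there: transform to Fourier space, apply a learned complex weight to the lowest `modes` frequencies, truncate the rest, transform back, and add a pointwise linear path.

```python
import numpy as np

def fourier_layer(u, spectral_w, pointwise_w, modes):
    """Simplified 1-D Fourier layer. u: (n,) real samples on a uniform grid."""
    u_hat = np.fft.rfft(u)                        # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = spectral_w * u_hat[:modes]  # learned weights on low modes
    spectral_part = np.fft.irfft(out_hat, n=len(u))
    return np.maximum(spectral_part + pointwise_w * u, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
n, modes = 64, 8
u = np.sin(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
spectral_w = rng.normal(size=modes) + 1j * rng.normal(size=modes)
pointwise_w = 0.5

v = fourier_layer(u, spectral_w, pointwise_w, modes)
print(v.shape)  # the layer maps a function on the grid to another function
```

Because the learned weights act only on a fixed number of Fourier modes, the same weights apply unchanged to a function sampled on a finer grid, which is what makes zero-shot super-resolution possible.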

Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.

