Historically, neural networks have been developed primarily to learn mappings between finite-dimensional Euclidean spaces. Neural operators instead learn the mapping from a functional parametric dependence directly to the solution of a partial differential equation (PDE). As a result, they learn an entire family of PDEs, whereas traditional approaches solve only one instance of the equation.
The potential of partial differential equations
PDEs are a class of mathematical equations well suited to describing change in space and time, making them ideal for modelling the physical phenomena that occur in our universe. We can use them to simulate everything from planetary orbits to plate tectonics to air turbulence during flight, enabling practical tasks such as forecasting seismic activity and designing safe planes. However, these computations are extremely difficult and computationally costly, which is why disciplines that make extensive use of PDEs frequently rely on supercomputers to perform the arithmetic. Hence the interest of artificial intelligence researchers in these equations.
Caltech researchers have created a novel deep-learning approach to solving PDEs that is far more accurate than prior deep-learning methods. It is also more generalisable, capable of solving entire families of PDEs, such as the Navier-Stokes equations for any fluid, without retraining. Finally, because it is 1,000 times faster than traditional numerical solvers, researchers are less dependent on supercomputers and can computationally model much larger problems. Rapper MC Hammer even gave the paper a shout-out on Twitter.
How is it done?
Typically, neural networks are trained to approximate functions described in Euclidean space, the standard graph with the x, y, and z axes. However, this time around, the researchers chose to specify the inputs and outputs in Fourier space, a unique sort of graph used to represent wave frequencies. According to Anima Anandkumar, a Caltech professor who directed the research alongside colleagues Andrew Stuart and Kaushik Bhattacharya, their insight from previous work in other domains is that something like air motion can truly be characterised as a combination of wave frequencies. For example, the wind’s overall direction is analogous to a low frequency with extremely long, languid waves at a macro level. In contrast, the micro level’s small eddies are analogous to high frequencies with very short, fast waves.
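The low-frequency/high-frequency picture above can be made concrete with a discrete Fourier transform. The following toy sketch (illustrative signal and mode cutoff chosen arbitrarily, not taken from the paper) splits a 1-D "velocity" signal into its macro-level trend and its micro-level eddies:

```python
import numpy as np

# A toy 1-D "velocity" signal: a slow macro-scale trend plus fast micro-scale eddies.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = 2.0 * np.sin(x) + 0.3 * np.sin(20.0 * x)

# Move to Fourier space: each coefficient is the amplitude of one wave frequency.
coeffs = np.fft.rfft(signal)

# Keep only the lowest few modes (the long, languid waves) ...
low = coeffs.copy()
low[5:] = 0.0
macro = np.fft.irfft(low, n)

# ... and the remainder is the high-frequency part (the small, fast eddies).
micro = signal - macro

# The two bands sum back to the original signal.
assert np.allclose(macro + micro, signal)
```

Here `macro` recovers the slow `2·sin(x)` component and `micro` the fast `0.3·sin(20x)` component, mirroring how wind can be described as a superposition of wave frequencies.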
Source: Fourier neural operator
Advantages of Fourier Neural Operator
Why is this significant? The neural network’s task is substantially simplified because it is far easier to approximate a function in Fourier space than to wrangle with PDEs in Euclidean space. This yields significant gains in both accuracy and efficiency: the system achieves a 30% lower error rate when solving the Navier-Stokes equations than previous deep-learning methods, in addition to a considerable speed advantage over traditional solvers.
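To give a flavour of what "learning in Fourier space" looks like, here is a minimal single-channel sketch of a Fourier-style layer: transform the input to Fourier space, multiply the low modes by learnable complex weights, transform back, and add a local linear term before a nonlinearity. This is a simplified illustration, not the authors' implementation; the real FNO operates on multi-channel tensors and learns the weights by gradient descent.

```python
import numpy as np

def fourier_layer(v, weights, w_local):
    """One simplified Fourier layer: a global spectral convolution
    plus a pointwise linear term, followed by a ReLU.

    v       : (n,) real signal sampled on a uniform 1-D grid
    weights : (k,) complex multipliers for the k lowest Fourier modes
    w_local : scalar weight for the pointwise linear term
    """
    n, k = v.shape[0], weights.shape[0]
    v_hat = np.fft.rfft(v)               # to Fourier space
    out_hat = np.zeros_like(v_hat)
    out_hat[:k] = weights * v_hat[:k]    # act only on the low modes
    spectral = np.fft.irfft(out_hat, n)  # back to physical space
    return np.maximum(spectral + w_local * v, 0.0)

rng = np.random.default_rng(0)
v = rng.standard_normal(128)
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)
out = fourier_layer(v, weights, 0.5)
print(out.shape)  # (128,)
```

Because multiplication in Fourier space corresponds to convolution in physical space, each such layer applies a global convolution at the cost of an FFT, which is a key source of the method's efficiency.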
The entire idea is incredibly ingenious, and it also increases the generalisability of the approach. Previously, a model had to be trained separately for each type of fluid; as the researchers’ experiments demonstrate, this approach needs to be trained only once to handle them all. Though they have not yet extended it to other examples, it should likewise handle any earth composition when solving PDEs for seismic activity, or any material when solving PDEs for thermal conduction.
In this work, the researchers present a novel neural operator that parameterises the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. The Fourier neural operator is the first machine-learning-based approach capable of modelling turbulent flows with zero-shot super-resolution. It solves PDE problems up to three orders of magnitude faster than conventional PDE solvers, and it also outperforms earlier learning-based methods when accuracy is compared at a fixed resolution.
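The zero-shot super-resolution property follows from where the parameters live: the learned weights act on Fourier modes, not on grid points, so the same operator can be evaluated on any discretisation. A toy demonstration (with a fixed, hand-picked spectral filter standing in for learned parameters):

```python
import numpy as np

def apply_spectral_filter(v, weights):
    """Scale the lowest len(weights) Fourier modes of v; zero the rest."""
    v_hat = np.fft.rfft(v)
    k = len(weights)
    v_hat[:k] *= weights
    v_hat[k:] = 0.0
    return np.fft.irfft(v_hat, len(v))

# One fixed set of spectral weights (a stand-in for learned parameters).
weights = np.array([1.0, 0.5, 0.25, 0.125])

# Apply the *same* weights on a coarse grid and on a 4x finer grid.
results = {}
for n in (64, 256):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    results[n] = apply_spectral_filter(np.sin(x), weights)
    # Mode 1 of sin(x) is halved regardless of how finely we sample.
    assert np.allclose(results[n], 0.5 * np.sin(x))
```

A model trained on a coarse grid can therefore be queried on a finer grid with no retraining, which is what "zero-shot super-resolution" refers to.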