Any high school student would guess that a cosine is involved when they see the integral of a sine. Whether or not they understand the reasoning behind these functions, the pattern does the job for them.
This intuition behind calculus is rarely explored. Though Newton and Leibniz developed advanced mathematics to solve real-world problems, most schools today teach differential equations through pattern-matching and symbol manipulation.
The linguistic appeal of mathematics might earn grades in high school, but in the world of research, it is seldom taken seriously.
However, the researchers at Facebook AI have taken this same pattern-matching approach, turned it on its head, and built a model that can solve advanced mathematical equations.
The researchers at Facebook have ingeniously reverse-engineered differential equations to build a dataset large enough for a language model to find the most useful patterns.
The resulting model was able to solve equations that commercial solvers such as Mathematica and Matlab could not.
Neural Machine Translation To The Rescue
Equations can be represented as tree-like structures, split into their fundamental components: addition, power, subtraction, cosine and so on.
In such a tree, constants (such as 3 and 2) and variables (such as x) act as leaves, while operators (such as plus and minus) and functions are the internal nodes that connect the branches.
Organising expressions in this way provides a language-like syntax for equations — numbers and variables are nouns, while operators act as verbs.
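To make this concrete, here is a minimal sketch of such an expression tree in Python. The Node class and the example expression 3x² + cos(2x) are our own illustration, not Facebook AI's actual data structure.

```python
# A hypothetical Node type for the tree representation described above;
# operators and functions become internal nodes, constants and variables leaves.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    value: str                      # operator, function, variable, or constant
    children: List["Node"] = field(default_factory=list)

# 3*x**2 + cos(2*x) as a tree
expr = Node("+", [
    Node("*", [Node("3"), Node("pow", [Node("x"), Node("2")])]),
    Node("cos", [Node("*", [Node("2"), Node("x")])]),
])
```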
In this approach, an NMT model is made to align the patterns of a given tree-structured problem with its matching solution (also expressed as a tree), similar to matching a sentence in one language with its confirmed translation.
This method allowed the researchers to swap out sequences of words for sequences of symbols.
Though the hype around AI has led us to believe it can be applied to almost anything, the reality is complex; mathematical equations certainly are.
Neural networks are quite good at making approximations to classify images or text.
But solving complex mathematical equations demands precision rather than approximation.
The ingenious way in which the researchers leveraged machine translation is as follows:
- Treat complex equations like sentences in a language.
- Break mathematical expressions into a language-like syntax by studying their symbolic structure.
- Train a model to detect patterns in symbolic equations.
- Unpack equations into sequences of symbols that are compatible with seq2seq models (see the sketch below).
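As a rough illustration of that last step, the tree from the earlier sketch can be flattened into a sequence of symbols using prefix (Polish) notation. The to_prefix helper below is our own hypothetical example and reuses the Node class and expr defined above.

```python
from typing import List

# Flatten an expression tree into prefix notation:
# each operator is immediately followed by its operands.
def to_prefix(node: Node) -> List[str]:
    tokens = [node.value]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens

print(to_prefix(expr))
# ['+', '*', '3', 'pow', 'x', '2', 'cos', '*', '2', 'x']
```

Because every operator takes a fixed number of operands, prefix notation needs no parentheses, so the whole tree becomes an unambiguous token sequence that a seq2seq model can consume.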
Flipping The Problem
Theoretically, using a neural machine translation (NMT) model looks straightforward. However, training such models requires a large dataset: the training set should contain examples of solved equations restructured as model-readable expression trees.
Building this dataset required the researchers to incorporate a range of data cleaning and generation techniques.
To do this, the researchers flipped the translation approach: instead of generating problems and finding their solutions, they generated solutions first and worked backwards to the corresponding problems.
This led to the creation of millions of paired examples, with subsets of integration problems as well as first- and second-order differential equations.
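A rough sketch of this flipped generation for integration, using SymPy rather than the authors' actual generator: sample a random expression, differentiate it, and the pair (derivative, expression) becomes a (problem, solution) training example.

```python
# Sketch only: real generators also filter out trivial or duplicate cases.
import random
import sympy as sp

x = sp.Symbol("x")
ATOMS = [x, sp.Integer(2), sp.Integer(3)]
UNARY = [sp.sin, sp.cos, sp.exp]
BINARY = [sp.Add, sp.Mul]

def random_expr(depth: int = 3) -> sp.Expr:
    """Build a small random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(ATOMS)
    if random.random() < 0.5:
        return random.choice(UNARY)(random_expr(depth - 1))
    return random.choice(BINARY)(random_expr(depth - 1), random_expr(depth - 1))

solution = random_expr()          # generate the solution first ...
problem = sp.diff(solution, x)    # ... then differentiate to obtain the problem
print(f"integrate: {problem}  ->  {solution}")
```

Differentiation is mechanical and cheap while integration is hard, which is exactly why generating solutions first yields training pairs that would be expensive to produce the other way around.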
This data was then used to train a seq2seq transformer model with eight attention heads and six layers.
Transformers are widely used for translation tasks. By learning which expression precedes a given function (for integration, the solution is whatever expression differentiates to the problem), the model was able to predict solutions for different kinds of equations.
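A minimal sketch of a comparable model in PyTorch follows; the eight attention heads and six layers match the article, while the vocabulary size, embedding width and dummy inputs are placeholders of our own.

```python
import torch
import torch.nn as nn

VOCAB = 512   # assumed symbol vocabulary size
DIM = 512     # assumed embedding width

embed = nn.Embedding(VOCAB, DIM)
model = nn.Transformer(
    d_model=DIM,
    nhead=8,                # eight attention heads
    num_encoder_layers=6,   # six layers each side
    num_decoder_layers=6,
    batch_first=True,
)
head = nn.Linear(DIM, VOCAB)  # project back to symbol logits

src = embed(torch.randint(0, VOCAB, (1, 32)))  # problem-tree tokens
tgt = embed(torch.randint(0, VOCAB, (1, 32)))  # solution tokens decoded so far
logits = head(model(src, tgt))                 # next-symbol predictions
print(logits.shape)  # torch.Size([1, 32, 512])
```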
When presented with 5,000 unseen expressions, the model achieved, the researchers claim, accuracies of 94% and 81.2% for first- and second-order differential equations respectively.
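It is worth noting how such predictions can be scored: since the output is a symbolic expression, correctness can be checked by mathematical equivalence rather than string equality. The snippet below shows one plausible check using SymPy, not necessarily the authors' exact protocol.

```python
import sympy as sp

x, C = sp.symbols("x C")
reference = sp.sin(x) ** 2 / 2
predicted = -sp.cos(x) ** 2 / 2 + C   # differs only by a constant

# Two antiderivatives are equivalent if their derivatives match,
# i.e. the derivative of their difference simplifies to zero.
is_correct = sp.simplify(sp.diff(predicted - reference, x)) == 0
print(is_correct)  # True
```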
What Does The Future Hold
Last year, Google researchers used machine learning algorithms to solve partial differential equations that would otherwise take aeons to solve. There was also the neural ODEs paper from late 2018, which showed promising results. The role of neural networks in mathematics is slowly on the rise, and they can help researchers find solutions faster than usual. These models are tireless and can be leveraged to expose blind spots in existing mathematical approaches.
The researchers at Facebook AI believe that their approach has a broader range of applications in logic-based fields such as physics, and could result in a software toolkit that helps researchers plug in mathematical equations and find more accurate solutions.