Researchers at MIT have built a new programming language for high-performance computing called ‘A Tensor Language’ (ATL). ATL allows users to write high-performance computing programs in an optimised form, speeding up their computations.
“Everything in our language is aimed at producing either a single number or a tensor,” said Amanda Liu, a second-year PhD student at the MIT Computer Science and Artificial Intelligence Laboratory and one of the creators of ATL. Tensors, in turn, are generalisations of vectors and matrices. Whereas vectors are one-dimensional objects (often represented by individual arrows) and matrices are familiar two-dimensional arrays of numbers, tensors are n-dimensional arrays, which could take the form of a 3x3x3 array, for instance, or something of even higher (or lower) dimensions.
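The vector–matrix–tensor hierarchy described above can be sketched with NumPy arrays; this is purely an illustration of n-dimensional arrays, not ATL's own syntax, which the article does not show:

```python
import numpy as np

# A vector: a one-dimensional array (often drawn as an arrow)
v = np.array([1.0, 2.0, 3.0])

# A matrix: a two-dimensional array of numbers
m = np.zeros((3, 3))

# A tensor: an n-dimensional array, e.g. the 3x3x3 example from the text
t = np.ones((3, 3, 3))

print(v.ndim)   # 1
print(m.ndim)   # 2
print(t.ndim)   # 3
print(t.shape)  # (3, 3, 3)
```

In this view, vectors and matrices are simply tensors of dimension one and two, which is why a language built around tensors can express all three uniformly.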
Amanda Liu along with the University of California at Berkeley postdoc Gilbert Louis Bernstein, MIT Associate Professor Adam Chlipala, and MIT Assistant Professor Jonathan Ragan-Kelley created ‘A Tensor Language’.
The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has been working on the optimisation of algorithms in the context of high-performance computing. Chlipala, meanwhile, has focused more on the formal (as in mathematically based) verification of algorithmic optimisations.
ATL is the only tensor language with formally verified optimisations, and it has so far been tested on a number of small programs. “One of our main goals, looking ahead, is to improve the scalability of ATL, so that it can be used for the larger programs we see in the real world,” Liu added.